Psychotropic Serendipities

Serendipities abound in the history of medicine, in our own time no less than in the past.  In the 15 years that followed the end of World War II, a period of special interest to me, the discovery of what we now consider modern psychiatric (or psychotropic) drugs is a striking case in point.

Researchers in the final years of the war and immediately thereafter were hardly looking for psychotropics.  They were looking for other things:  improved antihistamines; preservatives that would permit penicillin to hold up during transport to troops in Europe and Asia; and antibiotics effective against penicillin-resistant microorganisms like the tubercle bacillus that caused tuberculosis.

When Frank Berger, a Czechoslovakian bacteriologist, fled to England in 1939, he found work as a refugee camp physician.  Then, in 1943, he was hired by a government laboratory in London and joined in the work that engaged so many British scientists of the time:  the purification and industrial production of penicillin.  Berger’s particular assignment was the search for a penicillin preservative; he was especially interested in finding an agent that would prevent gram-negative bacteria from breaking down penicillin (via their enzyme, penicillinase) during shipment.  And with the synthesis of mephenesin in 1945, he achieved success – and then some.  Mephenesin not only preserved penicillin but, in small-scale animal toxicity trials, revealed something else:  On injection into mice, rats, and guinea pigs, the preservative produced deep muscle relaxation, a sleep-like state that Berger described in 1946 as “tranquillization.”[1]

Berger emigrated to the United States in 1947 and, after a brief stint at the University of Rochester Medical School, became Director of Laboratories at Carter-Wallace in Cranbury, New Jersey.  There, joined by the chemist Bernard Ludwig, he developed a more potent and more slowly metabolized relative of mephenesin.  The drug was meprobamate, the first minor tranquilizer, for which a patent was finally granted in 1955.  Released by Carter-Wallace as Miltown and by Wyeth (a licensee) as Equanil, it took the American market by storm.  In 1956, it leaped from less than 1% to nearly 70% of new tranquilizer prescriptions; in 1957 more than 35 million prescriptions were sold, the equivalent of one per second.  Meprobamate single-handedly transformed American medicine by transmuting the everyday stresses and strains of Everyman (and Everywoman) into pharmacologically treatable anxiety.  For general practitioners in particular it was a godsend.  “If generalists could not psychoanalyze their troubled patients,” the historian David Herzberg has observed, “they could at least ease worries with a pill, possibly preventing a minor condition from worsening into serious mental illness.”[2]  Not bad for a penicillin preservative.

In 1952, at the very time Berger was observing the “tranquillization” of small rodents injected with meprobamate, Henri-Marie Laborit, a French naval surgeon working at the Val-de-Grâce military hospital in Paris, published his first article on the usefulness of chlorpromazine (CPZ), a chlorinated form of the antihistamine promazine, in surgical practice.  Laborit, who was working on the development of “artificial hibernation” as an anesthetic technique, found that the drug not only calmed surgical patients prior to the administration of anesthesia but also prevented them from lapsing into shock during and after their operations.  The drug had been synthesized by the Rhône-Poulenc chemist Paul Charpentier at the end of 1950.  Charpentier was searching for an improved antihistamine, but he quickly saw the drug’s possible usefulness as a potentiator of general anesthesia,[3] which indeed it proved to be.

Impressed with the drug’s effectiveness (in combination with other drugs as a “lytic cocktail”) in inducing relaxation – what he termed “euphoric quietude” – and in preventing shock, Laborit encouraged his colleague Joseph Hamon to try it on psychiatric patients.  It was subsequently taken up by the French psychiatrists Jean Delay and Pierre Deniker, who administered it to psychiatric patients at the Sainte-Anne mental hospital in Paris.  In six journal articles published in the spring and summer of 1952, they reported encouraging results, characterizing their patients’ slowing of motor activity and emotional indifference as a “neuroleptic syndrome” (from Greek roots meaning “that which seizes the nerve”).  Thus was born, in retrospect, the first major tranquilizer, a drug far more effective than its predecessors (including morphine and scopolamine in combination) in controlling extreme agitation and relieving psychotic delusions and hallucinations.[4]

But only in retrospect.  At the time of the preliminary trials, the primary application of chlorpromazine remained unclear.  Rhône-Poulenc conducted clinical trials for a number of applications of the drug:  to induce “hibernation” during surgery; as an anesthetic; as an antinausea drug (antiemetic) for seasickness; and as a treatment for burns, stress, infections, obesity, Parkinson’s disease, and epilepsy.  When Smith, Kline & French became the American licensee of the drug in early 1953, it planned to market it to American surgeons and psychiatrists alike, and it also took pains to license the drug as an antiemetic.  Only at the end of 1953 did it recognize the primary psychiatric use of the drug, which it released in May 1954 as Thorazine.

Of course, the birth of modern antibiotic therapy begins with penicillin – the first of the wartime “miracle drugs.”  And a miracle drug it was, with an antibacterial spectrum that encompassed strep and staph infections, pneumonia, syphilis, and gonorrhea.  But penicillin did not touch the tubercle bacillus, the organism responsible for tuberculosis.

The first wonder drug effective against TB was streptomycin, an antibiotic derived from a soil-dwelling actinomycete, discovered by Selman Waksman and his doctoral student Albert Schatz at the Rutgers Agricultural Experiment Station in 1943.  Eight years later, in 1951, chemists working at Bayer Laboratories in Wuppertal, Germany, at Hoffmann-La Roche in Basel, Switzerland, and at the Squibb Institute for Medical Research in New Brunswick, New Jersey simultaneously discovered a drug that was not only more effective in treating TB than streptomycin; it was also easier to administer and less likely to have serious side effects.  It was isoniazid, the final wonder drug in the war against TB.  In combination with streptomycin, it was more effective than either drug alone and less likely to elicit treatment-resistant strains of the tubercle bacillus.

But here’s the thing:  A side effect of isoniazid and its close derivative iproniazid was their mood-elevating (or, in the lingo of the day, “psycho-stimulative”) effect.  Researchers conducting trials at Baylor University, the University of Minnesota, and Spain’s University of Cordoba reached the same conclusion:  The mood-elevating effect of isoniazid treatment among TB patients pointed to psychiatry as the primary site of its use.  Back in New York, Nathan Kline, an assistant professor of psychiatry at Columbia’s College of Physicians and Surgeons, learned about the “psycho-stimulative” effect of the new antituberculars from a report about animal experiments conducted at the Warner-Lambert Research Laboratories in Morris Plains, New Jersey.  Shortly thereafter, he began his own trial of iproniazid with patients at Rockland State Hospital in Orangeburg, New York, and published a report of his findings in 1957.

A year later the drug, still licensed only as an antitubercular agent (Marsilid), had been given to over 400,000 depressed patients.  Iproniazid was withdrawn from the U.S. market in 1961 owing to an alleged linkage to jaundice and liver damage.  But it retains its place of honor among psychotropic serendipities:  It was the first of the monoamine oxidase inhibitors (MAOIs), potent antidepressants whose contemporary formulations (Marplan, Nardil) are used to treat atypical depressions, i.e., depressions refractory to newer and more benign antidepressants like the omnipresent SSRIs.[5]

Nathan Kline was likewise at hand to steer another ostensibly nonpsychotropic drug into psychiatric usage.  In 1952, reports of Rauwolfia serpentina, a plant root used in India for hypertension (high blood pressure), snakebite, and “insanity,” reached the West and led to interest in the root’s potential as an antihypertensive.  A year later, chemists at the New Jersey headquarters of the Swiss pharmaceutical firm Ciba (later Ciba-Geigy and now Novartis) isolated an active alkaloid, reserpine, from the root, and Kline, ever ready with the mental patients at Rockland State Hospital, obtained a sample to try on the hospital’s patients.

Kline’s results were encouraging.  In short order, he was touting reserpine as an “effective sedative for use in mental hospitals,” a finding reaffirmed later that year at a symposium at Ciba’s American headquarters in Summit, New Jersey, where staff pharmacologist F. F. Yonkman first used the term “tranquilizer” to characterize the drug’s mixture of sedation and well-being.[6]  As a major tranquilizer, reserpine never caught on like chlorpromazine, even though, throughout the 1950s, it “was far more frequently mentioned in the scientific literature than chlorpromazine.”[7]

So there we have it: the birth of modern psychopharmacology in the postwar era from research into penicillin preservatives, antihistamines, antibiotics, and antihypertensives.  Of course, serendipities operate in both directions:  drugs initially released as psychotropics sometimes fail miserably, only to find their worth outside of psychiatry.  We need only remember the history of thalidomide, released by the German firm Chemie Grünenthal in 1957 as a sedative effective in treating anxiety, tension states, insomnia, and nausea.  This psychotropic found its initial market among pregnant women who took the drug to relieve first-trimester morning sickness.  Unbeknown to the drug manufacturer, the drug crossed the placental barrier and, tragically, compromised the pregnancies of many of these women.  Users of thalidomide delivered grossly deformed infants with truncated limbs – “flipper” babies – around 10,000 in all in Europe and Japan.  Only 40% of these infants survived.

This sad episode is well known among historians, as is the heroic resistance of the FDA’s Frances Kelsey, who in 1960 fought off pressure from FDA administrators and executives at Richardson-Merrell, the American distributor, to release the drug in the U.S.  Less well known, perhaps, is the relicensing of the drug by the FDA in 1998 (as Thalomid) for a totally nonpsychotropic usage:  the treatment of certain complications of leprosy.  Prescribed off-label, it also proved helpful in treating AIDS wasting syndrome.  And beginning in the 2000s, it was used in combination with another drug, dexamethasone, to treat multiple myeloma (a cancer of the plasma cells).  It received FDA approval as an anticancer agent in 2006.[8]

Seen thus, serendipities are often rescue operations, the retrieving and reevaluating of long-abandoned industrial chemicals and of medications deemed inadequate for their intended purpose.  Small wonder that Librium, the first of the benzodiazepine class of minor tranquilizers, the drug that replaced meprobamate as the GP’s drug of choice in 1960, began its life as a new dye (benzheptoxidiazine) synthesized by the German chemists K. von Auwers and F. von Meyenburg in 1891.  In the 1930s the Polish-American chemist Leo Sternbach returned to the chemical and synthesized related compounds in the continuing search for new “dyestuffs.”  Then, 20 years later, Sternbach, now a chemist at Hoffmann-La Roche in Nutley, New Jersey, returned to these compounds one final time to see if any of them might have psychiatric applications.  He found nothing of promise, but seven years later, in 1957, a coworker undertook a spring cleaning of the lab and found a variant that Sternbach had missed.  It turned out to be Librium.[9]  All hail to the resourceful minds that return to the dyes of yesteryear in search of the psychotropics of tomorrow – and to those who clean their labs with eyes wide open.

______________________

[1] F. M. Berger & W. Bradley, “The pharmacological properties of α:β-dihydroxy-γ-(2-methylphenoxy)-propane (Myanesin),” Brit. J. Pharmacol. Chemother., 1:265-272, 1946, at p. 265.

[2] D. Herzberg, Happy Pills in America: From Miltown to Prozac (Baltimore: Johns Hopkins, 2009), p. 35.  Cf. A. Tone, The Age of Anxiety: A History of America’s Turbulent Affair with Tranquilizers  (NY:  Basic Books, 2009), pp. 90-91.

[3] P. Charpentier, et al., “Recherches sur les diméthylaminopropyl-N-phénothiazines substituées,” Comptes Rendus de l’Académie des Sciences, 235:59-60, 1952.

[4] On the discovery and early uses of chlorpromazine, see D. Healy, The Creation of Psychopharmacology (Cambridge: Harvard, 2002), pp. 77-101; F. López-Muñoz, et al., “History of the discovery and clinical introduction of chlorpromazine,” Ann. Clin. Psychiat., 17:113-135, 2005; and T. A. Ban, “Fifty years chlorpromazine:  a historical perspective,” Neuropsychiat. Dis. & Treat., 3:495-500, 2007.

[5] On the development and marketing of isoniazid and iproniazid, see H. F. Dowling, Fighting Infection: Conquests of the Twentieth Century (Cambridge: Harvard, 1977), p. 168; F. Ryan, The Forgotten Plague: How the Battle Against Tuberculosis was Won and Lost (Boston:  Little, Brown, 1992), p. 363; F. López-Muñoz, et al., “On the clinical introduction of monoamine oxidase inhibitors, tricyclics, and tetracyclics. Part II: tricyclics and tetracyclics,” J. Clin. Psychopharm., 28:1-4, 2008; and Tone, Age of Anxiety, pp. 128-29.

[6] E. S. Valenstein, Blaming the Brain: The Truth about Drugs and Mental Health  (NY:  Free Press, 1998), p. 69; D. Healy, The Antidepressant Era (Cambridge: Harvard, 1997),  pp. 59-70;  D. Healy, Creation of Psychopharmacology, pp. 103-05.

[7] Healy, Creation of Psychopharmacology, p. 106.

[8] P. J. Hilts provides a readable overview of the thalidomide crisis in Protecting America’s Health:  The FDA, Business, and One Hundred Years of Regulation (NY:  Knopf, 2003), pp. 144-65.  On the subsequent relicensing of thalidomide for the treatment of leprosy in 1998 and its extensive off-label use, see S. Timmermans & M. Berg, The Gold Standard:  The Challenge of Evidence-Based Medicine and Standardization in Health Care. (Phila: Temple University Press, 2003), pp. 188-92.

[9] On the discovery of Librium, see Valenstein, Blaming the Brain, pp. 54-56; A. Tone, “Listening to the past: history, psychiatry, and anxiety,” Canad. J. Psychiat., 50:373-380, 2005, at p. 377; and Tone, Age of Anxiety, pp. 126-40.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Injections and the Personal Touch

“Fear of the needle is usually acquired in childhood.  The psychic trauma to millions of the population produced in this way undoubtedly creates obstacles to good doctor-patient relationships, essential diagnostic procedures, and even life-saving therapy.”  Janet Travell, “Factors Affecting Pain of Injection” (1955)[1]

It was during the 1950s that the administration of hypodermic injections became a fraught enterprise and a topic of medical discussion.  With World War II over and American psychoanalysis suffusing postwar culture, including the cultures of medicine and psychiatry, it is unsurprising that physicians should look with new eyes at needle penetration and the fears it provoked.

In the nineteenth century, it had been all about pain relieved, sometimes miraculously, by injection of opioids.  Alongside the pain relieved, the pain of the injection was quite tolerable, even minor, a mere afterthought.  But in the mid-twentieth century pain per se took a back seat.  It was no longer about the painful condition that prompted injection.  Nor, really, was it about the pain of injection per se.  Psychodynamic thinking trumped both kinds of pain.  Increasingly, the issue before physicians, especially pediatricians, was twofold:  the anxiety attendant to injection pain and the lasting psychological damage that was all too often the legacy of needle pain.  Elimination of injection pain mattered, certainly, but it became the means to a psychological end.  Relieve the pain, they reasoned, and you eliminate the apprehension that exacerbates the pain and leaves deep psychic scars.

And so physicians were put on notice.  They were enjoined to experiment with numbing agents, coolant sprays, and various counterirritants to minimize the pain that children (and a good many adults) dreaded.  They were urged to keep their needles sharp and their patients’ skin surfaces dry.  Coolant sprays and antiseptic solutions that left a wet film, after all, could be carried into the skin as irritants.  For the muscular pain attendant to deeper injections, still stronger anesthetics, such as procaine, might be called for.  Physicians were also encouraged to reduce injection pain through new technologies, to use, for example, hyposprays and spring-loaded presto injectors.  Injection “technique” was a topic of discussion, especially for intramuscular injections of new wonder drugs such as streptomycin.  To be sure, new technologies and refined technique often failed to eliminate injection pain, especially when a large volume of solution was injected.  But, then again, pain relief was only a secondary goal. The point of the recommendations was primarily psychological, viz., to eliminate “the psychological reaction to piercing the skin.”[2]  It was anticipation of pain and the fear it engendered that jeopardized the doctor-patient relationship.

Psychoanalysts themselves, far removed from the everyday concerns of pediatricians, family physicians, and internists, had little to say on the topic.  They were content to call attention now and again to needle symbolism – invariably phallic in nature – in dreams and childhood memories.  In 1954, the child analyst Selma Fraiberg recalled “the theory of a two-and-a-half-year-old girl who developed a serious neurosis following an observation of coitus.  The child maintained that ‘the man made the hole,’ that the penis was forcibly thrust into the woman’s body like the hypodermic needle which had been thrust into her by the doctor when she was ill.”  Pity this two-and-a-half-year-old.

Inferences about male sadism and castration anxiety were integral to this train of thought.  In 50s-era psychoanalysis, needle injection could symbolize not only “painful penetration,” but also the sadistic mutilation of a little girl by a male doctor.[3]  One wants to say that such strained psychoanalytic renderings are long dead and buried, but the fact is they still find their way into the literature from time to time, usually in the context of dream interpretations.  Here is one from 1994:

Recently Ms. K mentioned a dream in which she was diabetic and had little packets of desiccated insulin which were also like condoms.  All she needed now was a hypodermic syringe and a needle.  I pointed out the sexual nature of the dream with its theme of penetration; she then remembered that in the dream a woman friend had lifted her skirt and Ms. K had ‘whammed the needle right in’.[4]

Psychoanalytic interpretive priorities change over time; whether they change in therapeutically helpful ways is a perennial subject of debate.  By the 1990s, there was belated recognition that children’s needle phobias really didn’t call for analytic unraveling; they derived from the simple developmental fact that “children are exposed to hypodermic needles prior to their ability to understand what is going on,” and, as such, were more amenable to behavioral intervention than psychoanalytic treatment.  In the hospital setting, in particular, children needed simple strategies to reduce fear, not psychoanalytic interpretations.[5]

In 1950s medicine, psychoanalysis was at its best when its influence was subtle and indirect.  Samuel Stearns’s thoughtful consideration of the “emotional aspects” of treating patients with diabetes, published in the New England Journal of Medicine in 1953, is one such example.  Stearns worked out of the Abraham Rudy Diabetic Clinic of Boston’s Beth Israel Hospital, and he expressed indebtedness to the psychiatrist-psychoanalyst Grete Bibring and other members of her department for “many discussions” on the topic.

For most diabetics, of course, daily injections, self-administered whenever possible, were an absolute necessity.  And resistance to the injections, then as now, undercut treatment and resulted in poor glycemic control.[6]  How, then, to cope with the diabetic’s resistance to the needle, especially when “the injection of insulin is sometimes associated with a degree of anxiety, revulsion or fear that cannot be explained by the slight amount of pain involved”?[7]

Psychoanalysis provided a framework for overcoming the resistance.  It was not a matter of “simple reassurance” about insulin injections, Stearns observed, but – and it is Bibring’s voice we hear –

Recognition that apparently trivial and unfounded complaints about insulin injections may be based on deeply rooted anxiety for which the patient finds superficial rationalizations enables the physician to be more realistic and tolerant, and more successful in dealing with the problem.

Realism, tolerance, acceptance – this was the psychoanalytic path to overcoming the problem.  Physicians had to accept that diabetics’ anxiety about injections arose from “individual personalities,” and that each diabetic had his or her own adaptively necessary defenses.  Exhortation, criticism, direct confrontation – these responses had to be jettisoned in favor of the kindness and understanding that would lead to a “positive interpersonal relation.”  This entailed an understanding of the patient’s transference to the physician:

It is particularly apparent that most of the reactions of juvenile diabetic patients to discipline, authoritativeness or criticism by the physician are really identical with their reactions to similar situations involving their parents.

And it included a like-minded willingness to wrestle with the countertransference as an obstacle to treatment:

Even the occasional display of an untherapeutic attitude by the physician is enough to interfere with the development of a relation that will enable him to obtain maximal cooperation from the patient.  If the physician cultivates awareness of his own reactions to a difficult patient, he will be less easily drawn into retaliation or other negative behavior.[8]

The point of the analytic approach was to lay the groundwork for a “positive interpersonal relation” that would enlist the patient’s cooperation “not through anxiety or fear of the disease or the physician, but rather through the wish to be well and to gain the physician’s approval.”[9]  Sympathetic acceptance of the patient’s fears, of the defenses against those fears, of the life circumstances that led to the defenses – this was the ticket to the kind of positive transference relationship that the physician could use to his and the patient’s advantage.

_______________

Stearns’s paper of 1953 remains helpful to this day; it exemplifies the application of general psychoanalytic concepts to real-world medical problems that, as I suggested in the final chapter of Psychoanalysis at the Margins (2009), may breathe new life into a beleaguered profession.[10]  And yet, there is something missing from Stearns’s commentary.  Like other writers of his time, he was concerned lest needle anxiety become an obstacle to a good doctor-patient relationship.  Cultivate the relationship through sympathetic insight into the problem, he reasoned, and the obstacle would diminish, perhaps even disappear.  What he ignored – indeed, what all these hospital- and clinic-based writers of the time ignored – is the manner in which a preexisting “good doctor-patient relationship” can defuse needle anxiety in the first place.

Nineteen fifty-three, the year Stearns’s paper was published, was also the year my father, William Stepansky, opened his general practice at 16 East First Avenue, Trappe, Pennsylvania.  My father, as I have written, was a Compleat Physician in whom wide-ranging procedural competence commingled with a psychiatric temperament and deeply caring sensibility.  In the world of 1950s general practice, his office was, as Winnicott would say, a holding environment.  His patients loved him and relied on him to provide care.  If injections were part of the care, then, ipso facto, they were caring interventions, whatever the momentary discomfort they entailed.

The forty years of my father’s practice spanned the first forty years of my life, and, from the time I was around 13, we engaged in ongoing conversations about his patients and work.  Never do I recall his remarking on a case of needle anxiety, which is not to deny that any number of patients, child and adult, became anxious when injection time arrived.  My point is that he contained and managed their anxiety so that it never became clinically significant or worthy of mention.  At the opposite end of the spectrum, I know of elderly patients who welcomed him into their homes several times a week for injections – sometimes just vitamin B-12 shots – that amplified the human support he provided.

Before administering an injection, my father firmly but gently grasped the underside of the patient’s upper arm, and the patient felt held, often in just those ways in which he or she needed holding.  When one’s personal physician gives an injection, it may become, in some manner and to some extent, a personal injection.  And personal injections never hurt as much as injections impersonally given.  This simple truth gets lost in the contemporary literature that treats needle phobia as a psychiatric condition in need of focal treatment.  A primary care physician remarked to me recently that she relieved a patient’s severe anxiety about getting an injection simply by putting the injection on hold and sitting down and talking to the patient for five minutes.  In effect, she reframed the meaning of the injection by absorbing it into a newly established human connection.  Would that all our doctors sat down with us for five minutes and talked to us as friendly human beings, as fellow sufferers, before getting down to procedural business.

I myself am more fortunate than most.  For me the very anticipation of an injection has a positive valence.  It conjures up the sights and smells and tactile sensations of my father’s treatment room.  Now in my 60s, I still have in my nostrils the bracing scent of the alcohol he used to clean the injection site, and I still feel the firm, paternal grasp of his hand on my arm at the point of injection.  I once remarked to a physician that she could never administer an injection that would bother me, because at the moment of penetration, her hand became my father’s.

Psychoanalysts who adopt the perspective of object relations theory speak of “transitional objects,” those special inanimate things that, especially in early life, stand in for our parents and help calm us in their absence.  Such objects become vested with soothing human properties; this is what imparts their “transitional” status.  In a paper of 2002, the analyst Julie Miller ventured the improbable view, based on a single case, that the needle of the heroin addict represents a “transitional object” that fosters a maternal connection the addict never experienced in early life.[11]  For me, I suppose, the needle is also a transitional object, albeit one that intersects with actual lived experience of a far more inspiriting nature.  To wit, when I receive an injection it is always with my father’s hand, life-affirming and healing.  It is the needle that attests to a paternal connection realized, in early life and in life thereafter.  It is an injection that stirs loving memories of my father’s medicine.   So how much can it hurt?

_______________

[1] J. Travell, “Factors affecting pain of injection,” JAMA, 158:368-371, 1955, at p. 368.

[2] J. Travell, “Factors affecting pain of injection,” op. cit.; L. C. Miller, “Control of pain of injection,” Bull. Parenteral Drug Assn., 7:9-13, 1953; E. P. MacKenzie, “Painless injections in pediatric practice,” J. Pediatr., 44:421, 1954; O. F. Thomas & G. Penrhyn Jones, “A note on injection pain with streptomycin,” Tubercle, 36:157-59, 1955; F. H. J. Figge & V. M. Gelhaus, “A new injector designed to minimize pain and apprehension of parenteral therapy,” JAMA, 160:1308-10, 1956.  There were also needle innovations in the realm of intravenous therapy, e.g., L. I. Gardner & J. T. Murphy, “New needle for pediatric scalp vein infusions,” Amer. J. Dis. Child., 80:303-04, 1950.

[3] S. Fraiberg, “A critical neurosis in a two-and-a-half-year-old girl,” Psychoanal. Study Child, 7:173-215, 1952, at p. 180; S. Fraiberg, “Tales of the discovery of the secret treasure,” Psychoanal. Study Child, 9:218-41, 1954, at p. 236.

[4] I. D. Buckingham, “The effect of hysterectomy on the subjective experience of orgasm,” J. Clin. Psychoanal., 3:607-12, 1994.

[5] D. Weston, “Response,” Int. J. Psychoanal., 78:1218-19, 1997, at p. 1219; C. Troupp, “Clinical commentary,” J. Child Psychother., 36:179-82, 2010.

[6] There is ample documentation of needle anxiety among present-day diabetics, e.g., A. Zambanini, et al., “Injection related anxiety in insulin-treated diabetes,” Diabetes Res. Clin. Pract., 46:239-46, 1999 and A. B. Hauber, et al., “Risking health to avoid injections: preferences of Canadians with type 2 diabetes,” Diabetes Care, 28:2243-45, 2005.

[7] S. Stearns, “Some emotional aspects of the treatment of diabetes mellitus and the role of the physician,” NEJM, 249:471-76, 1953, at p. 473.

[8] Ibid., p. 474.

[9] Ibid.

[10] P. E. Stepansky, Psychoanalysis at the Margins (NY: Other Press, 2009), pp. 287-313.

[11] J. Miller, “Heroin addiction: the needle as transitional object,” J. Amer. Acad. Psychoanal., 30:293-304, 2002.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

It Was All About the Pain

“. . . and although the patient had long been a sufferer from dyspnea, chronic bronchitis, and embarrassed heart, we believed that the almost miraculous resurrection which took place would be permanent.  He died, however, on the second day.”   — Cameron MacDowall, “Intra-Peritoneal Injections in Cholera” (1883)[1]

Among the early British and American proponents of subcutaneous hypodermic injection, especially of liquefied morphine, the seeming miracle of instantaneous pain relief sufficed to bring physician and patient into attitudinal alignment.  We are a century removed from the psychoanalytic sensibility that encouraged physicians to explore the personal side of hypodermic injection and to develop strategies for overcoming patients’ anxieties about needle puncture, their “needle phobia.”

There is no need to read between the lines of nineteenth-century clinical reports to discern the convergence of physician delight and patient amazement at the immediate relief provided by hypodermic injection.  The lines themselves tell the story, and the story is all about the pain.  Patients who received hypodermic injections in the aftermath of Alexander Wood’s successful use of Daniel Ferguson’s “elegant little syringe” were often in extremis.  Here is a woman of 40, who presented with a case of acute pleurisy (inflammation of the membrane around the lungs) in 1867:

The pain was most intense; great dyspnea [difficulty breathing] existed; sharp, lancinating pains at each rapid inspiration completely prostrated the patient, whose sufferings had been continuous for twelve hours.  About one-sixth of a grain of the acetate of morphia was used hypodermically, and with prompt relief, a few minutes only elapsing after its injection before its beneficial results followed.  The ordinary treatment being continued, a recovery was effected in a short time.[2]

Consider this “delicate elderly spinster” of 1879, who presented to her physician thus:

I found her nearly unconscious, cramped all over body and legs, vomiting violently every minute or two, purging every few minutes, the purging being involuntary and under her.  She was showing the whites of the eyes, and the countenance was changed.  She was certainly all but gone.  Gave at once two-fifths of a grain of sulphate of morphia hypodermically.  She did not feel the prick of the needle in the least.[3]

And here is a surgeon from Wales looking in on a 48-year-old gardener in severe abdominal pain at the Crickhowell Dispensary on August 1, 1882:

On my visiting him at 11:30 on the morning of the above date, I found him in great agony, in which condition his wife informed me he had been during the greater part of the previous night.  He implored me to do something for relief, saying he could endure the suffering no longer; and as I happened to have my hypodermic syringe in my pocket, I introduced into his arm four minims of a solution of acetate of morphia.  I then left him.[4]

A bit better off, one supposes – if only a bit – were patients who suffered severe chronic pain, whether arthritic, gastrointestinal, circulatory, or cancerous in nature.  They too were beneficiaries of the needle.  We encounter a patient with “the most intense pain in the knee-joint” owing to a six-year-long attack of gout.  Injection of a third of a grain of acetate of morphia was followed by “the most delightful results,” with “the patient expressing himself in glowing terms as to the efficacy and promptness of this new remedy.”  Instantaneous relief, compliments of the needle, enabled him to turn the corner; he “rallied rapidly, having none of the depression and debilitating effects, the resultant of long-continued pain, to recover from, as in former times.”[5]

So it was with patients with any number of ailments, however rare or nebulous in nature.  A 31-year-old woman was admitted to Massachusetts General Hospital in 1883 with what her physician diagnosed as multiple sarcomas (malignant skin tumors) covering her upper arms, breasts, and abdomen; she was given subcutaneous injections of Fowler’s Solution, an eighteenth-century tonic that was one percent arsenic.  Discharged from the hospital two weeks later, she self-administered daily injections of Fowler’s for another five months, by which time the lesions had cleared completely; a year later she remained “perfectly well to all appearance.”  In the 1890s, the decade when subcutaneous injections of various glandular extracts gripped the clinical imagination, it is hardly surprising to read that injection of liquefied gray matter of a sheep’s brain did remarkable things for patients suffering from nervous exhaustion (neurasthenia).  Indeed, its tonic effect comprised “increase of appetite and weight, restoration of spirits and bien-être, disappearance of pain, sexual impotence and insomnia.”  At the other end of the psychophysical spectrum, patients who became manic, even violently delirious, during their bouts with acute illnesses such as pneumonia or rheumatic fever “recovered in the ordinary way” after one or more injections of morphia, sometimes in conjunction with inhaled chloroform.[6]

Right through century’s end, the pain of disease was compounded by the pain of pre-injection treatment methods.  What the Boston surgeon Robert White, one of Wood’s first American followers, termed the “revolution in the healing art” signaled by the needle addressed both poles of suffering.  Morphia’s “wonderful effects” on all kinds of pain – neuralgic pain, joint pain, digestive pain (dyspepsia), the pain of tumors and blockages – were heightened by the relative painlessness of injection.  Indeed, the revolutionary import of hypodermic injection, according to White, meant that “The painful and decidedly cruel endermic mode of applying medicines [i.e., absorption through the skin] may be entirely superseded, and the pain of a blistered surface completely avoided.”[7]  When it came to hemorrhoids, carbuncles, and small tumors, not to mention “foul and ill-conditioned ulcers,” hypodermic injections of carbolic acid provided “the only absolute and painless cure [original emphasis] of these exceedingly painful affections.”[8]

And what of the pain of the injection itself?  When it rates mention, it is only to put it in perspective, to underscore that “some pain at the moment of injection” gives way to “great relief from the pain of the disease” – a claim which, in this instance, pertained to alcohol solution injected in and around malignant tumors.[9]  Very rarely indeed does one find references to the pain of injection as a treatment-related consideration.[10]

Recognition of the addictive potential of repeated morphine injections barely dimmed the enthusiasm of many of the needle’s early proponents. Then, as now, physicians devised rationalizations for preferred treatment methods despite well-documented grounds for concern. They carved out diagnostic niches that, so they claimed, were exempt from mounting evidence of addiction.  A Melbourne surgeon who gave morphine injections to hospitalized parturients suffering from “puerperal eclampsia” (convulsions and coma following childbirth) found his patients able “to resist the dangerous effects of the drug; it seems to have no bad consequences in cases, in which, under ordinary circumstances, morphia would be strongly contra-indicated.” A physician from Virginia, who had treated puerperal convulsions with injectable morphine for 16 years, seconded this view.  “One would be surprised to see the effect of morphine in these cases,” he reported in 1887.  It was “as if bringing the dead to life.  It does not stupefy the patients, but renders them brighter.”[11]  A British surgeon stationed in Burma “cured” a patient of tetanus with repeated injections of atropine (belladonna), and held that his case “proved” that tetanus “induced” a special tolerance to an alkaloid known to have serious, even life-threatening, side effects.[12]  Physicians and patients alike stood in awe before a technology that not only heightened the effectiveness of the pharmacopeia of the time but also brought it to bear on an extended range of conditions.

Even failure to relieve suffering or postpone death spoke to the importance of hypodermic injection.  For even then, injections played a critical role in differential diagnosis:  they enabled clinicians to differentiate, for example, “choleric diarrhea,” which morphine injections greatly helped, from “malignant” (or Asiatic) cholera and common dysentery, which they helped not at all.[13]

To acknowledge that not all injections even temporarily relieved suffering or that not all injections were relatively painless was, in the context of nineteenth-century therapeutics, little more than a footnote.  Of course this was the case.  But it didn’t seem to matter.  There was an understandable wishfulness on the part of nineteenth-century physicians and patients about the therapeutic benefits of hypodermic injection per se, and this wishfulness arose from the fact that, prior to invention of the hypodermic syringe and soluble forms of morphine and other alkaloids, “almost miraculous resurrection” from intractable pain was not a possibility, or at least not a possibility arising from a physician’s quick procedural intervention.

For those physicians who, beginning in the late 1850s, began injecting morphine and other opioids to relieve severe pain, there was something magical about the whole process – and, yes, it calls to mind the quasi-magical status of injection and injectable medicine in some developing countries today.  The magic proceeded from the dramatic pain relief afforded by injection, certainly.  But it also arose from the realization, per Charles Hunter, that an injected opioid somehow found its way to the site of pain regardless of where it was injected.  It was pretty amazing.

The magic, paradoxically, derived from the new scientific understanding of medicinal therapeutic action in the final three decades of the nineteenth century.  The development of hypodermic injection is a small part of the triumph of scientific medicine, of a medicine of specific remedies for specific illnesses, of remedies increasingly developed in laboratories but bringing the fruits of laboratory science to the bedside.  We see the search for specific remedies in early trial-and-error efforts to find specific injectables and specific combinations of injectables for specific conditions – carbolic acid for hemorrhoids and carbuncles; morphine and atropia (belladonna) for puerperal convulsions; whisky and water for epidemic cholera; alcohol for tumors; ether for sciatica; liquefied sheep’s brain for nervous exhaustion; and on and on.

This approach signifies a primitive empiricism, but it is a proto-scientific empiricism nonetheless.  The very search for injectables specific to one or another condition is worlds removed from the Galenic medicine of the 1830s and ’40s, according to which all diseases were really variations of a single disease that had to do with the degree of tension or excitability in the blood vessels.

Despite the paucity of injectable medicines into the early twentieth century, hypodermic injection caught on because, for all the fantastical claims (to our ears) that abound in nineteenth-century medical journals, it was aligned with scientific medicine in ascendance.  Yes, the penetration of the needle was merely subcutaneous, but skin puncture was a portal to the blood stream and to organs deep inside the body.  In this manner, hypodermic injection partook of the exalted status of “heroic surgery” in the final quarter of the nineteenth century.[14]  The penetration of the needle, shallow though it was, stood in for a bold new kind of surgery, a surgery able to penetrate to the very anatomical substrate of human suffering.  Beginning in the late 1880s, certain forms of major surgery became recognizably modern, and the lowly needle was along for the ride.  The magic was all about the pain, but it was also all about the science.

_______________

[1] C. MacDowall, “Intra-peritoneal injections in cholera,” Lancet, 122:658-59, 1883, quoted at 658.

[2] T. L. Leavitt, “The hypodermic injection of morphia in gout and pleurisy,” Amer. J. Med. Sci., 55:109, 1868.

[3] W. Hardman, “Treatment of choleraic diarrhea by the hypodermic injection of morphia,” Lancet, 116:538-39, 1880, quoted at 539.

[4] P. E. Hill, “Morphia poisoning by hypodermic injection; recovery,” Lancet, 120:527-28, 1882, quoted at 527.

[5] Leavitt, “Hypodermic injection of morphia in gout and pleurisy,” op. cit.

[6] F. C. Shattuck, “Multiple sarcoma of the skin: treatment by hypodermic injections of Fowler’s solution; recovery,” Boston Med. Surg. J., 112:618-19, 1885; N.A., “Treatment of neurasthenia by transfusion (hypodermic injection) of nervous substance,” Boston Med. Surg. J., 126:273-74, 1892, quoted at 274; T. Churton, “Cases of acute maniacal delirium treated by inhalation of chloroform and hypodermic injection of morphia,” Lancet, 141:861-62, 1893.

[7] R. White, “Hypodermic injection of medicine, with a case,” Boston Med. Surg. J., 61:289-292, 1859, quoted at 290.

[8] N. B. Kennedy, “Carbolic acid injections in hemorrhoids and carbuncles,” JAMA, 6:529-30, 1886.

[9] E. Andrews, “The latest methods of treating carcinoma by hypodermic injection,” JAMA, 26:1159-60, 1896, quoted at 1159.

[10] For one such example, see N.A., “The hypodermic injection of mercurials in the treatment of syphilis,” Boston Med. Surg. J., 131:246, 1894.

[11] S. Maberly-Smith, “On the treatment of puerperal convulsions by hypodermic injection of morphia,” Lancet, 118:86-87, 1881;  J. D. Eggleston, quoted in “The treatment of puerperal convulsions,” JAMA, 8:295-96, 1887, at 295.

[12] D. H. Cullumore, “Case of traumatic tetanus, treated with the hypodermic injection of atropia; amputation of great toe; recovery,” Lancet, 114:42-43, 1879.

[13] Hardman, “Treatment of choleraic diarrhea,” op. cit.; C. MacDowall, “Hypodermic injections of morphia in cholera,” Lancet, 116:636, 1880.

[14] On the “heroic surgery” of the final decades of the nineteenth century and the exalted status of late-nineteenth-century surgeons, see P. E. Stepansky, Freud, Surgery, and the Surgeons (Hillsdale, NJ: Analytic Press, 1999), pp. 23-34 and passim.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Will It Hurt?

“. . . the children’s population of this century has been submitted progressively as never before to the merciless routine of the ‘cold steel’ of the hypodermic needle.”  –Karl E. Kassowitz, “Psychodynamic Reactions of Children to the Use of Hypodermic Needles” (1958)

Of course, like so much medical technology, injection by hypodermic needle has a prehistory dating back to the ancient Romans, who used metal syringes with disk plungers for enemas and nasal injections.  Seventeenth- and eighteenth-century physicians extended the sites of entry to the vagina and rectum, using syringes of metal, pewter, ivory, and wood.  Christopher Wren, the Oxford astronomer and architect, introduced intravenous injection in 1657, when he inserted a quill into an exposed vein and pumped in water, opium, or a purgative (laxative).

But, like so much medical technology, things only get interesting in the nineteenth century.  In the first half of the century, the prehistory of needle injection includes the work of G. V. Lafargue, a French physician from the commune of St. Emilion.  He treated neuralgic (nerve) pain – his own included – by penetrating the skin with a vaccination lancet dipped in morphine and later by inserting solid morphine pellets under the skin through a large needle hole.  In 1844, the Irish physician Francis Rynd undertook injection by making a small incision in the skin and inserting a fine cannula (tube), letting gravity guide the medication to its intended site.[1]

The leap to a prototype of the modern syringe, in which a glass piston pushes medicine through a metal or glass barrel that ends in a hollow-pointed needle, occurred on two national fronts in 1853.  In Scotland, Alexander Wood, secretary of Edinburgh’s Royal College of Physicians, injected morphine solution directly into his patients in the hope of dulling their neuralgias.  There was a minor innovation and a major one.  Wood used sherry wine as his solvent, believing it would prove less irritating to the skin than alcohol and less likely to rust his instrument than water.  And then the breakthrough:  He administered the liquid morphine through a piston-equipped syringe that ended in a pointed needle.  Near the end of the needle, on one side, was an opening through which medicine could be released when an aperture on the outer tube was rotated into alignment with the opening.  The syringe was designed and made by the London instrument maker Daniel Ferguson, whose “elegant little syringes,” as Wood described them, were intended to inject iron perchloride (a blood-clotting agent, or coagulant) into skin lesions and birthmarks in the hope of making them less unsightly.  It never occurred to Ferguson that his medicine-releasing, needle-pointed syringes could be used for subcutaneous injection as well.[2]

Across the channel in the French city of Lyon, the surgeon Charles Pravaz employed a piston-driven syringe of his own making to inject iron perchloride into the blood vessels of sheep and horses.  Pravaz was not interested in unsightly birthmarks; he was searching for an effective treatment for aneurysms (enlarged arteries, usually due to weakening of the arterial walls) that he thought could be extended to humans.  Wood was the first in print – his “New Method of Treating Neuralgia by the Direct Application of Opiates to the Painful Points” appeared in the Edinburgh Medical & Surgical Journal in 1855[3] – and, shortly thereafter, he improved Ferguson’s design by devising a hollow needle that could simply be screwed onto the end of the syringe.  Unsurprisingly, then, he has received the lion’s share of credit for “inventing” the modern hypodermic syringe.  Pravaz, after all, was only interested in determining whether iron perchloride would clot blood; he never administered medication through his syringe to animals or people.

Wood and followers like the New York physician Benjamin Fordyce Barker, who brought Wood’s technique to Bellevue Hospital in 1856, were convinced that the injected fluid had a local action on inflamed peripheral nerves.  Wood allowed for a secondary effect through absorption into the bloodstream, but believed the local action accounted for the injection’s rapid relief of pain.  It fell to the London surgeon Charles Hunter to stress that the systemic effect of injectable narcotic was primary.  It was not necessary, he argued in 1858, to inject liquid morphine into the most painful spot; the medicine provided the same relief when injected far from the site of the lesion.  It was Hunter, seeking to underscore the originality of his approach to injectable morphine, especially its general therapeutic effect, who introduced the term “hypodermic” from the Greek compound meaning “under the skin.”[4]

It took time for the needle to become integral to doctors and doctoring.  In America, physicians greeted the hypodermic injection with skepticism and even dread, despite the avowals of patients that injectable morphine provided them with instantaneous, well-nigh miraculous relief from chronic pain.[5]  The complicated, time-consuming process of preparing injectable solutions prior to the manufacture of dissolvable tablets in the 1880s didn’t help matters.  Nor did the trial-and-error process of arriving at something like appropriate doses of the solutions.  But most importantly, until the early twentieth century, very few drugs were injectable.  Through the 1870s, the physician’s injectable arsenal consisted of highly poisonous (in pure form) plant alkaloids such as morphine, atropine (belladonna), strychnine, and aconitine, and, by decade’s end, the vasodilator heart medicine nitroglycerine.  The development of local and regional anesthesia in the mid-1880s relied on the hypodermic syringe for subcutaneous injections of cocaine solution, but as late as 1905, only 20 of the 1,039 drugs in the U.S. Pharmacopoeia were injectable.[6]  The availability of injectable insulin in the early 1920s heralded a new, everyday reliance on hypodermic injections, and over the course of the century, the needle, along with the stethoscope, came to stand in for the physician.  Now, of course, needles and doctors “seem to go together,” with the former signifying “the power to heal through hurting” even as it “condenses the notions of active practitioner and passive patient.”[7]

The child’s fear of needles, always a part of pediatric practice, has generated a literature of its own.  In the mid-twentieth century, in the heyday of Freudianism, children’s needle anxiety gave rise to psychodynamic musings.  In 1958, Karl Kassowitz of Milwaukee Children’s Hospital made the stunningly commonsensical observation that younger children were immature and hence more anxious about receiving injections than older children.  By the time kids were eight or nine, he found, most had outgrown their fear.  Among the fewer than 30% who hadn’t, Kassowitz gravely counseled, continuing resistance to the needle might represent “a clue to an underlying neurosis.”[8]  Ah, the good old Freudian days.

In the second half of the last century, anxiety about receiving injections was “medicalized” like most everything else and, in the more enveloping guise of BII (blood, injection, injury) phobia, found its way into the fourth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual in 1994.  Needle phobia thereupon became the beneficiary of all that accompanies medicalization – a specific etiology, physical symptoms, associated EKG and stress hormone changes, and strategies of management.  The latter are impressively varied and range across medical, educational, psychotherapeutic, behavioral, cognitive-behavioral, relaxation, and desensitizing approaches.[9]  Recent literature also highlights the vasovagal reflex associated with needle and blood phobia.  Patients confronted with the needle become so anxious that an initial increase in heart rate and blood pressure is followed by a marked drop, as a result of which they become sweaty, dizzy, pallid, nauseated (any or all of the above), and sometimes faint (vasovagal syncope).  Another interesting finding is that needle phobia (especially in its BII variant), along with its associated vasovagal reflex, probably has a genetic component, as there is a much higher concordance within families for BII phobia than for other kinds of phobia.  Researchers who study twins put the heritability of BII phobia at around 48%.[10]

Needle phobia is still prevalent among kids, to be sure, but it has long since matured into a fully grown-up condition.  Surveys find injection phobia in anywhere from 9 to 21% of the general population and even higher percentages of select populations, such as U.S. college communities.[11]  A study by the Dental Fears Research Clinic of the University of Washington in 1995 found that over a quarter of surveyed students and university employees were fearful of dental injections, with 5% admitting they avoided or canceled dental appointments out of fear.[12]  Perhaps some of these needlephobes bear the scars of childhood trauma.  Pediatricians now urge control of the pain associated with venipuncture and intravenous cannulation (tube insertion) in infants, toddlers, and young children, since there is evidence such procedures can have a lasting impact on pain sensitivity and tolerance of needle pricks.[13]

But people are not only afraid of needles; they also overvalue them and seek them out.  Needle phobia, whatever its hereditary contribution, is a creation of Western medicine.  The surveys cited above come from the U.S., Canada, and England.  Once we shift our gaze to developing countries of Asia and Africa we behold a different needle-strewn landscape.  Studies attest not only to the high acceptance of the needle but also to its integration into popular understandings of disease.  Lay people in countries such as Indonesia, Tanzania, and Uganda typically want injections; indeed, they often insist on them because injected medicines, which enter the bloodstream directly and (so they believe) remain in the body longer, must be more effective than orally ingested pills or liquids.

The strength, rapid action, and body-wide circulation of injectable medicine – these things, on this view, make injection the only cure for serious disease.[14]  So valued are needles and syringes in developing countries that most lay people, and even Registered Medical Practitioners in India and Nepal, consider it wasteful to discard disposable needles after only a single use.  And then there is the tendency of people in developing countries to rely on lay injectors (the “needle curers” of Uganda; the “injection doctors” of Thailand; the informal providers of India and Turkey) for their shots.  This has led to the indiscriminate use of penicillin and other chemotherapeutic agents, often injected without attention to sterile procedure.  All of which contributes to the spread of infectious disease and presents a major headache for the World Health Organization.

The pain of the injection?  Bring it on.  In developing countries, the burning sensation that accompanies many injections signifies curative power.  In some cultures, people also welcome the pain as confirmation that real treatment has been given.[15]  In pain there is healing power.  It is the potent sting of modern science brought to bear on serious, often debilitating disease.  All of which suggests the contrasting worldviews and emotional tonalities collapsed into the fearful and hopeful question that frames this essay:  “Will it hurt?”

_______________

[1] On the prehistory of hypodermic injection, see D. L. Macht, “The history of intravenous and subcutaneous administration of drugs,” JAMA, 66:856-60, 1916; G. A. Mogey, “Centenary of hypodermic injection,” BMJ, 2:1180-85, 1953; N. Howard-Jones, “A critical study of the origins and early development of hypodermic medication,” J. Hist. Med., 2:201-49, 1947; and N. Howard-Jones, “The origins of hypodermic medication,” Scien. Amer., 224:96-102, 1971.

[2] J. B. Blake, “Mr. Ferguson’s hypodermic syringe,” J. Hist. Med., 15: 337-41, 1960.

[3] A. Wood, “New method of treating neuralgia by the direct application of opiates to the painful points,” Edinb. Med. Surg. J., 82:265-81, 1855.

[4] On Hunter’s contribution and his subsequent vitriolic exchanges with Wood over priority, see Howard-Jones, “A critical study of the origins and early development of hypodermic medication,” op. cit.  Patricia Rosales provides a contextually grounded discussion of the dispute and the committee investigation of Edinburgh’s Royal Medical and Chirurgical Society to which it gave rise.  See P. A. Rosales, A History of the Hypodermic Syringe, 1850s-1920s, unpublished doctoral dissertation, Department of the History of Science, Harvard University, 1997, pp. 21-30.

[5] See Rosales, History of Hypodermic Syringe, op. cit., chap. 3, on the early reception of hypodermic injections in America.

[6] G. Lawrence, “The hypodermic syringe,” Lancet, 359:1074, 2002; J. Calatayud & A. González, “History of the development and evolution of local anesthesia since the coca leaf,” Anesthesiology, 98:1503-08, 2003, at p. 1506; R. E. Kravetz, “Hypodermic syringe,” Am. J. Gastroenterol., 100:2614-15, 2005.

[7] A. Kotwal, “Innovation, diffusion and safety of a medical technology: a review of the literature on injection practices,”  Soc. Sci. Med., 60:1133-47, 2005, at p. 1133.

[8] Kassowitz, “Psychodynamic reactions of children to hypodermic needles,”  op. cit., quoted at p. 257.

[9] Summaries of the various treatment approaches to needle phobia are given in J. G. Hamilton, “Needle phobia: a neglected diagnosis,” J. Fam. Prac., 41:169-75, 1995, and H. Willemsen, et al., “Needle phobia in children: a discussion of aetiology and treatment options,” Clin. Child Psychol. Psychiatry, 7:609-19, 2002.

[10] Hamilton, “Needle phobia,” op. cit.; S. Torgersen, “The nature and origin of common phobic fears,” Brit. J. Psychiatry, 134:343-51, 1979; L-G. Ost, et al., “Applied tension, exposure in vivo, and tension-only in the treatment of blood phobia,” Behav. Res. Ther., 29:561-74, 1991;  L-G. Ost, “Blood and injection phobia: background and cognitive, physiological, and behavioral variables,” J. Abnorm. Psychol., 101:68-74, 1992.

[11] References to these surveys are provided by Hamilton, “Needle phobia,” op. cit.

[12] On the University of Washington survey, see P. Milgrom, et al., “Four dimensions of fear of dental injections,” J. Am. Dental Assn., 128:756-66, 1997, and T. Kaakko, et al., “Dental fear among university students: implications for pharmacological research,” Anesth. Prog., 45:62-67, 1998.  Lawrence Proulx reported the results of the survey in The Washington Post under the heading “Who’s afraid of the big bad needle?” July 1, 1997, p. 5.

[13] R. M. Kennedy, et al., “Clinical implications of unmanaged needle-insertion pain and distress in children,” Pediatrics, 122:S130-S133, 2008.

[14] See Kotwal, “Innovation, diffusion and safety of a medical technology,” op. cit., p. 1136 for references.

[15] S. R. Whyte & S. van der Geest, “Injections: issues and methods for anthropological research,” in N. L. Etkin & M. L. Tan, eds., Medicine, Meanings and Contexts (Quezon City, Philippines: Health Action Information Network, 1994), pp. 137-8.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Your Tool Touches Me

It is little known that René Laënnec, the Parisian physician who invented the stethoscope at the Necker Hospital in 1816, found it distasteful to place his ear to the patient’s chest.  The distastefulness of “direct auscultation” was compounded by its impracticality in the hospital where, he observed, “it was scarcely to be suggested for most women patients, in some of whom the size of the breasts also posed a physical obstacle.”[1]  The stethoscope, which permitted “mediate auscultation,” not only amplified heart and lung sounds in diagnostically transformative ways; it enabled Laënnec to avoid repugnant ear-to-chest contact.

Many women patients of Laënnec’s time and place did not see it that way.  Accustomed to the warmly human pressure of ear on chest, they were uncomfortable when an elongated wooden cylinder was interposed between the two.  By the closing decades of the nineteenth century, of course, the situation was inverted:  The stethoscope, in its modern binaural guise, had become so integral to physical examination that patients hardly viewed it as a tool at all.  It had become emblematic of hands-on doctoring and, as such, a sensory extender of the doctor.  Even now, the stethoscope virtually stands in for the doctor, especially the generalist or the cardiologist, so that a retiring physician will announce that he is, or will be characterized by others as, “hanging up his stethoscope.”[2]

It’s easy to argue for the “oneness” of the physician and his or her instruments when it’s a matter of simple tools that amplify sensory endowment  (stethoscopes), provide a hands-on bodily “reading” (of temperature or blood pressure), or elicit a tendon reflex (e.g., the reflex hammer).  And the argument can be extended without much difficulty to the more invasive, high-tech “scopes” used by medical specialists to see what is invisible to the naked eye.  Instruments become so wedded to one or another specialty that it is hard to think of our providers without them.  What is an ophthalmologist without her ophthalmoscope?  An ENT without his nasal speculum?  A gynecologist without her vaginal speculum?  An internist without his blood pressure meter?  Such hand-held devices are diagnostic enablers, and as such they are, or at least ought to be, our friends.

In “Caring Technology” I suggested that even large-scale technology administered by technicians, and therefore outside the physician’s literal grasp, can be linked in meaningful ways to the physician’s person.  A caring explanation of the need for this or that study, informed by a relational bond, can humanize even the most forbidding high-tech machinery.  To be sure, medical machinery, whatever the bodily bombardment it entails, is often discomfiting.  But it need be alienating only when we come to it in an alienated state, when it is not an instrument of physicianly engagement but a dehumanized object – a piece of technology.

Critical care nurses, whose work is both technology-laden and technology-driven, have had much to say on the relationship of technology to nursing identity and nursing care.  This literature includes provocative contributions that look at where nurses stand in a hospital hierarchy that comprises staff physicians, residents, students, administrators, patients, and patients’ families.

For some CCU nurses, the use of technology and the acquisition of technological competence segue into issues of power and autonomy, which are linked in turn to issues of gender, medical domination, and “ownership” of the technology.[3]  A less feminist sensibility informs interview research that yields unsurprising empirical findings, viz., that comfort with technology and the ability to incorporate it into a caring, “touching” disposition hinge on the technological mastery associated with nursing experience.  Student and novice nurses, for example, find the machinery of the CCU anxiety-inducing, even overwhelming.  They resent the casual manner in which physicians relegate to them complex technological tasks, such as weaning patients from respirators, without appreciating the long list of nursing duties to which such tasks are appended.[4]  Withal, beginners approach the use of technology in task-specific ways and have great difficulty “caring with technology.”[5]  Theirs is not a caring technology but a technology that causes stress and jeopardizes fragile professional identities.

Experienced CCU nurses, on the other hand, achieve a technological competence that lets them pull the machinery to them; they use it as a window of opportunity for being with their patients.[6]   Following Christine Little, we can give the transformation from novice to expert a phenomenological gloss and say that as technological inexperience gives way to technological mastery, technological skills become “ready-to-hand” (Heidegger) and “a natural extension of practice.”[7]

Well and good.  We want critical care nurses comfortable with the machinery of critical care – with cardiac and vital signs monitors, respirators, catheters, and infusion pumps – so that implementing technological interventions and monitoring the monitors do not blot out the nurse’s “presence”  in the patient’s care.   But all this is from the perspective of the nurse and her role in the hospital.  What, one wonders, does the patient make of all this technology?

Humanizing technology means identifying with it in ways that are not only responsive to the patient’s fears but also conducive to a shared appreciation of its role in treatment.  It is easier for patients to feel humanly touched by technology, that is, if their doctors and nurses appropriate it and represent it as an extender of care.  Perhaps some doctors and nurses do so as a matter of course, but one searches the literature in vain for examples of nurse-patient or doctor-patient interactions that humanize technology through dialogue.  And such dialogue, however perfunctory in nature, may greatly matter.

Consider the seriously ill patient whose nurse interacts with him without consideration of the technology-saturated environment in which care is given.  Now consider the seriously ill patient whose nurse incorporates the machinery into his or her caregiving identity, as in “This monitor [or this line or this pump] is a terrific thing for you and for me.  It lets me take better care of you.”  Such reassurance, which can be elaborated in any number of patient-centered ways, is not trivial; it may turn an anxious patient around, psychologically speaking.  And it is all the more important when, owing to the gravity of the patient’s condition, the nurse must spend more time assessing data and tending to machinery than caring for the patient.  Here especially the patient needs to be reminded that the nurse’s responsibility for machinery expands his or her role as the patient’s guardian.[8]

The touch of the physician’s sensory extenders, if literally uncomfortable, may still be comforting.  For it is the physician’s own ears that hear us through the stethoscope, the physician’s own eyes that gaze on us through the ophthalmoscope, the laryngoscope, the esophagoscope, the colposcope.  It is easier to appreciate tools as beneficent extenders of care in the safe confines of one’s own doctor’s office, where instrumental touching is fortified by the relational bond that grows out of continuing care.  In the hospital, absent such relational grounding, there is more room for dissonance and hence more need for shared values and empathy.  A nurse who lets the cardiac monitor pull her away from patient care will not do well with a frightened patient who needs personal caring.  A parturient who welcomes the technology of the labor room will connect better with a labor nurse who values the electronic fetal monitor (and the reassuring visualization it provides the soon-to-be mother) than with a nurse who is unhappy with its employment in low-risk births and prefers a return to intermittent auscultation.

In the best of circumstances, tools elicit an intersubjective convergence grounded in an expectation of objectively superior care.  It helps to keep the “objective care” part in mind, to remember that technology was not devised to frighten us, encumber us, or cause us pain,  but to help doctors and nurses evaluate us, keep us stable and comfortable, and enable treatments that will make us better, or at least leave us better off than our technology-free forebears.

My retinologist reclines the examination chair all the way back and begins prepping my left eye for its second intravitreal injection of Eylea, one of the newest drugs used to treat macular disease.  I am grateful for all the technology that has brought me to this point:  the retinal camera, the slit lamp, the optical coherence tomography machine.  I am especially grateful for the development of fluorescein angiography, which allows my doctor to pinpoint with great precision the lesion in need of treatment.  And of course I am grateful to my retinologist, who brings all this technology to bear with a human touch, calmly reassuring me through every step of evaluation and treatment.

I experienced almost immediate improvement after the first such injection a month earlier and am eager to proceed with the treatment.  So I am relatively relaxed as he douses my eye with antiseptic and anesthetic washes in preparation for the needle.  Then, at the point of injection, he asks me to look up at the face of his assistant, a young woman with a lovely smile.  “My pleasure,” I quip, slipping into gendered mode.  “I love to look at pretty faces.”   I am barely aware of the momentary pressure of the needle that punctures my eyeball and releases this wonderfully effective new drug into the back of my eye.  It is not the needle that administers treatment but my trusted and caring physician.  “Great technique,” I remark.  “I barely felt it.”  To which his young assistant, still standing above me, smiles and adds,  “I think I had something to do with it.”  And indeed she had.


[1] Quoted in J. Duffin, To See with a Better Eye: A Life of R. T. H. Laennec (Princeton: Princeton University Press, 1998), p. 122.

[2] Here are a few recent examples:  O. Samuel, “On hanging up my stethoscope,” BMJ, 312:1426, 1996; “Dr. Van Ausdal hangs up his stethoscope,” YSNews.com, September 26, 2013 (http://ysnews.com/news/2013/09/dr-van-ausdal-hangs-up-his-stethoscope); “At 90, Gardena doctor is hanging up his stethoscope,” The Daily Breeze, October 29, 2013 (http://www.dailybreeze.com/general-news/20131029/at-90-gardena-doctor-is-hanging-up-his-stethoscope); “Well-known doctor hangs up his stethoscope,” Bay Post, February 8, 2014 (http://www.batemansbaypost.com.au/story/1849567/well-known-doctor-hangs-up-his-stethoscope).

[3] See, for example, A. Barnard, “A critical review of the belief that technology is a neutral object and nurses are its master,” J. Advanced Nurs., 26:126-131, 1997; J. Fairman & P. D’Antonio, “Virtual power: gendering the nurse-technology relationship,” Nurs. Inq., 6:178-186, 1999; & B. J. Hoerst & J. Fairman, “Social and professional influences of the technology of electronic fetal monitoring on obstetrical nursing,” Western J. Nurs. Res., 22:475-491, 2000, at pp. 481-82.

[4] C. Crocker & S. Timmons, “The role of technology in critical care nursing,” J. Advanced Nurs., 65:52-61, 2008.

[5] M. McGrath, “The challenges of caring in a technological environment:  critical care nurses’ experiences,” J. Clin. Nurs., 17:1096-1104, 2008.

[6] A. Bernardo, “Technology and true presence in nursing,” Holistic Nurs. Prac., 12:40-49, 1998;  R. C. Locsin,  Technological Competency As Caring in Nursing: A Model For Practice (Indianapolis: Centre Nursing Press, 2005);  McGrath, “The challenges of caring,” op. cit.

[7] C. V. Little, “Technological competence as a fundamental structure of learning in critical care nursing: a phenomenological study,” J. Clin. Nurs., 9:391-399, 2000, at pp. 398, 396.

[8] See E. A. McConnell, “The impact of machines on the work of critical care nurses,” Crit. Care Nurs. Q., 12:45-52, 1990, at p. 51; D. Pelletier, et al., “The impact of the technological care environment on the nursing role,” Int. J. Tech. Assess. Health Care, 12:358-366, 1996.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

You Touch Me

Etymologically, the word “touch” (from the Old French touchier) is a semantic cornucopia.  In English, of course, common usage embraces dual meanings.  We make tactile contact, and we receive emotional contact.  The latter meaning is usually passively rendered, in the manner of receiving a gift:  we are the beneficiary of someone else’s emotional offering; we are “touched” by a person’s words, gestures, or deeds.  The duality extends to the realm of healthcare:  as patients, we are touched physically by our physicians (or other providers) but, if we are fortunate, we are also touched emotionally by their kindness, concern, empathy, even love.  Here the two kinds of touching are complementary.  We are examined (and often experience a measure of contact comfort through the touch) and then comforted by the physician’s sympathetic words; we are touched by the human contact that follows from physical touch.

For nurses, caregiving as touching and being touched has been central to professional identity.  The foundations of nursing as a modern “profession” were laid down on the battlefields of Crimea and the American South during the mid-nineteenth century.  Crimean and Civil War nurses could not “treat” their patients, but they “touched” them literally and figuratively and, in so doing, individualized their suffering.  Their nursing touch was amplified by the caring impulse of mothers:  they listened to soldiers’ stories, sought to keep them warm, and especially sought to nourish them, struggling to pry their food parcels away from corrupt medical officers.  In the process, they formulated a professional ethos that, in privileging patient care over hospital protocol, was anathema to the professionalism associated with male medical authority.[1]

This alternative, comfort-based vision of professionalism is one reason, among others, that nursing literature is more nuanced than medical literature in exploring the phenomenology and dynamic meanings of touch.  It has fallen to nursing researchers to isolate and appraise the tactile components of touch (such as duration, location, intensity, and sensation) and also to differentiate between comforting touch and the touch associated with procedures, i.e., procedural touch.[2]  Buttressing the phenomenological viewpoint of Husserl and Merleau-Ponty with recent neurophysiologic research, Catherine Green has argued that nurse-patient interaction, with its “heavily tactile component,” promotes an experiential oneness:  it “plunges the nurse into the patient situation in a direct and immediate way.”  To touch, she reminds us, is simultaneously to be touched, so that the nurse’s soothing touch not only promotes deep empathy for the patient’s plight but actually “constitutes” the nurse herself (or himself) in her (or his) very personhood.[3]  Other nurse researchers question the intersubjective convergence presumed by Green’s rendering.  A survey of hospitalized patients, for example, documents that some patients are ambivalent toward the nurse’s touch, since for them it signifies not only care but also control.[4]

After World War II, the rise of sophisticated monitoring equipment in hospitals pulled American nursing away from hands-on, one-on-one bedside nursing.  By the 1960s, hospital nurses, no less than physicians, were “proceduralists” who relied on cardiac and vital function monitors, electronic fetal monitors, and the like for “data” on the patients they “nursed.”  They monitored the monitors and, in the eyes of educators critical of this turn of events, especially psychiatric nurses, had become little more than monitors themselves.

As the historian Margarete Sandelowski has elaborated, this transformation of hospital nursing had both an upside and a downside.  It elevated the status of nurses by aligning them with postwar scientific medicine in all its burgeoning technological power.  Nurses, the skilled human monitors of the machines, were key players on whom hospitalized patients and their physicians increasingly relied.  In the hospital setting, they became “middle managers,”[5] with command authority over their wards.  Those nurses with specialized skills – especially those who worked in the newly established intensive care units (ICUs) – were at the top of the nursing pecking order.  They were the most medical of the nurses, trained to diagnose and treat life-threatening conditions as they arose.  As such, they achieved a new collegial status with physicians, the limits of which were all too clear.  Yes, physicians relied on nurses (and often learned from them) in the use of the new machines, but they simultaneously demeaned the “practical knowledge” that nurses acquired in the service of advanced technology – as if educating and reassuring patients about the purpose of the machines, maintaining them (and recommending improvements to manufacturers), and utilizing them without medical supervision were things any minimally intelligent person could do.

A special predicament of nursing concerns the impact of monitoring and proceduralism on a profession whose historical raison d’être was hands-on caring, first on the battlefields and then at the bedside.  Self-evidently, nurses with advanced procedural skills had to relinquish that most traditional of nursing functions: the laying on of hands.  Consider hospital-based nurses who worked full-time as x-ray technicians and microscopists in the early 1900s; who, beginning in the 1930s, monitored  polio patients in their iron lungs; who, in the decades following World War II, performed venipuncture as full-time IV therapists; and who, beginning in the 1960s, diagnosed and treated life-threatening conditions in the machine-driven ICUs.  Obstetrical nurses who, beginning in the late 1960s, relied on electronic fetal monitors to gauge the progress of labor and who, on detecting “nonreassuring” fetal heart rate patterns, initiated oxygen therapy or terminated oxytocin infusions – these “modern” OB nurses were worlds removed from their pre-1940s forebears, who monitored labor with their hands and eyes in the patient’s own home.  Nursing educators grew concerned that, with the growing reliance on electronic metering, nurses were “literally and figuratively ‘losing touch’ with laboring women.”[6]

Nor did the dilemma for nurses end with the pull of machine-age monitoring away from what nursing educators long construed as “true nursing.”  It pertained equally to the compensatory efforts to restore the personal touch to nursing in the 1970s and 80s.  This is because “true nursing,” as understood by Florence Nightingale and several generations of twentieth-century nursing educators, fell back on gendered touching; to nurse truly and well was to deploy the feminine touch of caring women.  If “losing touch” through technology was the price paid for elevated status in the hospital, then restoring touch brought with it the re-gendering (and hence devaluing) of the nurse’s charge:  she was, when all was said and done, the womanly helpmate of physicians, those masculine (or masculinized) gatekeepers of scientific medicine in all its curative glory.[7]  And yet, in the matter of touching and being touched, gender takes us only so far.  What then of male nurses, who insist on the synergy of masculinity, caring, and touch?[8]  Is their touch ipso facto deficient in some essential ingredient of true nursing?

As soon as we enter the realm of soothing touch, with its attendant psychological meanings, we encounter a number of binaries.  Each pole of a binary is a construct, an example of what the sociologist Max Weber termed an “ideal type.”  The question-promoting, if not questionable, nature of these constructs only increases their heuristic value.  They give us something to think about.  So we have “feminine” and “masculine” touch, as noted above.  But we also have the nurse’s touch and, at the other pole, the physician’s touch.  In the gendered world of many feminist writers, this binary replicates the gender divide, despite the historical and contemporary reality of women physicians and male nurses.

But the binary extends to women physicians themselves.  In their efforts to gain entry to the world of male American medicine, female medical pioneers adopted two radically different strategies.  At one pole, we have the touch-comfort-sympathy approach of Elizabeth Blackwell, which assigned women their own feminized domain of practice (child care, nonsurgical obstetrics and gynecology, womanly counseling on matters of sanitation, hygiene, and prevention).  At the opposite pole, we have the research-oriented, scientific approach of Mary Putnam Jacobi and Marie Zakrzewska, which held that women physicians must be physicians in any and all respects.  Only with state-of-the-art training in the medical science (e.g., bacteriology) and treatments (e.g., ovariotomy) of the day, they held, would women docs achieve what they deserved:  full parity with medical men.  The binary of female physicians as extenders of women’s “natural sphere” versus female physicians as physicians pure and simple runs through the second half of the nineteenth century.[9]

Within medicine, we can perhaps speak of the generalist touch (analogous to the generalist gaze[10]) that can be juxtaposed with the specialist touch.  Medical technology – especially tools that amplify the physician’s senses – invites another binary.  There is the pole of direct touch and the pole of touch mediated by instrumentation.  This binary spans the divide between “direct auscultation,” with the physician’s ear on the patient’s chest, and “mediate auscultation,” with the stethoscope linking (and, for some nineteenth-century patients, coming between) the physician’s ear and the patient’s chest.

Broader than any of the foregoing is the binary that pushes beyond the framework of comfort care per se.  Consider it a meta-binary.  At one pole is therapeutic touch (TT), whose premise of a preternatural human energy field subject to disturbance and hands-on (or hands-near) remediation is nothing if not a recrudescence of Anton Mesmer’s “vital magnetism” of the late 18th century, with the TT therapist (usually a nurse) taking the role of Mesmer’s magnétiseur.[11]  At the opposite pole is transgressive touch.  This is the pole of boundary violations typically, though not invariably, associated with touch-free specialties such as psychiatry and psychoanalysis.[12]  Transgressive touch signifies inappropriately intimate, usually sexualized, touch that violates the boundaries of professional caring and results in the patient’s dis-comfort and dis-ease, sometimes to the point of leaving the patient traumatized, i.e., “touched in the head.”  It also signifies the psychological impairment of the therapist, who, in another etymologically just sense of the term, may be “touched,” given his or her gross inability to maintain a professional treatment relationship.

These binaries invite further scrutiny, less on account of the extremes than of the shades of grayness that span each  continuum.  Exploration of touch is a messy business, a hands-on business, a psycho-physical business.  It may yield important insights but perhaps only fitfully, in the manner of – to invoke a meaning that arose in the early nineteenth century – touch and go.


[1] See J. E. Schultz, “The inhospitable hospital: gender and professionalism in civil war medicine,” Signs, 17:363-392, 1992.

[2]  S. J. Weiss, “The language of touch,” Nurs. Res., 28:76-80, 1979; S. J. Weiss, “Psychophysiological effects of caregiver touch on incidence of cardiac dysrhythmia,” Heart and Lung, 15:494-505, 1986; C. A. Estabrooks, “Touch in nursing practice: a historical perspective: 1900-1920,” J. Nursing Hist., 2:33-49, 1987; J. S. Mulaik, et al., “Patients’ perceptions of nurses’ use of touch,” W. J. Nursing Res., 13:306-323, 1991.

[3] C. Green, “Philosophic reflections on the meaning of touch in nurse-patient interactions,” Nurs. Phil., 14:242-253, 2013; quoted at pp. 250-251.

[4] Mulaik, et al., “Patients’ perceptions of nurses’ use of touch,” op. cit., pp. 317-318.

[5] “Middle managers” is the characterization of the nursing historian Barbara Melosh, in “Doctors, patients, and ‘big nurse’: work and gender in the postwar hospital,” in E. C. Lagemann, ed., Nursing History: New Perspectives, New Possibilities (NY: Teachers College Press, 1983), pp. 157-179.

[6] M. Sandelowski, Devices and Desires:  Gender, Technology, and American Nursing (Chapel Hill: University of North Carolina Press, 2000), p. 166.

[7] On the revalorization of the feminine in nursing in the Nursing Theory Movement of the 70s and 80s, see Sandelowski, Devices and Desires, pp. 131-134.

[8] See R. L. Pullen, et al., “Men, caring, & touch,”  Men in Nursing, 7:14-17, 2009.

[9] The work of Regina Morantz-Sanchez is especially illuminating of this binary and the major protagonists at the two poles.  See R. Morantz, “Feminism, professionalism, and germs: the thought of Mary Putnam Jacobi and Elizabeth Blackwell,” American Quarterly, 34:459-478, 1982, with a slightly revised version of the paper in R. Morantz-Sanchez, Sympathy and Science: Women Physicians in American Medicine (Chapel Hill: University of North Carolina Press, 2000 [1985]), pp. 184-202.

[10] I have written about the “generalist gaze” in P. E. Stepansky, The Last Family Doctor:  Remembering my Father’s Medicine (Montclair, NJ: Keynote Books, 2011), pp. 62-66, and more recently in P. E. Stepansky, “When generalist values meant general practice: family medicine in post-WWII America” (precirculated paper, American Association for the History of Medicine, Atlanta, GA, May 16-19, 2013).

[11] Therapeutic touch was devised and promulgated by the nursing educator Dolores Krieger in publications of the 1970s and 80s, e.g., “Therapeutic touch:  the imprimatur of nursing,” Amer. J. Nursing, 75:785-787, 1975; The Therapeutic Touch (NY: Prentice Hall, 1985); and Living the Therapeutic Touch (NY:  Dodd, Mead, 1987).  I share the viewpoint of Therese Meehan, who sees the technique as a risk-free nursing intervention capable of potentiating a powerful placebo effect (T. C. Meehan, “Therapeutic touch as a nursing intervention,” J. Advanced Nursing, 28:117-125, 1998).

[12] For a fairly recent examination of transgressive touch and its ramifications, see G. O. Gabbard & E. P. Lester, Boundary Violations in Psychoanalysis (Arlington, VA: Amer. Psychiatric Pub., 2002). 

Copyright © 2013 by Paul E. Stepansky.  All rights reserved.

The Times They Are a-Changin’: Trends in Medical Education

Medical educators certainly have their differences, but one still discerns an emerging consensus about the kind of changes that will improve healthcare delivery and simultaneously re-humanize physician-patient encounters.  Here are a few of the most progressive trends in medical education, along with brief glosses that serve to recapitulate certain themes of previous postings.

Contemporary medical training stresses the importance of teamwork and militates against the traditional narcissistic investment in solo expertise.  Teamwork, which relies on the contributions of nonphysician midlevel providers, works against the legacy of socialization that, for many generations, rendered physicians “unfit” for teamwork.  The trend now is to re-vision training so that the physician becomes fit for a new kind of collaborative endeavor.  It is teamwork, when all is said and done, that “transfers the bulk of our work from the realm of guesswork and conjecture to one in which certainty and exactitude may be at least approached.”  Must group practice militate against personalized care?  Perhaps not. Recently, medical groups large and small have been enjoined to remember that “a considerable proportion of the physician’s work is not the practice of medicine at all.  It consists of counseling, orienting, extricating, encouraging, solacing, sympathizing, understanding.”

Contemporary medical training understands that the patient him- or herself has become, and by rights ought to be, a member of the healthcare team.  Medical educators conceded long ago that patients, in their own best interests, “should know something about the human body.”  Now we have more concrete expressions of this requirement, viz., that if more adequate teaching of anatomy and physiology were provided in secondary schools, “physicians will profit and patients will prosper.”  “Just because a man is ill,” notes one educator, “is no reason why he should stop using his mind,” especially as “he [i.e., the patient] is the important factor in the solution of his problem, not the doctor.”  For many educators the knowledgeable patient is not only a member of the “team,” but the physician’s bona fide collaborator.  They assume, that is, that physician and patient “will be able to work together intelligently.”  Working together intelligently suggests a “frank cooperation” in which physician and patient alike have “free access to all outside sources of help and expert knowledge.”  It also means recognizing, without prejudice or personal affront, that the patient’s “inalienable right is to consult as many physicians as he chooses.”  Even today, an educator observes, “doctors have too much property interest in their patients,” despite the fact that patients find their pronouncements something less than, shall we say, “oracular.”  Contemporary training inherits the mantle of the patient rights revolution of the 1970s and 80s.  One educator speaks for many in reiterating that

It is the patient who must decide the validity of opinion from consideration of its source and probability.  If the doctor’s opinion does not seem reasonable, or if the bias of it, due to temperament or personal and professional experience is obvious, then it is well for the patient to get another opinion, and the doctor has no right to be incensed or humiliated by such action.

Contemporary medical training stresses the importance of primary care values that are lineal descendants of old-style general practice.  This trend grows out of the realization that a physician “can take care of a patient without caring for him,” and that the man or woman publicly considered a “good doctor” is invariably the doctor who will “find something in a sick person that aroused his sympathy, excited his admiration, or moved his compassion.”  Optimally, commentators suggest, multispecialty and subspecialty groups would retain their own patient-centered generalists – call them, perhaps, “therapeutists” – to provide integrative patient care beyond diagnostic problem-solving and even beyond the conventional treatment modalities of the group.  The group-based therapeutist, while trained in the root specialty of his colleagues, would also have specialized knowledge of alternative treatments outside the specialty.  He would, for example, supplement familiarity with mainstream drug therapies with a whole-patient, one might say “wholesome,” distrust of drugs.

Contemporary training finally recognizes the importance of first-hand experience of illness in inculcating the values that make for “good doctoring.”  Indeed, innovative curricula now land medical students in the emergency rooms and clinics with (feigned) symptoms and histories that invite discomfiting and sometimes lengthy interventions.  Why has it taken educators so long to enlarge the curriculum in this humanizing manner?  If, as one educator notes, “It is too much to ask of a physician that he himself should have had an enigmatic illness,” it should still be a guiding heuristic that “any illness makes him a better doctor.”  Another adds:  “It is said that an ill doctor is a pathetic sight; but one who has been ill and has recovered has had an affective experience which he can utilize to the advantage of his patients.”

The affective side of a personal illness experience may entail first-hand experience of medicine’s dehumanizing “hidden curriculum.”  Fortunate the patient whose physician has undergone his or her own medical odyssey, so that life experience vivifies the commonplace reported by one seriously ill provider:  “I felt I had not been treated like a human being.”  A physician-writer who experienced obscure, long-term infectious illness early in his career and was shunted from consultant to consultant understands far better than healthy colleagues that physicians “are so prone to occupy themselves with the theoretical requirements of a case that they lose sight entirely of the human being and his life story.”  Here is the painful reminiscence of another ill physician of more literary bent:

There had been no inquiry of plans or prospects, no solicitude for ambitions or desires, no interest in the spirit of the man whose engine was signaling for gas and oil.  That day I determined never to sentence a person on sight, for life or to death.

Contemporary medical training increasingly recognizes that all medicine is, to one degree or another, psychiatric medicine.  Clinical opinions, educators remind us, can be truthful but still contoured to the personality, especially the psychological needs, of the patient.  Sad to say, the best clinical educators are those who know colleagues, whatever their specialty, who either “do not appreciate that constituent of personality which psychologists call the affects . . . and the importance of the role which these affects or emotions play in conditioning [the patient’s] destiny, well or ill, or they refuse to be taught by observation and experience.”   This realization segues into the role of psychiatric training in medical education, certainly for physicians engaged in primary care, but really for all physicians.  Among other things, such training “would teach him [or her] that disease cannot be standardized, that the individual must be considered first, then the disease.”  Even among patients with typical illnesses, psychiatric training can help physicians understand idiosyncratic reactions to standard treatment protocols.  It aids comprehension  of the individual “who happens to have a very common disease in his own very personal manner.”

_____________

These trends encapsulate the reflections and recommendations of progressive medical educators responsive to the public demand for more humane and humanizing physicians.  They also speak to the mounting burnout of physicians – especially primary care physicians – who, in the cost-conscious, productivity-driven, and regulatory climate of our time, find it harder than ever to practice patient-centered medicine.  But are these trends really so contemporary?  I confess to a deception.  The foregoing paraphrases, quotations, and recommendations are not from contemporary educators at all.  They are culled from the popular essays of a single physician, the pioneer neurologist Joseph Collins, all of which were published in Harper’s Monthly between 1924 and 1929.[1]

Collins is a fascinating figure.  An 1888 graduate of New York University Medical College, he attended medical school and began his practice burdened with serious, sometimes debilitating, pulmonary and abdominal symptoms that had him run the gauntlet of consultant diagnoses – pneumonia, pulmonary tuberculosis, “tuberculosis of the kidney,” chronic appendicitis, even brain tumor.  None of these authoritative pronouncements was on the mark, but taken together they left Collins highly critical of his own profession and pushed him in the direction of holistic, collaborative, patient-centered medicine.  After an extended period of general practice, he segued into the emerging specialty of neurology (then termed neuropsychiatry) and, with his colleagues Joseph Fraenkel and Pearce Bailey, founded the New York Neurological Institute in 1909.  Collins’s career as a neurologist never dislodged his commitment to  generalist patient-centered care. Indeed, the neurologist, as he understood the specialty in 1911, was the generalist best suited to treat chronic disease of any sort.[2]

Collins’s colorful, multifaceted career as a popular medical writer and literary critic is beyond the scope of this essay.[3]  I use him here to circle back to a cardinal point of previous writings.  “Patient-centered/relationship-centered care,” humanistic medicine, empathic caregiving, behavioral adjustments to the reality of patients’ rights – these additives to the medical school curriculum are as old as they are new.  What is new is the relatively recent effort to cultivate such sensibilities through curricular innovations.  Taken together, public health, preventive medicine, childhood vaccination, and modern antibiotic therapy have (mercifully) cut short the kind of experiential journey that for Collins secured the humanistic moorings of the biomedical imperative.  Now medical educators rely on communication skills training, empathy-promoting protocols, core-skills workshops, and seminars on “The Healer’s Art” to close the circle, rescue medical students from evidence-based and protocol-driven overkill, and bring them back in line with Collins’s hard-won precepts.

It is not quite right to observe that these precepts apply equally to Collins’s time and our own.  They give expression to the care-giving impulse, to the ancient injunction to cure through caring (the Latin curare) that, in all its ebb and flow, whether as figure or ground, weaves through the fabric of medical history writ large.  Listen to Collins one final time as he expounds his philosophy of practice in 1926:

It would be a wise thing to devote a part of medical education to the mind of the physician himself, especially as it concerns his patients.  For the glories of medical history are the humanized physicians.  Science will always fall short; but compassion covereth all.[4]


[1] Joseph Collins, “The alienist in court,” Harper’s Monthly, 150:280-286, 1924; Joseph Collins, “A doctor looks at doctors,” Harper’s Monthly, 154:348-356, 1926; Joseph Collins, “Should doctors tell the truth?”, Harper’s Monthly, 155:320-326, 1927;  Joseph Collins, “Group practice in medicine,” Harper’s Monthly, 158:165-173, 1928;  Joseph Collins, “The patient’s dilemma,” Harper’s Monthly, 159:505-514, 1929.   I have also consulted two of Collins’s popular collections that make many of the same points:  Letters to a Neurologist, 2nd series (NY: Wood, 1910) and The Way with the Nerves: Letters to a Neurologist on Various Modern Nervous Ailments, Real and Fancied, with Replies Thereto Telling of their Nature and Treatment (NY: Putnam, 1911).

[2] Collins, The Way with Nerves, p. 268.

[3] Collins’s review of James Joyce’s Ulysses, the first by an American, was published  in The New York Times on May 28, 1922.  His volume The Doctor Looks at Literature: Psychological Studies of Life and Literature (NY: Doran, 1923) appeared the following year.

[4] Collins, “A doctor looks at doctors,” p. 356.  Collins’s injunction is exemplified in “The Healer’s Art,” a course developed by Dr. Rachel Naomi Remen over the past 22 years and currently taught annually in 71  American medical colleges as well as medical colleges in seven other countries.  See David Bornstein, “Medicine’s Search for Meaning,” posted for The New York Times/Opinionator on September 18, 2013 (http://opinionator.blogs.nytimes.com/2013/09/18/medicines-search-for-meaning/?_r=0).

Copyright © 2013 by Paul E. Stepansky.  All rights reserved.