Category Archives: Psychiatry in Medicine

Psychotropic Serendipities

Serendipities abound in the history of medicine, in our own time no less than in the past.  In the 15 years that followed the end of World War II, a period of special interest to me, the discovery of what we now consider modern psychiatric (or psychotropic) drugs is a striking case in point.

Researchers in the final years of the war and immediately thereafter were hardly looking for psychotropics.  They were looking for other things:  improved antihistamines; preservatives that would permit penicillin to hold up during transport to troops in Europe and Asia; and the development of antibiotics effective against penicillin-resistant microorganisms like the tubercle bacilli that caused tuberculosis.

When Frank Berger, a Czechoslovakian bacteriologist, fled to England in 1939, he found work as a refugee camp physician.  Then, in 1943, he was hired by a government laboratory in London and joined in the work that engaged so many British scientists of the time:  the purification and industrial production of penicillin.  Berger’s particular assignment was the search for a penicillin preservative; he was especially interested in finding an agent that would prevent the breakdown of penicillin during shipment by penicillinase, an enzyme produced by gram-negative bacteria.  And with the synthesis of mephenesin in 1945, he achieved success – and then some.  Mephenesin not only preserved penicillin but, in small-scale animal trials on toxicity, revealed something else:  On injection into mice, rats, and guinea pigs, the preservative produced deep muscle relaxation, a sleep-like state that Berger described in 1946 as “tranquillization.”[1]

Berger emigrated to the United States in 1947, and after a brief stint at the University of Rochester Medical School, became Director of Laboratories at Carter-Wallace in Cranbury, New Jersey.  There, joined by the chemist Bernard Ludwig, he developed a more potent and slowly metabolizing form of mephenesin.  The drug was meprobamate, the first minor tranquilizer, for which a patent was finally granted in 1955.  Released by Carter-Wallace as Miltown and by Wyeth (a licensee) as Equanil, it took the American market by storm.  In 1956, it leaped from less than 1% to nearly 70% of new tranquilizer prescriptions; in 1957 more than 35 million prescriptions were filled, the equivalent of roughly one per second.  Meprobamate single-handedly transformed American medicine by transmuting the everyday stresses and strains of Everyman (and Everywoman) into pharmacologically treatable anxiety.  For general practitioners in particular it was a godsend.  “If generalists could not psychoanalyze their troubled patients,” the historian David Herzberg has observed, “they could at least ease worries with a pill, possibly preventing a minor condition from worsening into serious mental illness.”[2]  Not bad for a penicillin preservative.

In 1952, at the very time Berger was observing the “tranquillization” of small rodents injected with meprobamate, Henri-Marie Laborit, a French naval surgeon working at the Val-de-Grâce military hospital in Paris, published his first article on the usefulness of chlorpromazine (CPZ), a chlorinated derivative of the antihistamine promazine, in surgical practice.  Laborit, who was working on the development of “artificial hibernation” as an anesthetic technique, found that the drug not only calmed surgical patients prior to the administration of anesthesia, but also prevented them from lapsing into shock during and after their operations.  The drug had been synthesized by the Rhône-Poulenc chemist Paul Charpentier at the end of 1951.  Charpentier was searching for an improved antihistamine, but he quickly saw the drug’s possible usefulness as a potentiator of general anesthesia,[3] which indeed it proved to be.

Impressed with the drug’s effectiveness (in combination with other drugs as a “lytic cocktail”) in inducing relaxation – what he termed “euphoric quietude” – and in preventing shock, Laborit encouraged his colleague Joseph Hamon to try it on psychiatric patients.  It was subsequently taken up by the French psychiatrists Jean Delay and Pierre Deniker, who administered it to psychiatric patients at the Sainte-Anne mental hospital in Paris.  In six journal articles published in the spring and summer of 1952, they reported encouraging results, characterizing the slowed motor activity and emotional indifference of their patients as a “neuroleptic syndrome” (from the Greek for “seizing the nerve”).  Thus was born, in retrospect, the first major tranquilizer, a drug far more effective than its predecessors (including morphine and scopolamine in combination) in controlling extreme agitation and relieving psychotic delusions and hallucinations.[4]

But only in retrospect.  At the time of the preliminary trials, the primary application of chlorpromazine remained unclear.  Rhône-Poulenc conducted clinical trials for a number of possible applications of the drug: to induce “hibernation” during surgery; as an anesthetic potentiator; as an antinausea drug (antiemetic) for seasickness; and as a treatment for burns, stress, infections, obesity, Parkinson’s disease, and epilepsy.  When Smith, Kline & French became the American licensee of the drug in early 1953, it planned to market it to American surgeons and psychiatrists alike, and it also took pains to license the drug as an antiemetic.  Only at the end of 1953 did it recognize the primary psychiatric use of the drug, which it released in May 1954 as Thorazine.

Of course, the birth of modern antibiotic therapy begins with penicillin – the first of the wartime “miracle drugs.”  And a miracle drug it was, with an antibacterial spectrum that encompassed strep and staph infections, pneumonia, syphilis, and gonorrhea.  But the foregoing infections were all caused by penicillin-susceptible organisms.  Penicillin did not touch most gram-negative bacteria, nor did it touch the acid-fast tubercle bacillus that caused tuberculosis.

The first wonder drug effective against TB was streptomycin, an antibiotic derived from a soil-dwelling actinomycete, discovered by Selman Waksman and his doctoral student Albert Schatz at the Rutgers Agricultural Experiment Station in 1943.  Eight years later, in 1951, chemists working at Bayer Laboratories in Wuppertal, Germany, at Hoffmann-La Roche in Basel, Switzerland, and at the Squibb Institute for Medical Research in New Brunswick, New Jersey simultaneously discovered a drug that was not only more effective in treating TB than streptomycin but also easier to administer and less likely to have serious side effects.  It was isoniazid, the final wonder drug in the war against TB.  In combination with streptomycin, it was more effective than either drug alone and less likely to elicit treatment-resistant strains of the tubercle bacillus.

But here’s the thing:  A side effect of isoniazid (and, still more, of its derivative iproniazid) was a mood-elevating (or, in the lingo of the day, “psycho-stimulative”) effect.  Researchers conducting trials at Baylor University, the University of Minnesota, and Spain’s University of Cordoba reached the same conclusion:  The mood elevation observed among TB patients pointed to psychiatry as the primary site of the drug’s use.  Back in New York, Nathan Kline, an assistant professor of psychiatry at Columbia’s College of Physicians and Surgeons, learned about the “psycho-stimulative” effect of iproniazid from a report about animal experiments conducted at the Warner-Lambert Research Laboratories in Morris Plains, New Jersey.  Shortly thereafter, he began his own trial of iproniazid with patients at Rockland State Hospital in Orangeburg, New York, and published a report of his findings in 1957.

Iproniazid had been brought to market in 1952 as an antitubercular agent (Marsilid), and within a year of Kline’s report it had been given to over 400,000 depressed patients.  It was withdrawn from the U.S. market in 1961 owing to an alleged linkage to jaundice and liver damage.  But iproniazid retains its place of honor among psychotropic serendipities:  It was the first of the monoamine oxidase inhibitors (MAOIs), potent antidepressants of which contemporary formulations (Marplan, Nardil) are used to treat atypical depressions, i.e., depressions refractory to newer and more benign antidepressants like the omnipresent SSRIs.[5]

Nathan Kline was likewise at hand to steer another ostensibly nonpsychotropic drug into psychiatric usage.  In 1952, reports of Rauwolfia serpentina, a plant root used in India for hypertension (high blood pressure), snakebite, and “insanity,” reached the West and led to interest in the root’s potential as an antihypertensive.  A year later, chemists at the New Jersey headquarters of the Swiss pharmaceutical firm Ciba (later Ciba-Geigy and now Novartis) isolated an active alkaloid, reserpine, from the root, and Kline, ever ready with the mental patients at Rockland State Hospital, obtained a sample to try on the hospital’s patients.

Kline’s results were encouraging.  In short order, he was touting  reserpine as an “effective sedative for use in mental hospitals,” a finding reaffirmed later that year at a symposium at Ciba’s American headquarters in Summit, New Jersey, where staff pharmacologist F. F. Yonkman first used the term “tranquilizer” to characterize the drug’s mixture of sedation and well-being.[6]  As a major tranquilizer, reserpine never caught on like chlorpromazine, even though, throughout the 1950s, it “was far more frequently mentioned in the scientific literature than chlorpromazine.”[7]

So there we have it: the birth of modern psychopharmacology in the postwar era from research into penicillin preservatives, antihistamines, antibiotics, and antihypertensives.  Of course, serendipities operate in both directions:  drugs initially released as psychotropics sometimes fail miserably, only to find their worth outside of psychiatry.  We need only remember the history of thalidomide, released by the German firm Chemie Grünenthal in 1957 as a sedative effective in treating anxiety, tension states, insomnia, and nausea.  This psychotropic found its initial market among pregnant women who took the drug to relieve first-trimester morning sickness.  Unbeknownst to the drug manufacturer, the drug crossed the placental barrier and, tragically, compromised the pregnancies of many of these women.  Users of thalidomide delivered grossly deformed infants with truncated, flipper-like limbs (phocomelia), around 10,000 in all in Europe and Japan.  Roughly 40% of these infants died around the time of birth.

This sad episode is well known among historians, as is the heroic resistance of the FDA’s Frances Kelsey, who in 1960 fought off pressure from FDA administrators and executives at Richardson-Merrell, the American distributor, to release the drug in the U.S.  Less well known, perhaps, is the relicensing of the drug by the FDA in 1998 (as Thalomid) for a totally nonpsychotropic usage: the treatment of erythema nodosum leprosum, an inflammatory complication of leprosy.  Prescribed off-label, it also proved helpful in treating AIDS wasting syndrome.  And beginning in the 2000s, it was used in combination with another drug, dexamethasone, to treat multiple myeloma (a cancer of the plasma cells).  It received FDA approval as an anticancer agent in 2006.[8]

Seen thusly, serendipities are often rescue operations, the retrieving and reevaluating of long-abandoned industrial chemicals and of medications deemed inadequate for their intended purpose.  Small wonder that Librium, the first of the benzodiazepine class of minor tranquilizers, the drug that replaced meprobamate as the GP’s drug of choice in 1960, began its life as a new dye (benzheptoxidiazine) synthesized by the German chemists K. von Auwers and F. von Meyenburg in 1891.  In the 1930s the Polish-American chemist Leo Sternbach took up the chemical and synthesized related compounds in the continuing search for new “dyestuffs.”  Then, 20 years later, Sternbach, now a chemist at Hoffmann-La Roche in Nutley, New Jersey, returned to these compounds one final time to see if any of them might have psychiatric applications.  He found nothing of promise, but in 1957 a coworker, undertaking a spring cleaning of the lab, found a variant that Sternbach had set aside.  It turned out to be Librium.[9]  All hail to the resourceful minds that return to the dyes of yesteryear in search of the psychotropics of tomorrow – and to those who clean their labs with eyes wide open.


[1] F. M. Berger & W. Bradley, “The pharmacological properties of α:β-dihydroxy-γ-(2-methylphenoxy)-propane (Myanesin),” Brit. J. Pharmacol. Chemother., 1:265-272, 1946, at p. 265.

[2] D. Herzberg, Happy Pills in America: From Miltown to Prozac (Baltimore: Johns Hopkins, 2009), p. 35.  Cf. A. Tone, The Age of Anxiety: A History of America’s Turbulent Affair with Tranquilizers  (NY:  Basic Books, 2009), pp. 90-91.

[3] P. Charpentier, et al., “Recherches sur les diméthylaminopropyl-N phénothiazines substituées,” Comptes Rendus de l’Académie des Sciences, 235:59-60, 1952.

[4] On the discovery and early uses of chlorpromazine, see D. Healy, The Creation of Psychopharmacology (Cambridge: Harvard, 2002), pp. 77-101; F. López-Muñoz, et al., “History of the discovery and clinical introduction of chlorpromazine,” Ann. Clin. Psychiat., 17:113-135, 2005; and T. A. Ban, “Fifty years chlorpromazine: a historical perspective,” Neuropsychiat. Dis. & Treat., 3:495-500, 2007.

[5] On the development and marketing of isoniazid and iproniazid, see H. F. Dowling, Fighting Infection: Conquests of the Twentieth Century (Cambridge: Harvard, 1977), p. 168; F. Ryan, The Forgotten Plague: How the Battle Against Tuberculosis was Won and Lost (Boston: Little, Brown, 1992), p. 363; F. López-Muñoz, et al., “On the clinical introduction of monoamine oxidase inhibitors, tricyclics, and tetracyclics. Part II: tricyclics and tetracyclics,” J. Clinical Psychopharm., 28:1-4, 2008; and Tone, Age of Anxiety, pp. 128-29.

[6] E. S. Valenstein, Blaming the Brain: The Truth about Drugs and Mental Health  (NY:  Free Press, 1998), p. 69; D. Healy, The Antidepressant Era (Cambridge: Harvard, 1997),  pp. 59-70;  D. Healy, Creation of Psychopharmacology, pp. 103-05.

[7] Healy, Creation of Psychopharmacology, p. 106.

[8] P. J. Hilts provides a readable overview of the thalidomide crisis in Protecting America’s Health:  The FDA, Business, and One Hundred Years of Regulation (NY:  Knopf, 2003), pp. 144-65.  On the subsequent relicensing of thalidomide for the treatment of leprosy in 1998 and its extensive off-label use, see S. Timmermans & M. Berg, The Gold Standard:  The Challenge of Evidence-Based Medicine and Standardization in Health Care. (Phila: Temple University Press, 2003), pp. 188-92.

[9] On the discovery of Librium, see Valenstein, Blaming the Brain, pp. 54-56; A. Tone, “Listening to the past: history, psychiatry, and anxiety,” Canad. J. Psychiat., 50:373-380, 2005, at p. 377; and Tone, Age of Anxiety, pp. 126-40.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Will It Hurt?

“. . . the children’s population of this century has been submitted progressively as never before to the merciless routine of the ‘cold steel’ of the hypodermic needle.”  —Karl E. Kassowitz, “Psychodynamic Reactions of Children to the Use of Hypodermic Needles” (1958)

Of course, like so much medical technology, injection by hypodermic needle  has a prehistory dating back to the ancient Romans, who used metal syringes with disk plungers for enemas and nasal injections.  Seventeenth- and eighteenth-century physicians extended the sites of entry to the vagina and rectum, using syringes of metal, pewter, ivory, and wood.  Christopher Wren, the Oxford astronomer and architect, introduced intravenous injection in 1657, when he inserted a quill into the patient’s exposed vein and pumped in water, opium, or a purgative (laxative).

But, like so much medical technology, things only get interesting in the nineteenth century.  In the first half of the century, the prehistory of needle injection includes the  work of G. V. Lafargue, a French physician from the commune of St. Emilion.  He treated neuralgic (nerve) pain – his own included – by penetrating the skin with a vaccination lancet dipped in morphine and later by inserting solid morphine pellets under the skin through a large needle hole.  In 1844, the Irish physician Francis Rynd undertook injection by making a small incision in the skin and inserting a fine cannula (tube), letting gravity guide the medication to its intended site.[1]

The leap to a prototype of the modern syringe, in which a glass piston pushes medicine through a metal or glass barrel that ends in a hollow-pointed needle, occurred on two national fronts in 1853.  In Scotland, Alexander Wood, secretary of Edinburgh’s Royal College of Physicians, injected morphine solution directly into his patients in the hope of dulling their neuralgias.  There was a minor innovation and a major one.  Wood used sherry wine as his solvent, believing it would prove less irritating to the skin than alcohol and less likely to rust his instrument than water.  And then the breakthrough:  He administered the liquid morphine through a piston-equipped syringe that ended in a pointed needle.  Near the end of the needle, on one side, was an opening through which medicine could be released when an aperture on the outer tube was rotated into alignment with it.  The syringe was designed and made by the London instrument maker Daniel Ferguson, whose “elegant little syringes,” as Wood described them, were intended to inject iron perchloride (a blood-clotting agent, or coagulant) into skin lesions and birthmarks in the hope of making them less unsightly.  It never occurred to Ferguson that his medicine-releasing, needle-pointed syringes could be used for subcutaneous injection as well.[2]

Across the Channel in the French city of Lyon, the orthopedic surgeon Charles Pravaz employed a piston-driven syringe of his own making to inject iron perchloride into the blood vessels of sheep and horses.  Pravaz was not interested in unsightly birthmarks; he was searching for an effective treatment for aneurysms (enlarged arteries, usually due to weakening of the arterial walls) that he thought could be extended to humans.  Wood was the first in print – his “New Method of Treating Neuralgia by the Direct Application of Opiates to the Painful Points” appeared in the Edinburgh Medical & Surgical Journal in 1855[3] – and, shortly thereafter, he improved Ferguson’s design by devising a hollow needle that could simply be screwed onto the end of the syringe.  Unsurprisingly, then, he has received the lion’s share of credit for “inventing” the modern hypodermic syringe.  Pravaz, after all, was only interested in determining whether iron perchloride would clot blood; he never administered medication through his syringe to animals or people.

Wood and followers like the New York physician Benjamin Fordyce Barker, who brought Wood’s technique to Bellevue Hospital in 1856, were convinced that the injected fluid had a local action on inflamed peripheral nerves.  Wood allowed for a secondary effect through absorption into the bloodstream, but believed the local action accounted for the injection’s rapid relief of pain.  It fell to the London surgeon Charles Hunter to stress that the systemic effect of injectable narcotic was primary.  It was not necessary, he argued in 1858, to inject liquid morphine into the most painful spot; the medicine provided the same relief when injected far from the site of the lesion.  It was Hunter, seeking to underscore the originality of his approach to injectable morphine, especially its general therapeutic effect, who introduced the term “hypodermic” from the Greek compound meaning “under the skin.”[4]

It took time for the needle to become integral to doctors and doctoring.  In America, physicians greeted the hypodermic injection with skepticism and even dread, despite the avowals of patients that injectable morphine provided them with instantaneous, well-nigh miraculous relief from chronic pain.[5]  The complicated, time-consuming process of preparing injectable solutions prior to the manufacture of dissolvable tablets in the 1880s didn’t help matters.  Nor did the trial-and-error process of arriving at something like appropriate doses of the solutions.  But most importantly, until the early twentieth century, very few drugs were injectable.  Through the 1870s, the physician’s injectable arsenal consisted of highly poisonous (in pure form) plant alkaloids such as morphine, atropine (belladonna), strychnine, and aconitine, and, by decade’s end, the vasodilator heart medicine nitroglycerine.  The development of local and regional anesthesia in the mid-1880s relied on the hypodermic syringe for subcutaneous injections of cocaine solution, but as late as 1905, only 20 of the 1,039 drugs in the U.S. Pharmacopoeia were injectable.[6]  The availability of injectable insulin in the early 1920s heralded a new, everyday reliance on hypodermic injections, and over the course of the century, the needle, along with the stethoscope, came to stand in for the physician.  Now, of course, needles and doctors “seem to go together,” with the former signifying “the power to heal through hurting” even as it “condenses the notions of active practitioner and passive patient.”[7]

The child’s fear of needles, always a part of pediatric practice, has generated a literature of its own.  In the mid-twentieth century, in the heyday of Freudianism, children’s needle anxiety gave rise to psychodynamic musings.  In 1958, Karl Kassowitz of Milwaukee Children’s Hospital made the stunningly commonsensical observation that younger children were immature and hence more anxious about receiving injections than older children.  By the time kids were eight or nine, he found, most had outgrown their fear.  Among the less than 30% who hadn’t, Kassowitz gravely counseled, continuing resistance to the needle might represent “a clue to an underlying neurosis.”[8]  Ah, the good old Freudian days.

In the second half of the last century, anxiety about receiving injections was “medicalized” like most everything else, and in the more enveloping guise of BII (blood, injection, injury) phobia, found its way into the fourth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual in 1994.  Needle phobia thereupon became the beneficiary of all that accompanies medicalization – a specific etiology, physical symptoms, associated EKG and stress hormone changes, and strategies of management.  The latter are impressively varied and range across medical, educational, psychotherapeutic, behavioral, cognitive-behavioral, relaxation, and desensitizing approaches.[9]  Recent literature also highlights the vasovagal reflex associated with needle and blood phobia.  Patients confronted with the needle become so anxious that an initial increase in heart rate and blood pressure is followed by a marked drop, as a result of which they become sweaty, dizzy, pallid, nauseated (any or all of the above), and sometimes faint (vasovagal syncope).  Another interesting finding is that needle phobia (especially in its BII variant), along with its associated vasovagal reflex, probably has a genetic component, as there is a much higher concordance within families for BII phobia than for other kinds of phobia.  Researchers who study twins put the heritability of BII phobia at around 48%.[10]

Needle phobia is still prevalent among kids, to be sure, but it has long since matured into a fully grown-up condition.  Surveys find injection phobia in anywhere from 9% to 21% of the general population, and in even higher percentages of select populations, such as U.S. college communities.[11]  A 1995 study by the Dental Fears Research Clinic of the University of Washington found that over a quarter of surveyed students and university employees were fearful of dental injections, with 5% admitting they avoided or canceled dental appointments out of fear.[12]  Perhaps some of these needlephobes bear the scars of childhood trauma.  Pediatricians now urge control of the pain associated with venipuncture and intravenous cannulation (tube insertion) in infants, toddlers, and young children, since there is evidence such procedures can have a lasting impact on pain sensitivity and tolerance of needle pricks.[13]

But people are not only afraid of needles; they also overvalue them and seek them out.  Needle phobia, whatever its hereditary contribution, is a creation of Western medicine.  The surveys cited above come from the U.S., Canada, and England.  Once we shift our gaze to developing countries of Asia and Africa, we behold a different needle-strewn landscape.  Studies attest not only to the high acceptance of the needle but also to its integration into popular understandings of disease.  Lay people in countries such as Indonesia, Tanzania, and Uganda typically want injections; indeed, they often insist on them because injected medicines, which enter the bloodstream directly and (so they believe) remain in the body longer, must be more effective than orally ingested pills or liquids.

The strength, rapid action, and body-wide circulation of injectable medicine – these things make injection the only cure for serious disease.[14]  So valued are needles and syringes in developing countries that most lay people, and even Registered Medical Practitioners in India and Nepal, consider it wasteful to discard disposable needles after only a single use.  And then there is the tendency of people in developing countries to rely on lay injectors (the “needle curers” of Uganda; the “injection doctors” of Thailand; the informal providers of India and Turkey) for their shots.  This has led to the indiscriminate use of  penicillin and other chemotherapeutic agents, often injected without attention to sterile procedure.  All of which contributes to the spread of infectious disease and presents a major headache for the World Health Organization.

The pain of the injection?  Bring it on.  In developing countries, the burning sensation that accompanies many injections signifies curative power.  In some cultures, people also welcome the pain as confirmation that real treatment has been given.[15]  In pain there is healing power.  It is the potent sting of modern science brought to bear on serious, often debilitating disease.  All of which suggests the contrasting worldviews and emotional tonalities collapsed into the fearful and hopeful question that frames this essay:  “Will it hurt?”

[1] On the prehistory of hypodermic injection, see D. L. Macht, “The history of intravenous and subcutaneous administration of drugs,” JAMA, 55:856-60, 1916; G. A. Mogey, “Centenary of Hypodermic Injection,” BMJ, 2:1180-85, 1953; N. Howard-Jones, “A critical study of the origins and early development of hypodermic medication,” J. Hist. Med., 2:201-49, 1947 and N. Howard-Jones, “The origins of hypodermic medication,” Scien. Amer., 224:96-102, 1971.

[2] J. B. Blake, “Mr. Ferguson’s hypodermic syringe,” J. Hist. Med., 15: 337-41, 1960.

[3] A. Wood, “New method of treating neuralgia by the direct application of opiates to the painful points,” Edinb. Med. Surg. J., 82:265-81, 1855.

[4] On Hunter’s contribution and his subsequent vitriolic exchanges with Wood over priority, see Howard-Jones, “Critical study of the origins and early development of hypodermic medication,” op. cit.  Patricia Rosales provides a contextually grounded discussion of the dispute and the committee investigation of Edinburgh’s Royal Medical and Chirurgical Society to which it gave rise.  See P. A. Rosales, A History of the Hypodermic Syringe, 1850s-1920s.  Unpublished doctoral dissertation, Department of the History of Science, Harvard University, 1997, pp. 21-30.

[5] See Rosales, History of Hypodermic Syringe, op. cit., chap. 3, on the early reception of hypodermic injections in America.

[6] G. Lawrence, “The hypodermic syringe,” Lancet, 359:1074, 2002; J. Calatayud & A. Gonsález, “History of the development and evolution of local anesthesia since the coca leaf,” Anesthesiology, 98:1503-08, 2003, at p. 1506; R. E. Kravetz, “Hypodermic syringe,” Am. J. Gastroenterol., 100:2614-15, 2005.

[7] A. Kotwal, “Innovation, diffusion and safety of a medical technology: a review of the literature on injection practices,”  Soc. Sci. Med., 60:1133-47, 2005, at p. 1133.

[8] Kassowitz, “Psychodynamic reactions of children to hypodermic needles,”  op. cit., quoted at p. 257.

[9] Summaries of the various treatment approaches to needle phobia are given in J. G. Hamilton, “Needle phobia: a neglected diagnosis,” J. Fam. Prac., 41:169-75, 1995 and H. Willemsen, et al., “Needle phobia in children: a discussion of aetiology and treatment options,” Clin. Child Psychol. Psychiatry, 7:609-19, 2002.

[10] Hamilton, “Needle phobia,” op. cit.; S. Torgersen, “The nature and origin of common phobic fears,” Brit. J. Psychiatry, 134:343-51, 1979; L-G. Ost, et al., “Applied tension, exposure in vivo, and tension-only in the treatment of blood phobia,” Behav. Res. Ther., 29:561-74, 1991;  L-G. Ost, “Blood and injection phobia: background and cognitive, physiological, and behavioral variables,” J. Abnorm. Psychol., 101:68-74, 1992.

[11] References to these surveys are provided by Hamilton, “Needle phobia,” op. cit.

[12] On the University of Washington survey, see P. Milgrom, et al., “Four dimensions of fear of dental injections,” J. Am. Dental Assn., 128:756-66, 1997 and T. Kaakko, et al., “Dental fear among university students: implications for pharmacological research,” Anesth. Prog., 45:62-67, 1998.  Lawrence Proulx reported the results of the survey in The Washington Post under the heading “Who’s afraid of the big bad needle?” July 1, 1997, p. 5.

[13] R. M. Kennedy, et al., “Clinical implications of unmanaged needle-insertion pain and distress in children,” Pediatrics, 122:S130-S133, 2008.

[14] See Kotwal, “Innovation, diffusion and safety of a medical technology,” op. cit., p. 1136 for references.

[15] S. R. Whyte & S. van der Geest, “Injections: issues and methods for anthropological research,” in N. L. Etkin & M. L. Tan, eds., Medicine, Meanings and Contexts (Quezon City, Philippines: Health Action Information Network, 1994), pp. 137-8.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

The Times They Are a-Changin’: Trends in Medical Education

Medical educators certainly have their differences, but one still discerns an emerging consensus about the kind of changes that will improve healthcare delivery and simultaneously re-humanize physician-patient encounters.  Here are a few of the most progressive trends in medical education, along with brief glosses that serve to recapitulate certain themes of previous postings.

Contemporary medical training stresses the importance of teamwork and militates against the traditional narcissistic investment in solo expertise.  Teamwork, which relies on the contributions of nonphysician midlevel providers, works against the legacy of socialization that, for many generations, rendered physicians “unfit” for teamwork.  The trend now is to re-vision training so that the physician becomes fit for a new kind of collaborative endeavor.  It is teamwork, when all is said and done, that “transfers the bulk of our work from the realm of guesswork and conjecture to one in which certainty and exactitude may be at least approached.”  Must group practice militate against personalized care?  Perhaps not. Recently, medical groups large and small have been enjoined to remember that “a considerable proportion of the physician’s work is not the practice of medicine at all.  It consists of counseling, orienting, extricating, encouraging, solacing, sympathizing, understanding.”

Contemporary medical training understands that the patient him- or herself has become, and by rights ought to be, a member of the healthcare team.  Medical educators conceded long ago that patients, in their own best interests, “should know something about the human body.”  Now we have more concrete expressions of this requirement, viz., that if more adequate teaching of anatomy and physiology were provided in secondary schools, “physicians will profit and patients will prosper.”  “Just because a man is ill,” notes one educator, “is no reason why he should stop using his mind,” especially as “he [i.e., the patient] is the important factor in the solution of his problem, not the doctor.”  For many educators the knowledgeable patient is not only a member of the “team” but the physician’s bona fide collaborator.  They assume, that is, that physician and patient “will be able to work together intelligently.”  Working together intelligently suggests a “frank cooperation” in which physician and patient alike have “free access to all outside sources of help and expert knowledge.”  It also means recognizing, without prejudice or personal affront, that the patient’s “inalienable right is to consult as many physicians as he chooses.”  Even today, an educator observes, “doctors have too much property interest in their patients,” despite the fact that patients find their pronouncements something less than, shall we say, “oracular.”  Contemporary training inherits the mantle of the patient rights revolution of the 1970s and 80s.  One educator speaks for many in reiterating that

It is the patient who must decide the validity of opinion from consideration of its source and probability.  If the doctor’s opinion does not seem reasonable, or if the bias of it, due to temperament or personal and professional experience is obvious, then it is well for the patient to get another opinion, and the doctor has no right to be incensed or humiliated by such action.

Contemporary medical training stresses the importance of primary care values that are lineal descendants of old-style general practice.  This trend grows out of the realization that a physician “can take care of a patient without caring for him,” and that the man or woman publicly considered a “good doctor” is invariably the doctor who will “find something in a sick person that aroused his sympathy, excited his admiration, or moved his compassion.”  Optimally, commentators suggest, multispecialty and subspecialty groups would retain their own patient-centered generalists – call them, perhaps, “therapeutists” – to provide integrative patient care beyond diagnostic problem-solving and even beyond the conventional treatment modalities of the group.  The group-based therapeutist, while trained in the root specialty of his colleagues, would also have specialized knowledge of alternative treatments outside the specialty.  He would, for example, supplement familiarity with mainstream drug therapies with a whole-patient, one might say a “wholesome,” distrust of drugs.

Contemporary training finally recognizes the importance of first-hand experience of illness in inculcating the values that make for “good doctoring.”  Indeed, innovative curricula now place medical students in emergency rooms and clinics with (feigned) symptoms and histories that invite discomfiting and sometimes lengthy interventions.  Why has it taken educators so long to enlarge the curriculum in this humanizing manner?  If, as one educator notes, “It is too much to ask of a physician that he himself should have had an enigmatic illness,” it should still be a guiding heuristic that “any illness makes him a better doctor.”  Another adds:  “It is said that an ill doctor is a pathetic sight; but one who has been ill and has recovered has had an affective experience which he can utilize to the advantage of his patients.”

The affective side of a personal illness experience may entail first-hand experience of medicine’s dehumanizing “hidden curriculum.”  Fortunate the patient whose physician has undergone his or her own medical odyssey, so that life experience vivifies the commonplace reported by one seriously ill provider:  “I felt I had not been treated like a human being.”  A physician-writer who experienced obscure, long-term infectious illness early in his career and was shunted from consultant to consultant understands far better than healthy colleagues that physicians “are so prone to occupy themselves with the theoretical requirements of a case that they lose sight entirely of the human being and his life story.”  Here is the painful reminiscence of another ill physician of more literary bent:

There had been no inquiry of plans or prospects, no solicitude for ambitions or desires, no interest in the spirit of the man whose engine was signaling for gas and oil.  That day I determined never to sentence a person on sight, for life or to death.

Contemporary medical training increasingly recognizes that all medicine is, to one degree or another, psychiatric medicine.  Clinical opinions, educators remind us, can be truthful but still contoured to the personality, especially the psychological needs, of the patient.  Sad to say, the best clinical educators are those who know colleagues, whatever their specialty, who either “do not appreciate that constituent of personality which psychologists call the affects . . . and the importance of the role which these affects or emotions play in conditioning [the patient’s] destiny, well or ill, or they refuse to be taught by observation and experience.”  This realization segues into the role of psychiatric training in medical education, certainly for physicians engaged in primary care, but really for all physicians.  Among other things, such training “would teach him [or her] that disease cannot be standardized, that the individual must be considered first, then the disease.”  Even among patients with typical illnesses, psychiatric training can help physicians understand idiosyncratic reactions to standard treatment protocols.  It aids comprehension of the individual “who happens to have a very common disease in his own very personal manner.”


These trends encapsulate the reflections and recommendations of progressive medical educators responsive to the public demand for more humane and humanizing physicians.  The trends are also responsive to the mounting burnout of physicians – especially primary care physicians – who, in the cost-conscious, productivity-driven, and regulatory climate of our time, find it harder than ever to practice patient-centered medicine.  But are these trends really so contemporary?  I confess to a deception.  The foregoing paraphrases, quotations, and recommendations are not from contemporary educators at all.  They are culled from the popular essays of a single physician, the pioneer neurologist Joseph Collins, all of which were published in Harper’s Monthly between 1924 and 1929.[1]

Collins is a fascinating figure.  An 1888 graduate of New York University Medical College, he attended medical school and began his practice burdened with serious, sometimes debilitating, pulmonary and abdominal symptoms that had him run the gauntlet of consultant diagnoses – pneumonia, pulmonary tuberculosis, “tuberculosis of the kidney,” chronic appendicitis, even brain tumor.  None of these authoritative pronouncements was on the mark, but taken together they left Collins highly critical of his own profession and pushed him in the direction of holistic, collaborative, patient-centered medicine.  After an extended period of general practice, he segued into the emerging specialty of neurology (then termed neuropsychiatry) and, with his colleagues Joseph Fraenkel and Pearce Bailey, founded the New York Neurological Institute in 1909.  Collins’s career as a neurologist never dislodged his commitment to generalist patient-centered care.  Indeed, the neurologist, as he understood the specialty in 1911, was the generalist best suited to treat chronic disease of any sort.[2]

Collins’s colorful, multifaceted career as a popular medical writer and literary critic is beyond the scope of this essay.[3]  I use him here to circle back to a cardinal point of previous writings.  “Patient-centered/relationship-centered care,” humanistic medicine, empathic caregiving, behavioral adjustments to the reality of patients’ rights – these additives to the medical school curriculum are as old as they are new.  What is new is the relatively recent effort to cultivate such sensibilities through curricular innovations.  Taken together, public health, preventive medicine, childhood vaccination, and modern antibiotic therapy have (mercifully) cut short the kind of experiential journey that for Collins secured the humanistic moorings of the biomedical imperative.  Now medical educators rely on communication skills training, empathy-promoting protocols, core-skills workshops, and seminars on “The Healer’s Art” to close the circle, rescue medical students from evidence-based and protocol-driven overkill, and bring them back in line with Collins’s hard-won precepts.

It is not quite enough to observe that these precepts apply equally to Collins’s time and our own.  They give expression to the care-giving impulse, to the ancient injunction to cure through caring (the Latin curare) that, in all its ebb and flow, whether as figure or ground, weaves through the fabric of medical history writ large.  Listen to Collins one final time as he expounds his philosophy of practice in 1926:

It would be a wise thing to devote a part of medical education to the mind of the physician himself, especially as it concerns his patients.  For the glories of medical history are the humanized physicians.  Science will always fall short; but compassion covereth all.[4]

[1] Joseph Collins, “The alienist in court,” Harper’s Monthly, 150:280-286, 1924; Joseph Collins, “A doctor looks at doctors,” Harper’s Monthly, 154:348-356, 1926; Joseph Collins, “Should doctors tell the truth?”, Harper’s Monthly, 155:320-326, 1927; Joseph Collins, “Group practice in medicine,” Harper’s Monthly, 158:165-173, 1928; Joseph Collins, “The patient’s dilemma,” Harper’s Monthly, 159:505-514, 1929.  I have also consulted two of Collins’s popular collections that make many of the same points:  Letters to a Neurologist, 2nd series (NY: Wood, 1910) and The Way with the Nerves: Letters to a Neurologist on Various Modern Nervous Ailments, Real and Fancied, with Replies Thereto Telling of their Nature and Treatment (NY: Putnam, 1911).

[2] Collins, The Way with the Nerves, p. 268.

[3] Collins’s review of James Joyce’s Ulysses, the first by an American, was published in The New York Times on May 28, 1922.  His volume The Doctor Looks at Literature: Psychological Studies of Life and Literature (NY: Doran, 1923) appeared the following year.

[4] Collins, “A doctor looks at doctors,” p. 356.  Collins’s injunction is exemplified in “The Healer’s Art,” a course developed by Dr. Rachel Naomi Remen over the past 22 years and currently taught annually in 71 American medical colleges as well as medical colleges in seven other countries.  See David Bornstein, “Medicine’s Search for Meaning,” posted for The New York Times/Opinionator on September 18, 2013.

Copyright © 2013 by Paul E. Stepansky.  All rights reserved.

Primary Care/Primarily Caring (IV)

If it is little known in medical circles that World War II “made” American psychiatry, it is even less well known that the war made psychiatry an integral part of general medicine in the postwar decades.  Under the leadership of the psychoanalyst (and, as of the war, Brigadier General) William Menninger, Director of Neuropsychiatry in the Office of the Surgeon General, psychoanalytic psychiatry guided the armed forces in tending to soldiers who succumbed to combat fatigue, aka war neuroses, and getting some 60% of them back to their units in record time.  But it did so less because of the relatively small number of trained psychiatrists available to the armed forces than through the efforts of the General Medical Officers (GMOs), the psychiatric foot soldiers of the war.  These GPs, with at most three months of psychiatric training under military auspices, made up 1,600 of the Army’s 2,400-member neuropsychiatry service (Am. J. Psychiatry, 103:580, 1946).

The GPs carried the psychiatric load, and by all accounts they did a remarkable job.  Of course, it was the psychoanalytic brass – William and Karl Menninger, Roy Grinker, John Appel, Henry Brosin, Franklin Ebaugh, and others – who wrote the papers and books celebrating psychiatry’s service to the nation at war.  But they all knew that the GPs were the real heroes.  John Milne Murray, the Army Air Force’s chief neuropsychiatrist, lauded them as the “junior psychiatrists” whose training had been entirely “on the job” and whose ranks were destined to swell under the VA program of postwar psychiatric care (Am. J. Psychiatry, 103:594, 1947).

The splendid work of the GMOs encouraged expectations that they would help shoulder the nation’s psychiatric burden after the war.  The psychiatrist-psychoanalyst Roy Grinker, coauthor with John Spiegel of the war’s enduring contribution to military psychiatry, Men Under Stress (1945), was under no illusion about the ability of trained psychiatrists to cope with the influx of returning GIs, a great many “angry, regressed, anxiety-ridden, dependent men” among them (Men Under Stress, p. 450).  “We shall never have enough psychiatrists to treat all the psychosomatic problems,” he remarked in 1946, when the American Psychiatric Association boasted all of 4,000 members.  And he continued:  “Until sufficient psychiatrists are produced and more internists and practitioners make time available for the treatment of psychosomatic syndromes, we must use heroic shortcuts in therapy which can be applied by all medical men with little special training” (Psychosom. Med., 9:100-101, 1947).

Grinker was seconded by none other than William Menninger, who remarked after the war that “the majority of minor psychiatry will be practiced by the general physician and the specialists in other fields” (Am. J. Psychiatry, 103:584, 1947).  As to the ability of stateside GPs to manage the “neurotic” veterans, Lauren Smith, Psychiatrist-in-Chief to the Institute of Pennsylvania Hospital prior to assuming his wartime duties, offered a vote of confidence two years earlier.  The majority of returning veterans would “present” with psychoneuroses rather than major psychiatric illness, and most of them “can be treated successfully by the physician in general practice if he is practical in being sympathetic and understanding, especially if his knowledge of psychiatric concepts is improved and formalized by even a minimum of reading in today’s psychiatric literature”  (JAMA, 129:192, 1945).

These appraisals, enlarged by the Freudian sensibility that saturated popular American culture in the postwar years, led to the psychiatrization of American general practice in the 1950s and 60s.  Just as the GMOs had been the foot soldiers in the campaign to manage combat stress, so GPs of the postwar years were expected to lead the charge against the ever-growing number of “functional illnesses” presented by their patients (JAMA, 152:1192, 1953; JAMA, 156:585, 1954).  Surely these patients were not all destined for the analyst’s couch.  And in truth they were usually better off in the hands of their GPs, a point underscored by Robert Needles in his address to the AMA’s Section on General Practice in June of 1954.  When it came to functional and nervous illnesses, Needles lectured, “The careful physician, using time, tact, and technical aids, and teaching the patient the signs and meanings of his symptoms, probably does the most satisfactory job” (JAMA, 156:586, 1954).

Many generalists of the time, my father, William Stepansky, among them, practiced psychiatry.  Indeed they viewed psychiatry, which in the late 40s, 50s, and 60s typically meant psychoanalytically informed psychotherapy, as intrinsic to their work.  My father counseled patients from the time he hung out his shingle in 1953.  Well-read in the psychiatric literature of his time and additionally interested in psychopharmacology, he supplemented medical school and internship with basic and advanced-level graduate courses on psychodynamics in medical practice.  Appointed staff research clinician at McNeil Laboratories in 1959, he conducted and published (Cur. Ther. Res. Clin. Exp., 2:144, 1960) clinical research on McNeil’s valmethamide, an early anti-anxiety agent.  Beginning in the 1960s, he attended case conferences at Norristown State Hospital (in exchange for which he gave his services, gratis, as a medical consultant).  And he participated in clinical drug trials as a member of the Psychopharmacology Research Unit of the University of Pennsylvania’s Department of Psychiatry, sharing authorship of several publications that came out of the unit.  In The Last Family Doctor, my tribute to him and his cohort of postwar GPs, I wrote:

“The constraints of my father’s practice make it impossible for him to provide more than supportive care, but it is expert support framed by deep psychodynamic understanding and no less valuable to his patients owing to the relative brevity of 30-minute ‘double’ sessions.  Saturday mornings and early afternoons, when his patients are not at work, are especially reserved for psychotherapy.  Often, as well, the last appointment on weekday evenings is given to a patient who needs to talk to him.  He counsels many married couples having difficulties.  Sometimes he sees the husband and wife individually; sometimes he sees them together in couples therapy.  He counsels the occasional alcoholic who comes to him.  He is there for whoever seeks his counsel, and a considerable amount of his counseling, I learn from [his nurse] Connie Fretz, is provided gratis.”

To be sure, this was family medicine of a different era.  Today primary care physicians (PCPs) lack the motivation, not to mention the time, to become frontline psychotherapists.  Nor would their credentialing organizations (or their accountants) look kindly on scheduling double-sessions for office psychotherapy and then billing the patient for a simple office visit.  The time constraints under which PCPs typically operate, the pressing need to maintain practice “flow” in a climate of regulation, third-party mediation, and bureaucratic excrescences of all sorts – these things make it more and more difficult for physicians to summon the patience to take in, much less to co-construct and/or psychotherapeutically reconfigure, their patients’ illness narratives.

But this is largely beside the point.  Contemporary primary care medicine, in lockstep with psychiatry, has veered away from psychodynamically informed history-taking and office psychotherapy altogether.  For PCPs and nonanalytic psychiatrists alike – and certainly there are exceptions – the postwar generation’s mandate to practice “minor psychiatry,” which included an array of supportive, psychoeducative, and psychodynamic interventions, has effectively shrunk to the simple act of prescribing psychotropic medication.

At most, PCPs may aspire to become, in the words of Howard Brody, “narrative physicians” able to empathize with their patients and embrace a “compassionate vulnerability” toward their suffering.  But even this has become a difficult feat.  Brody, a family physician and bioethicist, remarks that respectful attentiveness to the patient’s own story or “illness narrative” represents a sincere attempt “to develop over time into a certain sort of person – a healing sort of person – for whom the primary focus of attention is outward, toward the experience and suffering of the patient, and not inward, toward the physician’s own preconceived agenda” (Lit. & Med., 13:88, 1994; my emphasis).  The attempt is no less praiseworthy than the goal.  But where, pray tell, does the time come from?  The problem, or better, the problematic, has to do with the driven structure of contemporary primary care, which makes it harder and harder for physicians to enter into a world of open-ended storytelling that over time provides entry to the patient’s psychological and psychosocial worlds.

Whether or not most PCPs even want to know their patients in psychosocially (much less psychodynamically) salient ways is an open question.  Back in the early 90s, primary care educators recommended special training in “psychosocial skills” in an effort to remedy the disinclination of primary care residents to address the psychosocial aspects of medical care.  Survey research of the time showed that most residents not only devalued psychosocial care, but also doubted their competence to provide it (J. Gen. Int. Med., 7:26, 1992; Acad. Med., 69:48, 1994).

Perhaps things have improved a bit since then with the infusion of courses in the medical humanities into some medical school curricula and focal training in “patient and relationship-centered medicine” in certain residency programs.   But if narrative listening and relationship-centered practice are to be more than academic exercises, they must be undergirded by a clinical identity in which relational knowing is constitutive, not superadded in the manner of an elective.  Psychodynamic psychiatry was such a constituent in the general medicine that emerged after World War II.  If it has become largely irrelevant to contemporary primary care, what can take its place?  Are there other pathways through which PCPs, even within the structural constraints of contemporary practice, may enter into their patients’ stories?

Copyright © 2011 by Paul E. Stepansky.  All rights reserved.