
Will It Hurt?

“. . . the children’s population of this century has been submitted progressively as never before to the merciless routine of the ‘cold steel’ of the hypodermic needle.”  —Karl E. Kassowitz, “Psychodynamic Reactions of Children to the Use of Hypodermic Needles” (1958)

Of course, like so much medical technology, injection by hypodermic needle has a prehistory dating back to the ancient Romans, who used metal syringes with disk plungers for enemas and nasal injections.  Seventeenth- and eighteenth-century physicians extended the sites of entry to the vagina and rectum, using syringes of metal, pewter, ivory, and wood.  Christopher Wren, the Oxford astronomer and architect, introduced intravenous injection in 1657, when he inserted a quill into the patient’s exposed vein and pumped in water, opium, or a purgative (laxative).

But, like so much medical technology, things only get interesting in the nineteenth century.  In the first half of the century, the prehistory of needle injection includes the work of G. V. Lafargue, a French physician from the commune of St. Emilion.  He treated neuralgic (nerve) pain – his own included – by penetrating the skin with a vaccination lancet dipped in morphine and later by inserting solid morphine pellets under the skin through a large needle hole.  In 1844, the Irish physician Francis Rynd undertook injection by making a small incision in the skin and inserting a fine cannula (tube), letting gravity guide the medication to its intended site.[1]

The leap to a prototype of the modern syringe, in which a glass piston pushes medicine through a metal or glass barrel that ends in a hollow-pointed needle, occurred on two national fronts in 1853.  In Scotland, Alexander Wood, secretary of Edinburgh’s Royal College of Physicians, injected morphine solution directly into his patients in the hope of dulling their neuralgias.  There was a minor innovation and a major one.  Wood used sherry wine as his solvent, believing it would prove less irritating to the skin than alcohol and less likely to rust his instrument than water.  And then the breakthrough:  He administered the liquid morphine through a piston-equipped syringe that ended in a pointed needle.  Near the end of the needle, on one side, was an opening through which medicine could be released when an aperture on the outer tube was rotated into alignment with the opening.  It was designed and made by the London instrument maker Daniel Ferguson, whose “elegant little syringes,” as Wood described them, were intended to inject iron perchloride (a blood-clotting agent, or coagulant) into skin lesions and birthmarks in the hope of making them less unsightly.  It never occurred to Ferguson that his medicine-releasing, needle-pointed syringes could be used for subcutaneous injection as well.[2]

Across the Channel in the French city of Lyon, the veterinary surgeon Charles Pravaz employed a piston-driven syringe of his own making to inject iron perchloride into the blood vessels of sheep and horses.  Pravaz was not interested in unsightly birthmarks; he was searching for an effective treatment for aneurysms (enlarged arteries, usually due to weakening of the arterial walls) that he thought could be extended to humans.  Wood was the first in print – his “New Method of Treating Neuralgia by the Direct Application of Opiates to the Painful Points” appeared in the Edinburgh Medical & Surgical Journal in 1855[3] – and, shortly thereafter, he improved Ferguson’s design by devising a hollow needle that could simply be screwed onto the end of the syringe.  Unsurprisingly, then, he has received the lion’s share of credit for “inventing” the modern hypodermic syringe.  Pravaz, after all, was only interested in determining whether iron perchloride would clot blood; he never administered medication through his syringe to animals or people.

Wood and followers like the New York physician Benjamin Fordyce Barker, who brought Wood’s technique to Bellevue Hospital in 1856, were convinced that the injected fluid had a local action on inflamed peripheral nerves.  Wood allowed for a secondary effect through absorption into the bloodstream, but believed the local action accounted for the injection’s rapid relief of pain.  It fell to the London surgeon Charles Hunter to stress that the systemic effect of injectable narcotic was primary.  It was not necessary, he argued in 1858, to inject liquid morphine into the most painful spot; the medicine provided the same relief when injected far from the site of the lesion.  It was Hunter, seeking to underscore the originality of his approach to injectable morphine, especially its general therapeutic effect, who introduced the term “hypodermic” from the Greek compound meaning “under the skin.”[4]

It took time for the needle to become integral to doctors and doctoring.  In America, physicians greeted the hypodermic injection with skepticism and even dread, despite the avowals of patients that injectable morphine provided them with instantaneous, well-nigh miraculous relief from chronic pain.[5]  The complicated, time-consuming process of preparing injectable solutions prior to the manufacture of dissolvable tablets in the 1880s didn’t help matters.  Nor did the trial-and-error process of arriving at something like appropriate doses of the solutions.  But most importantly, until the early twentieth century, very few drugs were injectable.  Through the 1870s, the physician’s injectable arsenal consisted of highly poisonous (in pure form) plant alkaloids such as morphine, atropine (belladonna), strychnine, and aconitine, and, by decade’s end, the vasodilator heart medicine nitroglycerine.  The development of local and regional anesthesia in the mid-1880s relied on the hypodermic syringe for subcutaneous injections of cocaine solution, but as late as 1905, only 20 of the 1,039 drugs in the U.S. Pharmacopoeia were injectable.[6]  The availability of injectable insulin in the early 1920s heralded a new, everyday reliance on hypodermic injections, and over the course of the century, the needle, along with the stethoscope, came to stand in for the physician.  Now, of course, needles and doctors “seem to go together,” with the former signifying “the power to heal through hurting” even as it “condenses the notions of active practitioner and passive patient.”[7]

The child’s fear of needles, always a part of pediatric practice, has generated a literature of its own.  In the mid-twentieth century, in the heyday of Freudianism, children’s needle anxiety gave rise to psychodynamic musings.  In 1958, Karl Kassowitz of Milwaukee Children’s Hospital made the stunningly commonsensical observation that younger children were immature and hence more anxious about receiving injections than older children.  By the time kids were eight or nine, he found, most had outgrown their fear.  Among the less than 30% who hadn’t, Kassowitz gravely counseled, continuing resistance to the needle might represent “a clue to an underlying neurosis.”[8]  Ah, the good old Freudian days.

In the second half of the last century, anxiety about receiving injections was “medicalized” like most everything else, and in the more enveloping guise of BII (blood, injection, injury) phobia, found its way into the fourth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual in 1994.  Needle phobia thereupon became the beneficiary of all that accompanies medicalization – a specific etiology, physical symptoms, associated EKG and stress hormone changes, and strategies of management.  The latter are impressively varied and range across medical, educational, psychotherapeutic, behavioral, cognitive-behavioral, relaxation, and desensitizing approaches.[9]  Recent literature also highlights the vasovagal reflex associated with needle and blood phobia.  Patients confronted with the needle become so anxious that an initial increase in heart rate and blood pressure is followed by a marked drop, as a result of which they become sweaty, dizzy, pallid, nauseous (any or all of the above), and sometimes faint (vasovagal syncope).  Another interesting finding is that needle phobia (especially in its BII variant), along with its associated vasovagal reflex, probably has a genetic component, as there is a much higher concordance within families for BII phobia than for other kinds of phobia.  Researchers who study twins put the heritability of BII phobia at around 48%.[10]

Needle phobia is still prevalent among kids, to be sure, but it has long since matured into a fully grown-up condition.  Surveys find injection phobia in anywhere from 9% to 21% of the general population, and in even higher percentages of select populations, such as U.S. college communities.[11]  A study by the Dental Fears Research Clinic of the University of Washington in 1995 found that over a quarter of surveyed students and university employees were fearful of dental injections, with 5% admitting they avoided or canceled dental appointments out of fear.[12]  Perhaps some of these needlephobes bear the scars of childhood trauma.  Pediatricians now urge control of the pain associated with venipuncture and intravenous cannulation (tube insertion) in infants, toddlers, and young children, since there is evidence such procedures can have a lasting impact on pain sensitivity and tolerance of needle pricks.[13]

But people are not only afraid of needles; they also overvalue them and seek them out.  Needle phobia, whatever its hereditary contribution, is a creation of Western medicine.  The surveys cited above come from the U.S., Canada, and England.  Once we shift our gaze to developing countries of Asia and Africa, we behold a different needle-strewn landscape.  Studies attest not only to the high acceptance of the needle but also to its integration into popular understandings of disease.  Lay people in countries such as Indonesia, Tanzania, and Uganda typically want injections; indeed, they often insist on them because injected medicines, which enter the bloodstream directly and (so they believe) remain in the body longer, must be more effective than orally ingested pills or liquids.

The strength, rapid action, and body-wide circulation of injectable medicine – these things make injection the only cure for serious disease.[14]  So valued are needles and syringes in developing countries that most lay people, and even Registered Medical Practitioners in India and Nepal, consider it wasteful to discard disposable needles after only a single use.  And then there is the tendency of people in developing countries to rely on lay injectors (the “needle curers” of Uganda; the “injection doctors” of Thailand; the informal providers of India and Turkey) for their shots.  This has led to the indiscriminate use of penicillin and other chemotherapeutic agents, often injected without attention to sterile procedure.  All of which contributes to the spread of infectious disease and presents a major headache for the World Health Organization.

The pain of the injection?  Bring it on.  In developing countries, the burning sensation that accompanies many injections signifies curative power.  In some cultures, people also welcome the pain as confirmation that real treatment has been given.[15]  In pain there is healing power.  It is the potent sting of modern science brought to bear on serious, often debilitating disease.  All of which suggests the contrasting worldviews and emotional tonalities collapsed into the fearful and hopeful question that frames this essay:  “Will it hurt?”


[1] On the prehistory of hypodermic injection, see D. L. Macht, “The history of intravenous and subcutaneous administration of drugs,” JAMA, 55:856-60, 1916; G. A. Mogey, “Centenary of hypodermic injection,” BMJ, 2:1180-85, 1953; N. Howard-Jones, “A critical study of the origins and early development of hypodermic medication,” J. Hist. Med., 2:201-49, 1947; and N. Howard-Jones, “The origins of hypodermic medication,” Scien. Amer., 224:96-102, 1971.

[2] J. B. Blake, “Mr. Ferguson’s hypodermic syringe,” J. Hist. Med., 15: 337-41, 1960.

[3] A. Wood, “New method of treating neuralgia by the direct application of opiates to the painful points,” Edinb. Med. Surg. J., 82:265-81, 1855.

[4] On Hunter’s contribution and his subsequent vitriolic exchanges with Wood over priority, see Howard-Jones, “Critical study of development of hypodermic medication,” op. cit.  Patricia Rosales provides a contextually grounded discussion of the dispute and the committee investigation of Edinburgh’s Royal Medical and Chirurgical Society to which it gave rise.  See P. A. Rosales, A History of the Hypodermic Syringe, 1850s-1920s.  Unpublished doctoral dissertation, Department of the History of Science, Harvard University, 1997, pp. 21-30.

[5] See Rosales, History of Hypodermic Syringe, op. cit., chap. 3, on the early reception of hypodermic injections in America.

[6] G. Lawrence, “The hypodermic syringe,” Lancet, 359:1074, 2002; J. Calatayud & A. González, “History of the development and evolution of local anesthesia since the coca leaf,” Anesthesiology, 98:1503-08, 2003, at p. 1506; R. E. Kravetz, “Hypodermic syringe,” Am. J. Gastroenterol., 100:2614-15, 2005.

[7] A. Kotwal, “Innovation, diffusion and safety of a medical technology: a review of the literature on injection practices,” Soc. Sci. Med., 60:1133-47, 2005, at p. 1133.

[8] Kassowitz, “Psychodynamic reactions of children to hypodermic needles,” op. cit., quoted at p. 257.

[9] Summaries of the various treatment approaches to needle phobia are given in J. G. Hamilton, “Needle phobia: a neglected diagnosis,” J. Fam. Prac., 41:169-75, 1995, and H. Willemsen, et al., “Needle phobia in children: a discussion of aetiology and treatment options,” Clin. Child Psychol. Psychiatry, 7:609-19, 2002.

[10] Hamilton, “Needle phobia,” op. cit.; S. Torgersen, “The nature and origin of common phobic fears,” Brit. J. Psychiatry, 134:343-51, 1979; L-G. Ost, et al., “Applied tension, exposure in vivo, and tension-only in the treatment of blood phobia,” Behav. Res. Ther., 29:561-74, 1991; L-G. Ost, “Blood and injection phobia: background and cognitive, physiological, and behavioral variables,” J. Abnorm. Psychol., 101:68-74, 1992.

[11] References to these surveys are provided by Hamilton, “Needle phobia,” op. cit.

[12] On the University of Washington survey, see P. Milgrom, et al., “Four dimensions of fear of dental injections,” J. Am. Dental Assn., 128:756-66, 1997, and T. Kaakko, et al., “Dental fear among university students: implications for pharmacological research,” Anesth. Prog., 45:62-67, 1998.  Lawrence Proulx reported the results of the survey in The Washington Post under the heading “Who’s afraid of the big bad needle?” July 1, 1997, p. 5.

[13] R. M. Kennedy, et al., “Clinical implications of unmanaged needle-insertion pain and distress in children,” Pediatrics, 122:S130-S133, 2008.

[14] See Kotwal, “Innovation, diffusion and safety of a medical technology,” op. cit., p. 1136 for references.

[15] S. R. Whyte & S. van der Geest, “Injections: issues and methods for anthropological research,” in N. L. Etkin & M. L. Tan, eds., Medicine, Meanings and Contexts (Quezon City, Philippines: Health Action Information Network, 1994), pp. 137-8.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.


Naming the Pain

I always begin my “Medicine and Society” seminar by asking the students to identify as many of the following terms as they can and then to tell me what they have in common: nostalgie; railway spine; soldier’s heart (aka effort syndrome or Da Costa’s syndrome); puerperal insanity; neurasthenia; hyperkinetic syndrome; irritable bowel syndrome; ADHD; chronic fatigue syndrome; and fibromyalgia.

The answer, of course, is that they are not diseases at all but broadly descriptive syndromes based on self-reports. In each and every case, physicians listen to what patients (or, in the case of children, parents or teachers) tell them, and then give a diseaselike name to a cluster of symptoms for which there is no apparent biomedical explanation.

The fact that such conditions have existed throughout history and the appreciable symptomatic overlap among them surprise some students but not others.  I like to point out especially the virtual mapping of neurasthenia, a syndrome “identified” by the pioneer American neurologist George Beard in 1869, onto contemporary notions of chronic fatigue syndrome and fibromyalgia.  For Beard, the symptoms of neurasthenia included mental and physical fatigue, insomnia, headache, general muscular achiness, irritability, and inability to concentrate.  What have we here if not the symptoms of chronic fatigue syndrome and fibromyalgia, with the only difference, really, residing in the biomedically elusive cause of the symptoms?  Whereas neurasthenia was attributed by Beard and other nineteenth-century neurologists to nervous weakness, i.e., “debility of the nerves,” the contemporary variants are ascribed to a heretofore undetected low-grade virus.

The same mapping chain applies to stomach and digestive discomfort.  Long before the arrival of “irritable bowel syndrome,” for which there has never been a basis for differential diagnosis, there were terms like enteralgia, adult colic, and that wonderfully versatile eighteenth- and nineteenth-century medical-cum-literary condition, dyspepsia.  Before children were “diagnosed” with ADHD, they were given the diagnosis ADD (attention deficit disorder) or MBD (minimal brain dysfunction), and before that, beginning in the early 1960s, the same kids were diagnosed with hyperactivity (hyperkinetic syndrome).  Military medicine has its own chronology of syndromal particulars.  Major building blocks that led to our understanding of combat-related stress and its sequelae include PTSD, during America’s war in Vietnam; combat fatigue and war neurosis during WWII; shell shock and “soldier’s heart” during WWI; and nostalgie during the Napoleonic Wars and American Civil War.

The functionality of neurasthenia and its modern descendants is that they are symptomatically all-inclusive but infinitely plastic in individual expression.  It is still a blast to read Beard’s dizzying catalog of the symptoms of neurasthenia in the preface to American Nervousness (1881).  Here is a small sampling:

Insomnia; flushing; drowsiness; bad dreams; cerebral irritation; dilated pupils; pain, pressure, and heaviness in the head; changes in the expression of the eye; asthenopia [eye strain]; noises in the ears; atonic voice; mental irritability; tenderness of the teeth and gums; abnormal dryness of the skin, joints, and mucous membranes; sweating hands and feet with redness; cold hands and feet; pain in the feet; local spasms of muscles; difficulty swallowing; convulsive movements; cramps; a feeling of profound exhaustion; fear of lightning; fear of responsibility; fear of open places or closed places; fear of society; fear of being alone; fear of fears; fear of contamination; fear of everything.[1]

This same model, if less extravagant in reach, pertains to chronic fatigue syndrome and fibromyalgia.  Any number of symptoms point to these conditions, but no single clustering of symptoms is essential to the diagnosis or able to rule it out.  In the world of global syndromes, the presence and absence of specific symptoms serve equally well as diagnostic markers.[2]

As many historians have pointed out, the development and marketing of new drugs play a significant role in the labeling (and hence medicalizing) of these syndromes.  From time immemorial, young children, especially boys, have had a hard time sitting still in school and focusing on the task before them.  But it was only with the release of Ritalin (methylphenidate) in the early 1960s that these time-honored developmental lags (or were they simply developmental realities?) were gathered into the diagnosis “hyperkinetic syndrome.”

Psychiatry has been especially willing to accommodate the drug-related charge to syndromize new variants of existing syndromes.  “Panic disorder” became a syndrome only after Upjohn released a new benzodiazepine, alprazolam, for which it sought a market within the broad universe of anxiety sufferers.  Conveniently, the release of the drug in 1980 coincided with the release of the third edition of the American Psychiatric Association’s Diagnostic and Statistical Manual (DSM), which obligingly found a place for “panic disorder” (and hence alprazolam) in its revised nosology.  DSM-III was no less kind to Pfizer; it helped the manufacturer find a market for its newly released MAOI antidepressant, phenelzine, by adding “social phobia” to the nomenclature.[3]

Critics write about the creeping medicalization of virtually every kind of discomfort, dis-ease, despondency, dysfunction, and dysphoria known to humankind.[4]  As a society, we are well beyond medicalizing the aches and pains that accompany everyday life.  We are at the point of medicalizing life itself, especially the milestones that punctuate the human life cycle.  This is especially the viewpoint of French social scientists influenced by Foucault, who seem to think that the “medicalization” of conception (via, for example, embryo freezing) or the “pharmacologization” of menopause via hormone therapy takes us to the brink of a new de-naturalizing control of biological time.[5]

I demur.  What we see in medicalization is less the “pathologization of existence”[6] than a variably successful effort to adapt to pain and malaise so that life can be lived.  The adaptation resides in the bi-level organizing activity that fuels and sustains the labeling process.  It is comforting to group together disparate symptom clusters into reified entities such as neurasthenia or chronic fatigue syndrome or fibromyalgia.  It is also comforting, even inspiriting, to associate with fellow sufferers, even if their fibromyalgia manifests itself quite differently than yours.  All can organize into support groups and internet self-help communities and fight for recognition from organized medicine and society at large.  These people do something about their pain.  And they often feel better, even if they still hurt all over.  If nothing else, they have accessed a collective illness identity that mitigates self-doubts and alienation.[7]

But there may be other ways of adapting to chronic pain amplified, all too often, by chronic misery that veers into psychiatric co-morbidity.  The problem with adapting to nonspecific suffering by labeling, medicalization, support groups, self-help literature, and the like is the pull to go beyond living with pain to living through the illness identity constructed around the pain.  For some there are other possibilities.  There is psychotherapy to address the misery, which may or may not prove helpful.  There is the stoic resolve, more typical of the nineteenth century, to live with pain without collapsing one’s identity into the pain and waiting for the rest of the world to acknowledge it.[8]  There is the pursuit of symptomatic relief unburdened by illness identity and the existential angst that accompanies it.  And there is a fourth way that leads back to my father’s medicine.

In the post-WWII era, there were no support groups or self-help literature or internet communities to validate diffuse syndromal suffering.  But there were devoted family physicians, many of whom, like William Stepansky, were psychiatrically oriented and had the benefit of postgraduate psychiatric training.  A caring physician can validate and “hold” a patient’s pain without assigning the pain a label and trusting the label to mobilize the patient’s capacity to self-soothe.  He or she can say medically knowledgeable things about the pain (and its palliation) without “medicalizing” it in the biomedically reductive, remedy-driven sense of our own time.

Primary care physicians who listen to their patients long enough to know them and value them become partners in suffering.  They suffer with their patients not in the sense of feeling their pain but in the deeper sense of validating their suffering, both physical and mental, by situating it within a realm of medical understanding that transcends discrete medical interventions.

Am I suggesting that fibromyalgia sufferers would be better off if they had primary care physicians who, like my father, had the time and inclination to listen to them in the manner of attuned psychotherapists?  You bet I am.  The caring associated with my father’s medicine, as I have written, relied on the use of what psychoanalysts term “positive transference,” but absent the analytic goal of “resolving” the transference (i.e., of analyzing it away) over time.[9]  Treating patients with chronic pain – whether or not syndromal – means allowing them continuing use of the physician in those ways in which they need to use him.

A parental or idealizing transference, once established, does two things.  It intensifies whatever strategies of pain management the physician chooses to pursue, and it provides the physician with relational leverage for exploring the situational and psychological factors that amplify the pain.  Of course, the general physician’s willingness to be used thusly is a tall order, especially in this day and age.  It signifies a commitment to holistic care-giving over time, so that issues of patienthood morph into issues of suffering personhood.  My father’s psychological medicine – of which contemporary notions of patient- and relationship-centered care are pale facsimiles – could not eliminate the pain of his syndromal sufferers.  But it provided them with a kind of support (and, yes, relief) that few contemporary sufferers will ever know.


[1] G. M. Beard, American Nervousness, Its Causes and Consequences (NY: Putnam, 1881), pp. viii-ix.

[2] K. Barker, “Self-help literature and the making of an illness identity: the case of fibromyalgia syndrome (FMS),” Social Problems, 49:279-300, 2002.

[3] D. Healy, The Anti-Depressant Era (Cambridge: Harvard University Press, 1997), pp. 187-189.

[4] E.g., Peter Conrad, The Medicalization of Society: On the Transformation of Human Conditions into Treatable Disorders (Baltimore: Johns Hopkins, 2007).

[5] A. J. Suissa, “Addiction to cosmetic surgery: representations and medicalization of the body,” Int. J. Ment. Health Addiction, 6:619-630, 2008, at p. 620.

[6] R. Gori & M. J. Volgo, La Santé Totalitaire: Essai sur la Médicalisation de l’Existence (Paris: Denoël, 2005).

[7] Barker, “Self-help literature and the making of an illness identity,” op. cit.

[8] There are some powerful examples of such adaptation to pain in S. Weir Mitchell’s Doctor and Patient, 2nd edition (Phila: Lippincott, 1888), pp. 83-100.

[9] P. E. Stepansky, The Last Family Doctor: Remembering My Father’s Medicine (Keynote, 2011), p. 86.  More than a half century ago, the American psychoanalyst Leo Stone wrote of “the unique transference valence of the physician.”  Patients formed transference bonds with their analysts partly because the latter, as physicians, were beneficiaries of “the original structure of the patient-doctor relationship.”  Small wonder that Stone deemed the physician’s role “the underlying definite and persistent identity which is optimum for the analyst.”  L. Stone, The Psychoanalytic Situation: An Examination of Its Development and Essential Nature (NY: International Universities Press, 1961), pp. 17, 15, 41.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.