Category Archives: Medical Technology, Old & New

Telemedicine Rising

In a “Viewpoint” published in JAMA a month ago,[1] Michael Nochomovitz and Rahul Sharma suggest that the time has come to create a new medical specialty: virtual medicine.  Extrapolating from the manner in which medical specialties have traditionally arisen (viz., “by advances in technology and expansion of knowledge in care delivery”), they submit that telemedicine has advanced to the point of providing the basis for a new kind of specialty care.  Telemedicine, as they define it, comprises various web-based telecommunications modalities, among them social media, teleconferencing, and video face-to-face communications with patients.  They place before us medical virtualists, physicians who “will spend the majority or all of their time caring for patients using a virtual medium.”

Unlike today’s physicians, who make use of this or that “virtual medium” haphazardly and without formal training, the virtualist will achieve a set of “core competencies” through formal training.  Their curriculum for certification, according to the authors, should include “knowledge of legal and clinical limitations of virtual care, competencies in virtual examination using the patient or families, ‘virtual visit presence training,’ inclusion of on-site clinical measurements, as well as continuing education.” Among the techniques in their arsenal will be those aimed at achieving “good webside manner” (authors’ italics).

Now, far be it from me to discourage the use of remote technologies to render health care delivery more efficient and especially to bring primary care to underserved communities.  The value of “remote surgery,” which ranges from telementoring and remote guidance to actual robotic operations, is well documented.  But is a new specialty of medical virtualism really in our best interest?  Certainly, telemedicine will play an increasing role in medicine; the question is whether this “role” should become the basis of a bounded specialty.  This would make the medical virtualist the first medical practitioner whose practice excluded (or drastically marginalized) face-to-face contact with patients, making it radically different from nonpractice specialties such as pathology or diagnostic radiology.

Medical virtualism is especially problematic in the cognitive specialties.  We would have a subspecies of primary care doctors who specialized in care that was patient-uncentered, i.e., premised on the self-sufficiency of piece-person care as opposed to whole-person care.  The proposal takes the current fragmentation of care among subspecialists and refashions it into a virtue.  That is, we will have virtuous physicians who only practice virtual medicine and feel good about doing so.  Such care differs from subspecialty care in a key respect:  we typically see our subspecialists in the flesh.  We can ask them questions, demand explanations, and criticize them for not giving us the time and attention we seek.  In the absence of adequate time and attention, we can seek out a different subspecialist who is more patient-centered and welcoming.  With the medical virtualist, on the other hand, dehumanization is integral to the specialty itself.  The patient has no recourse; as an embodied person, he is outside the virtualist’s purview altogether.

It is striking that the issue of patient trust is nowhere mentioned in the article,   even though empirical research suggests that trust is the “basic driver” of patient satisfaction.  It has been linked to less treatment anxiety, greater pain tolerance, and greater compliance.[2]  But the authors subordinate all such issues to their focus on efficiency and ease of use.   As such, their case rests on the assumption that informed the patient rights movement of the 1970s and ’80s:  that patients are simply consumers in search of a commodity.  Now, a half century after passage of the Patient’s Bill of Rights, the commodity is increasingly mediated by technology.[3]  And “the success of technology-based services,” according to the authors, “is not determined by hardware and software alone but by ease of use, perceived value, and workflow optimization.”  The need to humanize the delivery of technology, to convey to the patient some sense of what I have termed “caring technology,”[4] falls outside a conversation framed in terms of consumerist values.

But once we factor trust into the equation, we open a can of worms.  For patient trust implicates the doctor’s touch, which includes both the laying on of hands and the implementation of office-based procedures.  It also implicates human qualities such as caring, empathy, and the willingness to tolerate ambiguity.  Finally, it puts us in contact with the  Hippocratic Oath, in which ethical obligations revolve entirely around physicians treating patients who are full-fledged human beings, fellow sufferers.  This is why Jennifer Edgoose and Julian Edgoose, writing in a recent issue of Annals of Family Medicine about “Finding Hope in the Face-to-Face,” begin with this sentence:  “The daily work of clinicians is conducted in face-to-face encounters, whether in exam rooms, homes, or alongside hospital beds, but little attention has been paid to the responsibilities and ethical implications generated by this dimension of our relational work.”[5]  Among these implications is the physician’s obligation not merely to be an instrument of diagnosis and treatment, but also to contain the patient’s “wounded humanity” in the sense of Pellegrino.[6]

I wrote In the Hands of Doctors precisely to explore, both historically and in the present, this dimension of physicians’ “relational work,” including the better and worse ways in which it can appropriate technologies that are not only sought after by patient-consumers, but viewed as remote and intimidating by patient-persons.  Physicians who know their patients as wounded and vulnerable can humanize technology by pulling it into a trusting doctor-patient relationship.

These thoughts are a counterpoise to the authors’ brief for “the medical virtualist.”  Their proposal is provocative and troubling.  It inverts figure and ground, so that telemedicine, heretofore an adjunct to face-to-face care, becomes the ground of a specialty in which face-to-face care is incidental at best.  In the domain of primary care, it raises the philosophical question of who or what primary care virtualists are being trained to care for.  Can one be a primary care physician of any type and care for some “thing” other than whole persons?  The status of virtualism in surgical specialties is no doubt different.

I invite others to reply to this posting with their thoughts on a topic that will only grow in importance in the years ahead.

_______________________

[1] Michael Nochomovitz & Rahul Sharma, “Is It Time for a New Medical Specialty?  The Medical Virtualist,” JAMA, 319:437-438, 2018.

[2] Paul E. Stepansky, In the Hands of Doctors: Touch and Trust in Medical Care (Montclair: Keynote, 2017), 21 and references cited therein.

[3] Stepansky, In the Hands of Doctors, 133-135.

[4] Stepansky, In the Hands of Doctors, 82-98.

[5] Jennifer Y. C. Edgoose & Julian M. Edgoose, “Finding Hope in the Face-to-Face,” Ann. Fam. Med., 15:272-274, 2017.

[6] E. D. Pellegrino, Humanism and the Physician (Knoxville:  University of Tennessee Press, 1979), 124, 146, 184, and passim.

 

It Was All About the Pain

“. . . and although the patient had long been a sufferer from dyspnea, chronic bronchitis, and embarrassed heart, we believed that the almost miraculous resurrection which took place would be permanent.  He died, however, on the second day.”   — Cameron MacDowall, “Intra-Peritoneal Injections in Cholera” (1883)[1]

Among the early British and American proponents of subcutaneous hypodermic injection, especially of liquefied morphine, the seeming miracle of instantaneous pain relief sufficed to bring physician and patient into attitudinal alignment.  These physicians were still a century removed from the psychoanalytic sensibility that would encourage their successors to explore the personal side of hypodermic injection and to develop strategies for overcoming patients’ anxieties about needle puncture, their “needle phobia.”

There is no need to read between the lines of nineteenth-century clinical reports to discern the convergence of physician delight and patient amazement at the immediate relief provided by hypodermic injection.  The lines themselves tell the story, and the story is all about the pain.  Patients who received hypodermic injections in the aftermath of Alexander Wood’s successful use of Daniel Ferguson’s  “elegant little syringe” were often in extremis.  Here is a woman of 40, who presented with a case of acute pleurisy (inflammation of the membrane around the lungs) in 1867:

The pain was most intense; great dyspnea [difficulty breathing] existed; sharp, lancinating pains at each rapid inspiration completely prostrated the patient, whose sufferings had been continuous for twelve hours.  About one-sixth of a grain of the acetate of morphia was used hypodermically, and with prompt relief, a few minutes only elapsing after its injection before its beneficial results followed.  The ordinary treatment being continued, a recovery was effected in a short time.[2]

Consider this “delicate elderly spinster” of 1879, who presented to her physician thusly:

I found her nearly unconscious, cramped all over body and legs, vomiting violently every minute or two, purging every few minutes, the purging being involuntary and under her.  She was showing the whites of the eyes, and the countenance was changed.  She was certainly all but gone.  Gave at once two-fifths of a grain of sulphate of morphia hypodermically.  She did not feel the prick of the needle in the least.[3]

And here is a surgeon from Wales looking in on a 48-year-old gardener in severe abdominal pain at the Crickhowell Dispensary on August 1, 1882:

On my visiting him at 11:30 on the morning of the above date, I found him in great agony, in which condition his wife informed me he had been during the greater part of the previous night.  He implored me to do something for relief, saying he could endure the suffering no longer; and as I happened to have my hypodermic syringe in my pocket, I introduced into his arm four minims of a solution of acetate of morphia.  I then left him.[4]

A bit better off, one supposes – if only a bit – were patients who suffered  severe chronic pain, whether arthritic, gastrointestinal,  circulatory, or cancerous in nature.  They too were beneficiaries of the needle.  We encounter a patient with “the most intense pain in the knee-joint” owing to a six-year-long attack of gout.  Injection of a third of a grain of acetate of morphia was followed by “the most delightful results,” with “the patient expressing himself in glowing terms as to the efficacy and promptness of this new remedy.”  Instantaneous relief, compliments of the needle, enabled him to turn the corner; he “rallied rapidly, having none of the depression and debilitating effects, the resultant of long-continued pain, to recover from, as in former times.”[5]

So it was with patients with any number of ailments, however rare or nebulous in nature.  A 31-year-old woman was admitted to Massachusetts General Hospital in 1883 with what her physician diagnosed as multiple sarcomas (malignant skin tumors) covering her upper arms, breasts, and abdomen; she was given subcutaneous injections of Fowler’s Solution, an eighteenth-century tonic that was one percent arsenic.  Discharged from the hospital two weeks later, she self-administered daily injections of Fowler’s for another five months, by which time the lesions had cleared completely; a year later she remained “perfectly well to all appearance.”  In the 1890s, the decade when subcutaneous injections of various glandular extracts gripped the clinical imagination, it is hardly surprising to read that injection of liquefied gray matter of a sheep’s brain did remarkable things for patients suffering from nervous exhaustion (neurasthenia).  Indeed, its tonic effect comprised “increase of weight, appetite and weight, restoration of spirits and bien-être, disappearance of pain, sexual impotence and insomnia.”  At the other end of the psychophysical spectrum, patients who became manic, even violently delirious, during their bouts with acute illnesses such as pneumonia or rheumatic fever, “recovered in the ordinary way” after one or more injections of morphia, sometimes in conjunction with inhaled chloroform.[6]

Right through century’s end, the pain of disease was compounded by the pain of pre-injection treatment methods.  What the Boston surgeon Robert White, one of Wood’s first American followers, termed the “revolution in the healing art” signaled by the needle addressed both poles of suffering.  Morphia’s “wonderful effects” on all kinds of pain — neuralgic pain, joint pain, digestive pain (dyspepsia), the pain of tumors and blockages — were heightened by the relative painlessness of injection.  Indeed, the revolutionary import of hypodermic injection, according to White, meant that “The painful and decidedly cruel endermic mode of applying medicines [i.e., absorption through the skin] may be entirely superseded, and the pain of a blistered surface completely avoided.”[7]  When it came to hemorrhoids, carbuncles, and small tumors, not to mention “foul and ill-conditioned ulcers,” hypodermic injections of carbolic acid provided “the only absolute and painless cure [original emphasis] of these exceedingly painful affections.”[8]

And what of the pain of the injection itself?  When it rates mention, it is only to put it in perspective, to underscore that “some pain at the moment of injection” gives way to “great relief from the pain of the disease” – a claim which, in this instance, pertained to alcohol solution injected in and around malignant tumors.[9]  Very rarely indeed does one find references to the pain of injection as a treatment-related consideration.[10]

Recognition of the addictive potential of repeated morphine injections barely dimmed the enthusiasm of many of the needle’s early proponents. Then, as now, physicians devised rationalizations for preferred treatment methods despite well-documented grounds for concern. They carved out diagnostic niches that, so they claimed, were exempt from mounting evidence of addiction.  A Melbourne surgeon who gave morphine injections to hospitalized parturients suffering from “puerperal eclampsia” (convulsions and coma following childbirth) found his patients able “to resist the dangerous effects of the drug; it seems to have no bad consequences in cases, in which, under ordinary circumstances, morphia would be strongly contra-indicated.” A physician from Virginia, who had treated puerperal convulsions with injectable morphine for 16 years, seconded this view.  “One would be surprised to see the effect of morphine in these cases,” he reported in 1887.  It was “as if bringing the dead to life.  It does not stupefy the patients, but renders them brighter.”[11]  A British surgeon stationed in Burma “cured” a patient of tetanus with repeated injections of atropine (belladonna), and held that his case “proved” that tetanus “induced” a special tolerance to an alkaloid known to have serious, even life-threatening, side effects.[12]  Physicians and patients alike stood in awe before a technology that not only heightened the effectiveness of the pharmacopeia of the time but also brought it to bear on an extended range of conditions.

Even failure to relieve suffering or postpone death spoke to the importance of hypodermic injection.  For even then, injections played a critical role in differential diagnosis: they enabled clinicians to differentiate, for example, “choleraic diarrhea,” which morphine injections greatly helped, from “malignant” (or Asiatic) cholera and common dysentery, which they helped not at all.[13]

To acknowledge that not all injections even temporarily relieved suffering or that not all injections were relatively painless was, in the context of nineteenth-century therapeutics, little more than a footnote.  Of course this was the case.  But it didn’t seem to matter.  There was an understandable wishfulness on the part of nineteenth-century physicians and patients about the therapeutic benefits of hypodermic injection per se, and this wishfulness arose from the fact that, prior to invention of the hypodermic syringe and soluble forms of morphine and other alkaloids, “almost miraculous resurrection” from intractable pain was not a possibility, or at least not a possibility arising from a physician’s quick procedural intervention.

For those physicians who, beginning in the late 1850s, began injecting morphine and other opioids to relieve severe pain, there was something magical about the whole process – and, yes, it calls to mind the quasi-magical status of injection and injectable medicine in some developing countries today.  The magic proceeded from the dramatic pain relief afforded by injection, certainly.  But it also arose from the realization, per Charles Hunter, that an injected opioid somehow found its way to the site of pain regardless of where it was injected.  It was pretty amazing.

The magic, paradoxically, derived from the new scientific understanding of medicinal therapeutic action in the final three decades of the nineteenth century.  The development of hypodermic injection is a small part of the triumph of scientific medicine, of a medicine of specific remedies for specific illnesses, of remedies increasingly developed in laboratories but bringing the fruits of laboratory science to the bedside.  We see the search for specific remedies in early trial-and-error efforts to find specific injectables and specific combinations of injectables for specific conditions – carbolic acid for hemorrhoids and carbuncles; morphine and atropia (belladonna) for puerperal convulsions; whisky and water for epidemic cholera; alcohol for tumors; ether for sciatica; liquefied sheep’s brain for nervous exhaustion; and on and on.

This approach signifies a primitive empiricism, but it is a proto-scientific empiricism nonetheless.  The very search for injectables specific to one or another condition is worlds removed from the Galenic medicine of the 1830s and ’40s, according to which all diseases were really variations of a single disease that had to do with the degree of tension or excitability in the blood vessels.

Hypodermic injection caught on, despite the paucity of injectable medicines into the early twentieth century and the fantastical claims (to our ears) that abound in nineteenth-century medical journals, because it was aligned with scientific medicine in ascendance.  Yes, the penetration of the needle was merely subcutaneous, but skin puncture was a portal to the blood stream and to organs deep inside the body.  In this manner, hypodermic injection partook of the exalted status of “heroic surgery” in the final quarter of the nineteenth century.[14]  The penetration of the needle, shallow though it was, stood in for a bold new kind of surgery, a surgery able to penetrate to the very anatomical substrate of human suffering.  Beginning in the late 1880s, certain forms of major surgery became recognizably modern, and the lowly needle was along for the ride.  The magic was all about the pain, but it was also all about the science.


[1] C. MacDowall, “Intra-peritoneal injections in cholera,” Lancet, 122:658-59, 1883, quoted at 658.

[2] T. L. Leavitt, “The hypodermic injection of morphia in gout and pleurisy,” Amer. J. Med. Sci., 55:109, 1868.

[3] W. Hardman, “Treatment of choleraic diarrhea by the hypodermic injection of morphia,” Lancet, 116:538-39, 1880, quoted at 539.

[4] P. E. Hill, “Morphia poisoning by hypodermic injection; recovery,” Lancet, 120:527-28, 1882, quoted at 527.

[5] Leavitt, “Hypodermic injection of morphia in gout and pleurisy,” op. cit.

[6] F. C. Shattuck, “Multiple sarcoma of the skin: treatment by hypodermic injections of Fowler’s solution; recovery,” Boston Med. Surg. J., 112:618-19, 1885; N.A., “Treatment of neurasthenia by transfusion (hypodermic injection) of nervous substance,” Boston Med. Surg. J., 126:273-74, 1892, quoted at 274; T. Churton, “Cases of acute maniacal delirium treated by inhalation of chloroform and hypodermic injection of morphia,” Lancet, 141:861-62, 1893.

[7] R. White, “Hypodermic injection of medicine, with a case,” Boston Med. Surg. J., 61:289-292, 1859, quoted at 290.

[8] N. B. Kennedy, “Carbolic acid injections in hemorrhoids and carbuncles,” JAMA, 6:529-30, 1886.

[9] E. Andrews, “The latest methods of treating carcinoma by hypodermic injection,” JAMA, 26:1159-60, 1897, quoted at 1159.

[10] For one such example, see NA, “The hypodermic injection of mercurials in the treatment of syphilis,” Boston Med. Surg. J., 131:246, 1894.

[11] S. Maberly-Smith, “On the treatment of puerperal convulsions by hypodermic injection of morphia,” Lancet, 118:86-87, 1881;  J. D. Eggleston, quoted in “The treatment of puerperal convulsions,” JAMA, 8:295-96, 1887, at 295.

[12] D. H. Cullumore, “Case of traumatic tetanus, treated with the hypodermic injection of atropia; amputation of great toe; recovery,” Lancet, 114:42-43, 1879.

[13] Hardman, “Treatment of choleraic diarrhea,” op. cit.; C. MacDowall, “Hypodermic injections of morphia in cholera,” Lancet, 116:636, 1880.

[14] On the “heroic surgery” of the final decades of the nineteenth century and the exalted status of late-nineteenth-century surgeons, see P. E. Stepansky, Freud, Surgery, and the Surgeons (Hillsdale, NJ: Analytic Press, 1999), pp. 23-34 and passim.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Will It Hurt?

“. . . the children’s population of this century has been submitted progressively as never before to the merciless routine of the ‘cold steel’ of the hypodermic needle.”  —Karl E. Kassowitz, “Psychodynamic Reactions of Children to the Use of Hypodermic Needles” (1958)

Of course, like so much medical technology, injection by hypodermic needle  has a prehistory dating back to the ancient Romans, who used metal syringes with disk plungers for enemas and nasal injections.  Seventeenth- and eighteenth-century physicians extended the sites of entry to the vagina and rectum, using syringes of metal, pewter, ivory, and wood.  Christopher Wren, the Oxford astronomer and architect, introduced intravenous injection in 1657, when he inserted a quill into the patient’s exposed vein and pumped in water, opium, or a purgative (laxative).

But, like so much medical technology, things only get interesting in the nineteenth century.  In the first half of the century, the prehistory of needle injection includes the  work of G. V. Lafargue, a French physician from the commune of St. Emilion.  He treated neuralgic (nerve) pain – his own included – by penetrating the skin with a vaccination lancet dipped in morphine and later by inserting solid morphine pellets under the skin through a large needle hole.  In 1844, the Irish physician Francis Rynd undertook injection by making a small incision in the skin and inserting a fine cannula (tube), letting gravity guide the medication to its intended site.[1]

The leap to a prototype of the modern syringe, in which a glass piston pushes medicine through a metal or glass barrel that ends in a hollow-pointed needle, occurred on two national fronts in 1853.  In Scotland, Alexander Wood, secretary of Edinburgh’s Royal College of Physicians, injected morphine solution directly into his patients in the hope of dulling their neuralgias.  There was a minor innovation and a major one.  Wood used sherry wine as his solvent, believing it would prove less irritating to the skin than alcohol and less likely to rust his instrument than water.  And then the breakthrough:  He administered the liquid morphine through a piston-equipped syringe that ended in a pointed needle.  Near the end of the needle, on one side, was an opening through which medicine could be released when an aperture on the outer tube was rotated into alignment with the opening.  The syringe was designed and made by the London instrument maker Daniel Ferguson, whose “elegant little syringes,” as Wood described them, were intended to inject iron perchloride (a blood-clotting agent, or coagulant) into skin lesions and birthmarks in the hope of making them less unsightly.  It never occurred to Ferguson that his medicine-releasing, needle-pointed syringes could be used for subcutaneous injection as well.[2]

Across the channel in the French city of Lyon, the veterinary surgeon Charles Pravaz employed a piston-driven syringe of his own making to inject iron perchloride into the blood vessels of sheep and horses.  Pravaz was not interested in unsightly birthmarks; he was searching for an effective treatment for aneurysms (enlarged arteries, usually due to weakening of the arterial walls) that he thought could be extended to humans.  Wood was the first in print – his “New Method of Treating Neuralgia by the Direct Application of Opiates to the Painful Points” appeared in the Edinburgh Medical & Surgical Journal in 1855[3] – and, shortly thereafter, he improved Ferguson’s design by devising a hollow needle that could simply be screwed onto the end of the syringe.  Unsurprisingly, then, he has received the lion’s share of credit for “inventing” the modern hypodermic syringe.  Pravaz, after all, was only interested in determining whether iron perchloride would clot blood; he never administered medication through his syringe to animals or people.

Wood and followers like the New York physician Benjamin Fordyce Barker, who brought Wood’s technique to Bellevue Hospital in 1856, were convinced that the injected fluid had a local action on inflamed peripheral nerves.  Wood allowed for a secondary effect through absorption into the bloodstream, but believed the local action accounted for the injection’s rapid relief of pain.  It fell to the London surgeon Charles Hunter to stress that the systemic effect of injectable narcotic was primary.  It was not necessary, he argued in 1858, to inject liquid morphine into the most painful spot; the medicine provided the same relief when injected far from the site of the lesion.  It was Hunter, seeking to underscore the originality of his approach to injectable morphine, especially its general therapeutic effect, who introduced the term “hypodermic” from the Greek compound meaning “under the skin.”[4]

It took time for the needle to become integral to doctors and doctoring.  In America, physicians greeted the hypodermic injection with skepticism and even dread, despite the avowals of patients that injectable morphine provided them with instantaneous, well-nigh miraculous relief from chronic pain.[5]  The complicated, time-consuming process of preparing injectable solutions prior to the manufacture of dissolvable tablets in the 1880s didn’t help matters.  Nor did the trial-and-error process of arriving at something like appropriate doses of the solutions.  But most importantly, until the early twentieth century, very few drugs were injectable.  Through the 1870s, the physician’s injectable arsenal consisted of highly poisonous (in pure form) plant alkaloids such as morphine, atropine (belladonna), strychnine, and aconitine, and, by decade’s end, the vasodilator heart medicine nitroglycerine.  The development of local and regional anesthesia in the mid-1880s relied on the hypodermic syringe for subcutaneous injections of cocaine solution, but as late as 1905, only 20 of the 1,039 drugs in the U.S. Pharmacopoeia were injectable.[6]  The availability of injectable insulin in the early 1920s heralded a new, everyday reliance on hypodermic injections, and over the course of the century, the needle, along with the stethoscope, came to stand in for the physician.  Now, of course, needles and doctors “seem to go together,” with the former signifying “the power to heal through hurting” even as it “condenses the notions of active practitioner and passive patient.”[7]

The child’s fear of needles, always a part of pediatric practice, has generated a literature of its own.  In the mid-twentieth century, in the heyday of Freudianism, children’s needle anxiety gave rise to psychodynamic musings.  In 1958, Karl Kassowitz of Milwaukee Children’s Hospital made the stunningly commonsensical observation that younger children were immature and hence more anxious about receiving injections than older children.  By the time kids were eight or nine, he found, most had outgrown their fear.  Among the fewer than 30% who hadn’t, Kassowitz gravely counseled, continuing resistance to the needle might represent “a clue to an underlying neurosis.”[8]  Ah, the good old Freudian days.

In the second half of the last century, anxiety about receiving injections was “medicalized” like most everything else and, in the more enveloping guise of BII (blood, injection, injury) phobia, found its way into the fourth edition of the American Psychiatric Association’s Diagnostic and Statistical Manual in 1994.  Needle phobia thereupon became the beneficiary of all that accompanies medicalization – a specific etiology, physical symptoms, associated EKG and stress hormone changes, and strategies of management.  The latter are impressively varied and range across medical, educational, psychotherapeutic, behavioral, cognitive-behavioral, relaxation, and desensitizing approaches.[9]  Recent literature also highlights the vasovagal reflex associated with needle and blood phobia.  Patients confronted with the needle become so anxious that an initial increase in heart rate and blood pressure is followed by a marked drop, as a result of which they become sweaty, dizzy, pallid, nauseated (any or all of the above), and sometimes faint (vasovagal syncope).  Another interesting finding is that needle phobia (especially in its BII variant), along with its associated vasovagal reflex, probably has a genetic component, as there is a much higher concordance within families for BII phobia than for other kinds of phobia.  Researchers who study twins put the heritability of BII phobia at around 48%.[10]

Needle phobia is still prevalent among kids, to be sure, but it has long since matured into a fully grown-up condition.  Surveys find injection phobia in anywhere from 9% to 21% of the general population and in even higher percentages of select populations, such as U.S. college communities.[11]  A study by the Dental Fears Research Clinic of the University of Washington in 1995 found that over a quarter of surveyed students and university employees were fearful of dental injections, with 5% admitting they avoided or canceled dental appointments out of fear.[12]  Perhaps some of these needlephobes bear the scars of childhood trauma.  Pediatricians now urge control of the pain associated with venipuncture and intravenous cannulation (tube insertion) in infants, toddlers, and young children, since there is evidence such procedures can have a lasting impact on pain sensitivity and tolerance of needle pricks.[13]

But people are not only afraid of needles; they also overvalue them and seek them out.  Needle phobia, whatever its hereditary contribution, is a creation of Western medicine.  The surveys cited above come from the U.S., Canada, and England.  Once we shift our gaze to developing countries of Asia and Africa, we behold a different needle-strewn landscape.  Studies attest not only to the high acceptance of the needle but also to its integration into popular understandings of disease.  Lay people in countries such as Indonesia, Tanzania, and Uganda typically want injections; indeed, they often insist on them because injected medicines, which enter the bloodstream directly and (so they believe) remain in the body longer, must be more effective than orally ingested pills or liquids.

The strength, rapid action, and body-wide circulation of injectable medicine – these things make injection the only cure for serious disease.[14]  So valued are needles and syringes in developing countries that most lay people, and even Registered Medical Practitioners in India and Nepal, consider it wasteful to discard disposable needles after only a single use.  And then there is the tendency of people in developing countries to rely on lay injectors (the “needle curers” of Uganda; the “injection doctors” of Thailand; the informal providers of India and Turkey) for their shots.  This has led to the indiscriminate use of  penicillin and other chemotherapeutic agents, often injected without attention to sterile procedure.  All of which contributes to the spread of infectious disease and presents a major headache for the World Health Organization.

The pain of the injection?  Bring it on.  In developing countries, the burning sensation that accompanies many injections signifies curative power.  In some cultures, people also welcome the pain as confirmation that real treatment has been given.[15]  In pain there is healing power.  It is the potent sting of modern science brought to bear on serious, often debilitating disease.  All of which suggests the contrasting worldviews and emotional tonalities collapsed into the fearful and hopeful question that frames this essay:  “Will it hurt?”


[1] On the prehistory of hypodermic injection, see D. L. Macht, “The history of intravenous and subcutaneous administration of drugs,” JAMA, 55:856-60, 1916; G. A. Mogey, “Centenary of Hypodermic Injection,” BMJ, 2:1180-85, 1953; N. Howard-Jones, “A critical study of the origins and early development of hypodermic medication,” J. Hist. Med., 2:201-49, 1947 and N. Howard-Jones, “The origins of hypodermic medication,” Scien. Amer., 224:96-102, 1971.

[2] J. B. Blake, “Mr. Ferguson’s hypodermic syringe,” J. Hist. Med., 15: 337-41, 1960.

[3] A. Wood, “New method of treating neuralgia by the direct application of opiates to the painful points,” Edinb. Med. Surg. J., 82:265-81, 1855.

[4] On Hunter’s contribution and his subsequent vitriolic exchanges with Wood over priority, see Howard-Jones, “Critical study of the origins and early development of hypodermic medication,” op. cit.  Patricia Rosales provides a contextually grounded discussion of the dispute and the committee investigation of Edinburgh’s Royal Medical and Chirurgical Society to which it gave rise.  See P. A. Rosales, A History of the Hypodermic Syringe, 1850s-1920s, unpublished doctoral dissertation, Department of the History of Science, Harvard University, 1997, pp. 21-30.

[5] See Rosales, History of Hypodermic Syringe, op. cit., chap. 3, on the early reception of hypodermic injections in America.

[6] G. Lawrence, “The hypodermic syringe,” Lancet, 359:1074, 2002; J. Calatayud & A. Gonsález, “History of the development and evolution of local anesthesia since the coca leaf,” Anesthesiology, 98:1503-08, 2003, at p. 1506; R. E. Kravetz, “Hypodermic syringe,” Am. J. Gastroenterol., 100:2614-15, 2005.

[7] A. Kotwal, “Innovation, diffusion and safety of a medical technology: a review of the literature on injection practices,”  Soc. Sci. Med., 60:1133-47, 2005, at p. 1133.

[8] Kassowitz, “Psychodynamic reactions of children to hypodermic needles,”  op. cit., quoted at p. 257.

[9] Summaries of the various treatment approaches to needle phobia are given in J. G. Hamilton, “Needle phobia: a neglected diagnosis,” J. Fam. Prac., 41:169-75, 1995, and H. Willemsen, et al., “Needle phobia in children: a discussion of aetiology and treatment options,” Clin. Child Psychol. Psychiatry, 7:609-19, 2002.

[10] Hamilton, “Needle phobia,” op. cit.; S. Torgersen, “The nature and origin of common phobic fears,” Brit. J. Psychiatry, 134:343-51, 1979; L-G. Ost, et al., “Applied tension, exposure in vivo, and tension-only in the treatment of blood phobia,” Behav. Res. Ther., 29:561-74, 1991;  L-G. Ost, “Blood and injection phobia: background and cognitive, physiological, and behavioral variables,” J. Abnorm. Psychol., 101:68-74, 1992.

[11] References to these surveys are provided by Hamilton, “Needle phobia,” op. cit.

[12] On the University of Washington survey, see P. Milgrom, et al., “Four dimensions of fear of dental injections,” J. Am. Dental Assn., 128:756-66, 1997 and T. Kaakko, et al., “Dental fear among university students: implications for pharmacological research,” Anesth. Prog., 45:62-67, 1998.  Lawrence Proulx reported the results of the survey in The Washington Post under the heading “Who’s afraid of the big bad needle?” July 1, 1997, p. 5.

[13] R. M. Kennedy, et al., “Clinical implications of unmanaged needle-insertion pain and distress in children,” Pediatrics, 122:S130-S133, 2008.

[14] See Kotwal, “Innovation, diffusion and safety of a medical technology,” op. cit., p. 1136 for references.

[15] S. R. Whyte & S. van der Geest, “Injections: issues and methods for anthropological research,” in N. L. Etkin & M. L. Tan, eds., Medicine, Meanings and Contexts (Quezon City, Philippines: Health Action Information Network, 1994), pp. 137-8.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Your Tool Touches Me

It is little known that René Laënnec, the Parisian physician who invented the stethoscope at the Necker Hospital in 1816, found it distasteful to place his ear to the patient’s chest.  The distastefulness of “direct auscultation” was compounded by its impracticality in the hospital where, he observed, “it was scarcely to be suggested for most women patients, in some of whom the size of the breasts also posed a physical obstacle.”[1]  The stethoscope, which permitted “mediate auscultation,” not only amplified heart and lung sounds in diagnostically transformative ways; it enabled Laënnec to avoid repugnant  ear to chest contact.

Many women patients of Laënnec’s time and place did not see it that way.  Accustomed to the warmly human pressure of ear on chest, they were uncomfortable when an elongated wooden cylinder was interposed between the two.  By the closing decades of the nineteenth century, of course, the situation was inverted:  The stethoscope, in its modern binaural guise, had become so integral to physical examination that patients  hardly viewed it as a tool at all.  It had become emblematic of hands-on doctoring and, as such, a sensory extender of the doctor.  Even now, the stethoscope virtually stands in for the doctor, especially the generalist or the cardiologist, so that a retiring physician will announce that he is, or will be characterized by others as, hanging up his stethoscope.[2]

It’s easy to argue for the “oneness” of the physician and his or her instruments when it’s a matter of simple tools that amplify sensory endowment  (stethoscopes), provide a hands-on bodily “reading” (of temperature or blood pressure), or elicit a tendon reflex (e.g., the reflex hammer).  And the argument can be extended without much difficulty to the more invasive, high-tech “scopes” used by medical specialists to see what is invisible to the naked eye.  Instruments become so wedded to one or another specialty that it is hard to think of our providers without them.  What is an ophthalmologist without her ophthalmoscope?  An ENT without his nasal speculum?  A gynecologist without her vaginal speculum?  An internist without his blood pressure meter?  Such hand-held devices are diagnostic enablers, and as such they are, or at least ought to be, our friends.

In “Caring Technology” I suggested that even large-scale technology administered by technicians, and therefore outside the physician’s literal grasp, can be linked in meaningful ways to the physician’s person.  A caring explanation of the need for this or that study, informed by a relational bond, can humanize even the most forbidding high-tech machinery.  To be sure, medical machinery, whatever the discomfort and/or bodily bombardment it entails, is often intimidating.  But it need be alienating only when we come to it in an alienated state, when it is not an instrument of physicianly engagement but a dehumanized object – a piece of technology.

Critical care nurses, whose work is both technology-laden and technology-driven, have had much to say on the relationship of technology to nursing identity and nursing care.  This literature includes provocative contributions that look at where nurses stand in a hospital hierarchy that comprises staff physicians, residents, students, administrators, patients, and patients’ families.

For some CCU nurses, the use of technology and the acquisition of technological competence segue into issues of power and autonomy and they, in turn, are linked to issues of gender, medical domination, and “ownership” of the technology.[3]  A less feminist sensibility informs interview research that yields unsurprising empirical findings, viz.,  that comfort with technology and the ability to incorporate it into a caring, “touching” disposition hinge on the technological mastery associated with nursing experience.  Student and novice nurses, for example, find the machinery of the CCU anxiety-inducing, even overwhelming.  They resent the casual manner in which physicians relegate to them complex technological tasks, such as weaning patients from respirators, without appreciating the long list of  nursing duties to which such tasks are appended.[4]  Withal, beginners approach the use of technology in task-specific ways and have great difficulty “caring with technology.”[5]   Theirs is not a caring technology but a technology that causes stress and jeopardizes fragile professional identities.

Experienced CCU nurses, on the other hand, achieve a technological competence that lets them pull the machinery to them; they use it as a window of opportunity for being with their patients.[6]   Following Christine Little, we can give the transformation from novice to expert a phenomenological gloss and say that as technological inexperience gives way to technological mastery, technological skills become “ready-to-hand” (Heidegger) and “a natural extension of practice.”[7]

Well and good.  We want critical care nurses comfortable with the machinery of critical care – with cardiac and vital signs monitors, respirators, catheters, and infusion pumps – so that implementing technological interventions and monitoring the monitors do not blot out the nurse’s “presence”  in the patient’s care.   But all this is from the perspective of the nurse and her role in the hospital.  What, one wonders, does the patient make of all this technology?

Humanizing technology means identifying with it in ways that are not only responsive to the patient’s fears but also conducive to a shared appreciation of its role in treatment.  It is easier for patients to feel humanly touched by technology, that is, if their doctors and nurses appropriate it and represent it as an extender of care.  Perhaps some doctors and nurses do so as a matter of course, but one searches the literature in vain for examples of nurse-patient or doctor-patient interactions that humanize technology through dialogue.  And such dialogue, however perfunctory in nature, may greatly matter.

Consider the seriously ill patient whose nurse interacts with him without consideration of the technology-saturated environment in which care is given.  Now consider the seriously ill patient whose nurse incorporates the machinery into his or her caregiving identity, as in “This monitor [or this line or this pump] is a terrific thing for you and for me.  It lets me take better care of you.”  Such reassurance, which can be elaborated in any number of patient-centered ways, is not trivial; it may turn an anxious patient around, psychologically speaking.  And it is all the more important when, owing to the gravity of the patient’s condition, the nurse must spend more time assessing data and tending to machinery than caring for the patient.  Here especially the patient needs to be reminded that the nurse’s responsibility for machinery expands his or her role as the patient’s guardian.[8]

The touch of the physician’s sensory extenders, if literally uncomfortable, may still be comforting.  For it is the physician’s own ears that hear us through the stethoscope and the physician’s own eyes that gaze on us through the ophthalmoscope, the laryngoscope, the esophagoscope, the colposcope.  It is easier to appreciate tools as beneficent extenders of care in the safe confines of one’s own doctor’s office, where instrumental touching is fortified by the relational bond that grows out of continuing care.  In the hospital, absent such relational grounding, there is more room for dissonance and hence more need for shared values and empathy.  A nurse who lets the cardiac monitor pull her away from patient care will not do well with a frightened patient who needs personal caring.  A parturient who welcomes the technology of the labor room will connect better with a labor nurse who values the electronic fetal monitor (and the reassuring visualization it provides the soon-to-be mother) than with a nurse who is unhappy with its employment in low-risk births and prefers a return to intermittent auscultation.

In the best of circumstances, tools elicit an intersubjective convergence grounded in an expectation of objectively superior care.  It helps to keep the “objective care” part in mind, to remember that technology was not devised to frighten us, encumber us, or cause us pain,  but to help doctors and nurses evaluate us, keep us stable and comfortable, and enable treatments that will make us better, or at least leave us better off than our technology-free forebears.

My retinologist reclines the examination chair all the way back and begins prepping my left eye for its second intravitreal  injection of Eylea, one of the newest drugs used to treat macular disease.  I am grateful for all the technology that has brought me to this point:  the retinal camera, the slit lamp, the optical coherence tomography machine.  I am especially grateful for the development of fluorescein angiography, which allows my doctor to pinpoint with great precision the lesion in need of treatment.  And of course I am grateful to my retinologist, who brings all this technology to bear with a human touch, calmly reassuring me through every step of evaluation and treatment.

I experienced almost immediate improvement after the first such injection a month earlier and am eager to proceed with the treatment.  So I am relatively relaxed as he douses my eye with antiseptic and anesthetic washes in preparation for the needle.  Then, at the point of injection, he asks me to look up at the face of his assistant, a young woman with a lovely smile.  “My pleasure,” I quip, slipping into gendered mode.  “I love to look at pretty faces.”   I am barely aware of the momentary pressure of the needle that punctures my eyeball and releases this wonderfully effective new drug into the back of my eye.  It is not the needle that administers treatment but my trusted and caring physician.  “Great technique,” I remark.  “I barely felt it.”  To which his young assistant, still standing above me, smiles and adds,  “I think I had something to do with it.”  And indeed she had.


[1] Quoted in J. Duffin, To See with a Better Eye: A Life of R. T. H. Laennec (Princeton: Princeton University Press, 1998), p. 122.

[2] Here are a few recent examples:  O. Samuel, “On hanging up my stethoscope,” BMJ, 312:1426, 1996; “Dr. Van Ausdal hangs up his stethoscope,” YSNews.com, September 26, 2013 (http://ysnews.com/news/2013/09/dr-van-ausdal-hangs-up-his-stethoscope);  “At 90, Gardena doctor is hanging up his stethoscope,” The Daily Breeze, October, 29, 2013 (http://www.dailybreeze.com/general-news/20131029/at-90-gardena-doctor-is-hanging-up-his-stethoscope);  “Well-known doctor hangs up his stethoscope,” Bay Post, February 8, 2014 (http://www.batemansbaypost.com.au/story/1849567/well-known-doctor-hangs-up-his-stethoscope)

[3] See, for example, A. Barnard, “A critical review of the belief that technology is a neutral object and nurses are its master,” J. Advanced Nurs., 26:126-131, 1997; J. Fairman & P. D’Antonio, “Virtual power: gendering the nurse-technology relationship,” Nurs. Inq., 6:178-186, 1999; & B. J. Hoerst & J. Fairman, “Social and professional influences of the technology of electronic fetal monitoring on obstetrical nursing,” Western J. Nurs. Res., 22:475-491, 2000, at pp. 481-82.

[4] C. Crocker & S. Timmons, “The role of technology in critical care nursing,” J. Advanced Nurs., 65:52-61, 2008.

[5] M. McGrath, “The challenges of caring in a technological environment:  critical care nurses’ experiences,” J. Clin. Nurs., 17:1096-1104, 2008.

[6] A. Bernardo, “Technology and true presence in nursing,” Holistic Nurs. Prac., 12:40-49, 1998;  R. C. Locsin,  Technological Competency As Caring in Nursing: A Model For Practice (Indianapolis: Centre Nursing Press, 2005);  McGrath, “The challenges of caring,” op. cit.

[7] C. V. Little, “Technological competence as a fundamental structure of learning in critical care nursing: a phenomenological study,” J. Clin. Nurs., 9:391-399, 2000, at pp. 398, 396.

[8] See E. A. McConnell, “The impact of machines on the work of critical care nurses,” Crit. Care Nurs. Q., 12:45-52, 1990, at p. 51; D. Pelletier, et al., “The impact of the technological care environment on the nursing role,” Int. J. Tech. Assess. Health Care, 12:358-366, 1996.

Copyright © 2014 by Paul E. Stepansky.  All rights reserved.

Caring Technology

The critique of contemporary medical treatment as impersonal, uncaring, and disease-focused usually invokes the dehumanizing perils of high technology.  The problem is that high technology is a moving target.  In the England of the 1730s, obstetrical forceps were the high technology of the day; William Smellie, London’s leading obstetrical physician, opposed their use for more than a decade, despite compelling evidence that the technology revolutionized childbirth by permitting obstructed births to become live births.[1]  For much of the nineteenth century, stethoscopes and sphygmomanometers (blood pressure meters) were considered technological contrivances that distanced the doctor from the patient.  For any number of Victorian patients (and doctors too), the kindly ear against the chest and the trained finger on the wrist helped make the physical examination an essentially human encounter.  Interpose instruments between the physician and the patient and, ipso facto, you distance the one from the other.  In late nineteenth-century Britain, “experimental” or “laboratory” medicine was itself a revolutionary technology, and it elicited  bitter denunciation from antivivisectionists (among whom were physicians) that foreshadows contemporary indictments of the “hypertrophied scientism” of modern medicine.[2]

Nineteenth-century concerns about high technology blossomed in the early twentieth century when technologies (urinalysis, blood studies, x-rays, EKGs) multiplied and their use shifted to hospital settings.  Older pediatricians opposed the use of new-fangled incubators for premature newborns: the devices, they held, not only had faulty ventilation that deprived infants of fresh air but were a wasteful expenditure, given that preemies of the poor were never brought to the hospital right after birth.[3]  Cautionary words were always at hand for the younger generation given to the latest gadgetry.  At the dedication of Yale’s Sterling Hall of Medicine, the neurosurgeon Harvey Cushing extolled family physicians as exemplars of his gospel of observation and deduction and urged Yale students to engage in actual “house-to-house practice” without the benefit of “all of the paraphernalia and instruments of precision supposed to be necessary for a diagnosis.”  This was in 1925.[4]

Concerns about the impact of technology on doctor-patient relationships blossomed again in the 1960s and 70s and played a role in the rebirth of primary care medicine in the guise of the “family practice movement.”  Reading the papers of the recently deceased G. Gayle Stephens, written at the time and collected in his volume The Intellectual Basis of Family Practice (1982), is a strong reminder of the risks attendant to loading high technology with relational meaning.  Stephens, an architect of the new structure of primary care training, saw the “generalist role in medicine” as an aspect of 70s counterculture that questioned an “unconditional faith in science” that extended to medical training, practice, and values.  And so he aligned the family practice movement with other social movements of the 70s that sought to put the brakes on scientism run rampant:  agrarianism, utopianism, humanism, consumerism, and feminism.  With its clinical focus on the whole person and liberal borrowings from psychiatry and the behavioral sciences, family practice set out to liberate medicine from its “captivity” to a flawed view of reality that was mechanistic, protoplasmic, and molecular.[5]

Technology was deeply implicated in Stephens’ critique, even though he failed to stipulate which technologies he had in mind.  His was a global indictment: Medicine’s obsession with its “technological legerdemain” blinded the physician to the rich phenomenology of “dis-ease” and, as such, was anti-Hippocratic.  For Stephens, the “mechanical appurtenances of healing” had to be differentiated from the “essential ingredient” of the healing process, viz., “a physician who really cares about the patient.” “We have reached a point of diminishing returns in the effectiveness of technology to improve the total health of our nation.”  So he opined in 1973, only two years after the first crude CT scanner was demonstrated in London and long before the development of MRIs and PET scans, of angioplasty with stents, and of the broad array of laser- and computer-assisted operations available to contemporary surgeons.[6]  Entire domains of technologically guided intervention – consider technologies of blood and marrow transplantation and medical genetics – barely existed in the early 70s.  Robotics was the stuff of science fiction.

It is easy to sympathize with both Stephens’ critique and his mounting skepticism about the family practice movement’s ability to realize its goals.[7]  He placed the movement on an ideological battleground in which the combatants were of unequal strength and numbers.  There was the family practice counterculture, with the guiding belief that “something genuine and vital occurs in the meeting of doctor and patient” and the pedagogical correlate that “A preoccupation with a disease instead of a person is detrimental to good medicine.”  And then there were the forces of organized medicine, of medical schools, of turf-protecting internists and surgeons, of hospitals with their “business-industrial models” of healthcare delivery, of specialization and of technology – all bound together by a cultural commitment to science and its “reductionist hypothesis about the nature of reality.”[8]

Perceptive and humane as Stephens’ critique was, it fell back on the very sort of reductionism he imputed to the opponents of family practice.  Again and again, he juxtaposed “high technology,” in all its allure (and allegedly diminishing returns), with the humanistic goals of patient care.  But are technology and humane patient care really so antipodal?  Technology in and of itself has no ontological status within medicine.  It promotes neither a mechanistic worldview that precludes holistic understanding of patients as people nor a humanizing of the doctor-patient encounter.  In fact, technology is utterly neutral with respect to the values that inform medical practice and shape individual doctor-patient relationships.  Technology does not make (or unmake) the doctor.  It no doubt affects the physician’s choice of specialty, pulling those who lack doctoring instincts or people skills in problem-solving directions (think diagnostic radiology or pathology).  But this is hardly a bad thing.

For Stephens, who struggled to formulate an “intellectual” defense of family practice as a new medical discipline, technology was an easy target.  Infusing the nascent behavioral medicine of his day with a liberal dose of sociology and psychoanalysis, he envisioned the family practice movement as a vehicle for recapturing “diseases of the self” through dialogue.[9]  To the extent that technology – whose very existence all but guaranteed its overuse – supplanted  the sensibility (and associated communicational skills) that enabled such dialogue, it was ipso facto part of the problem.

Now there is no question that overreliance on technology, teamed with epistemic assurance that technology invariably determines what is best, can make a mess of things, interpersonally speaking.  But is the problem with the technology or with the human beings who use it?  Technology, however “high” or “low,” is an instrument of diagnosis and treatment, not a signpost of treatment well- or ill-rendered.  Physicians who are not patient-centered will assuredly not find themselves pulled toward doctor-patient dialogue through the tools of their specialty.  But neither will they become less patient-centered on account of these tools.  Physicians who are patient-centered, who enjoy their patients as people, and who comprehend their physicianly responsibilities in broader Hippocratic terms – these physicians will not be rendered less human, less caring, less dialogic, because of the technology they rely on.  On the contrary, their caregiving values, if deeply held, will suffuse the technology and humanize its deployment in patient-centered ways.

When my retinologist examines the back of my eyes with the high-tech tools of his specialty – a retinal camera, a slit lamp, an optical coherence tomography machine – I do not feel that my connection with him is depersonalized or objectified through the instrumentation.  Not in the least.  On the contrary, I perceive the technology as an extension of his person.  I am his patient, I have retinal pathology, and I need his regular reassurance that my condition remains stable and that I can continue with my work.  He is responsive to my anxiety and sees me whenever I need to see him.  The high technology he deploys in evaluating the back of my eye does not come between us; it is a mechanical extension of his physicianly gaze that fortifies his judgment and amplifies the reassurance he is able to provide.  Because he cares for me, his technology cares for me.  It is caring technology because he is a caring physician.

Modern retinology is something of a technological tour de force, but it is no different in kind from other specialties that employ colposcopes, cystoscopes, gastroscopes, proctoscopes, rhinoscopes, and the like to investigate symptoms and make diagnoses.  If the physician who employs the technology is caring, then all such technological invasions, however unpleasant, are caring interventions.  The cardiologist who recommends an invasive procedure like cardiac catheterization is no less caring on that account; such high technology does not distance him from the patient, though it may well enable him to maintain the distance that already exists.  It is a matter of personality, not technology.

I extend this claim to advanced imaging studies as well.  When the need for an MRI is explained in a caring and comprehensible manner, when the explanation is enveloped in a trusting doctor-patient relationship, then the technology, however discomfiting, becomes the physician’s collaborator in care-giving.  This is altogether different from the patient who demands an MRI or the physician who, in the throes of defensive medicine, remarks off-handedly, “Well, we better get an MRI” or simply, “I’m going to order an MRI.”

Medical technology, at its best, is the problem-solving equivalent of a prosthetic limb.  It is an inanimate extender of the physician’s mental “grasp” of the problem at hand.  To the extent that technology remains tethered to the physician’s caring sensibility, to his understanding that his diagnostic or treatment-related problem is our existential problem – and that, per Kierkegaard, we are often fraught with fear and trembling on account of it – then we may welcome the embrace of high technology, just as polio patients of the 1930s and ’40s with paralyzed intercostal muscles welcomed the literal embrace of the iron lung, which enabled them to breathe fully and deeply and without pain.

No doubt, many physicians fail to comprehend their use of technology in this fuzzy, humanistic way – and we are probably the worse for it.  Technology does not structure interpersonal relationships; it is simply there for the using or abusing.  The problem is not that we have too much of it, but that we impute a kind of relational valence to it, as if otherwise caring doctors are pulled away from patient care because technology gets between them and their patients.  With some doctors, this may indeed be the case.  But it is not the press of technology per se that reduces physicians to, in a word Stephens disparagingly uses, “technologists.”  The problem is not in their tools but in themselves.


[1] A. Wilson, The Making of Man-Midwifery: Childbirth in England, 1660-1770 (Cambridge: Harvard, 1995), pp. 97-98, 127-128.

[2] R. D. French, Antivivisection and Medical Science in Victorian Society (Princeton:  Princeton University Press, 1975), p. 411.

[3] J. P. Baker, “The Incubator Controversy: Pediatricians and the Origins of Premature Infant Technology in the United States, 1890 to 1910,” Pediatrics, 87:654-662, 1991.

[4] E. H. Thomson, Harvey Cushing: Surgeon, Author, Artist (NY: Schuman, 1950), pp. 244-45.

[5] G. G. Stephens, The Intellectual Basis of Family Practice (Kansas City: Winter, 1982), pp. 62, 56, 83-85, 135-39.

[6] Stephens, Intellectual Basis of Family Practice, pp. 84, 191, 64, 39, 28.

[7] E.g., Stephens, Intellectual Basis of Family Practice, pp. 96, 194.  Cf. his comment on the American College of Surgeons’ effort to keep FPs out of the hospital: “There are issues of political hegemony masquerading as quality of patient care, medicolegal issues disguised as professional qualifications, and economic wolves in the sheepskins of guardians of the public safety” (p. 69).

[8] Stephens, Intellectual Basis of Family Practice, pp. 23, 38, 22.  In 1978, he spoke of the incursion of family practice  into the medical school curriculum of the early 70s as an assault on an entrenched power base:  “The medical education establishment has proved to be a tough opponent, with weapons we never dreamed of. . . .We had to deal with strong emotions, hostility, anger, humiliation. Our very existence was a judgment on the schools, much in the same way that civil rights demonstrators were a judgment on the establishment.  We identified ourselves with all the natural critics of the schools – students, underserved segments of the public, and their elected representatives – to bring pressure to bear on the schools to create academic units devoted to family practice” (pp. 184, 187).

[9] Stephens, Intellectual Basis of Family Practice, pp. 94, 105, 120-23, 192.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

The Costs of Medical Progress

When historians of medicine introduce students to the transformation of acute, life-threatening, often terminal illness into long-term, manageable, chronic illness – a major aspect of 20th-century medicine – they immediately turn to diabetes.  There is Diabetes B.I. (diabetes before insulin) and diabetes in the Common Era, i.e., Diabetes A.I. (diabetes after insulin).  Before Frederick Banting, who knew next to nothing about the complex pathophysiology of diabetes, isolated insulin in his Toronto laboratory in 1922, juvenile diabetes was a death sentence; its young victims were consigned to starvation diets and early deaths.  Now, in the Common Era, young diabetics grow into mature diabetics and type II diabetics live to become old diabetics.  Life-long management of what has become a chronic disease will take them through a dizzying array of testing supplies, meters, pumps, and short- and long-term insulins.  It will also put them at risk for the onerous sequelae of long-term diabetes:  kidney failure, neuropathy, retinopathy, and amputation of lower extremities.  Of course all the associated conditions of adult diabetes can be managed more or less well, with their own technologically driven treatments (e.g., hemodialysis for kidney failure) and long-term medications.

The chronicity of diabetes is both a blessing and a curse.  Chris Feudtner, the author of the outstanding study of its transformation, characterizes it as a “cyclical transmuted disease” that no longer has a stable “natural” history.  “Defying any simple synopsis,” he writes, “the metamorphosis of diabetes wrought by insulin, like a Greek myth of rebirth turned ironic and macabre, has led patients to fates both blessed and baleful.”[1]  He simply means that what he terms the “miraculous therapy” of insulin only prolongs life at the expense of serious long-term problems that did not exist, that could not exist, before the availability of insulin.  So depending on the patient, insulin signifies a partial victory or a foredoomed victory, but even in the best of cases, to borrow the title of Feudtner’s book, a victory that is “bittersweet.”

It is the same story whenever new technologies and new medications override an otherwise grim prognosis.  Beginning in the early 1930s, we put polio patients (many of whom were kids) with paralyzed intercostal muscles and diaphragms into the newly invented Iron Lung.[2]  The machine’s electrically driven blowers created negative pressure inside the tank that made the kids breathe.  They could relax and stop struggling for air, though they required intensive, around-the-clock nursing care.[3]  Many survived but spent months or years, occasionally even lifetimes, in Iron Lungs.  Most regained enough lung capacity to leave their steel tombs (or were they nurturing wombs?) and graduated to a panoply of mechanical polio aids: wheelchairs, braces, and crutches galore.  An industry of rehab facilities (like FDR’s fabled Warm Springs Resort in Georgia) sprang up to help patients regain as much function as possible.

Beginning in 1941, the National Foundation for Infantile Paralysis (NFIP), founded by FDR and his friend Basil O’Connor in 1937, footed the bill for the manufacture of Iron Lungs and then distributed them via regional centers to communities where they were needed.  The Lungs, it turned out, were foundation-affordable devices, and it was unseemly, even un-American, to worry about the cost of hospitalization and nursing care for the predominantly young, middle-class white patients who temporarily resided in them, still less about the costs of post-Iron Lung mechanical appliances and rehab personnel that helped get them back on their feet.[4]  To be sure, African American polio victims were unwelcome at tony resort-like facilities such as Warm Springs, but the NFIP, awash in largesse, made a grant of $161,350 to Tuskegee Institute’s Hospital so that it could build and equip its own 35-bed “infantile paralysis center for Negroes.”[5]

Things got financially dicey for the NFIP only when Iron Lung success stories, disseminated through print media, led to overuse.  Parents read the stories and implored doctors to give their stricken children the benefit of this life-saving invention – even when their children had a form of polio (usually bulbar polio) in the face of which the mechanical marvel was useless.  And what pediatrician, moved by the desperation of loving parents beholding a child gasping for breath, would deny them the small peace afforded by use of the machine and the around-the-clock nursing care it entailed?

The cost of medical progress is rarely the cost of this or that technology for this or that disease.  No, the cost corresponds to cascading “chronicities” that pull multiple technologies and treatment regimens into one gigantic flow.  We see this development clearly in the development and refinement of hemodialysis for kidney failure.  Dialysis machines only became life-extenders in 1960, when Belding Scribner, working at the University of Washington Medical School, perfected the design of a surgically implanted Teflon cannula and  shunt through which the machine’s tubing could be attached, week after week, month after month, year after year.  But throughout the 60s, dialysis machines were in such short supply that treatment had to be rationed:  Local medical societies and medical centers formed “Who Shall Live” committees to decide who would receive dialysis and who not.  Public uproar followed, fanned by the newly formed National Association of Patients on Hemodialysis, most of whose members, be it noted, were white, educated, professional men.

In 1972, Congress responded to the pressure and decided to fund all treatment for end-stage renal disease (ESRD) through Section 299I of the Social Security Amendments of 1972.  Dialysis, after all, was envisioned as long-term treatment for only a handful of appropriate patients, and in 1973 only 10,000 people received the treatment at a government cost of $229 million.  But things did not go as planned.  In 1990, the 10,000 had grown to 150,000 and their treatment cost the government $3 billion.  And in 2011, the 150,000 had grown to 400,000 people and drained the Social Security Fund of $20 billion.

What happened?  Medical progress happened.  Dialysis technology was not static; it was refined and became available to sicker, more debilitated patients who encompassed an ever-broadening socioeconomic swath of the population with ESRD.  Improved cardiac care, drawing on its own innovative technologies, enabled cardiac patients to live long enough to go into kidney failure and receive dialysis.  Ditto for diabetes, where improved long-term management extended the diabetic lifespan to the stage of kidney failure and dialysis.  The result:  Dialysis became mainstream and its costs  spiraled onward and upward.  A second booster engine propelled dialysis-related healthcare costs still higher, as ESRD patients now lived long enough to become cardiac patients and/or insulin-dependent diabetics, with the costs attendant to managing those chronic conditions.

With the shift to chronic disease, the historian Charles Rosenberg has observed, “we no longer die of old age but of a chronic disease that has been managed for years or decades and runs its course.”[6] To which I add a critical proviso:  Chronic disease rarely runs its course in glorious pathophysiological isolation.  All but inevitably, it pulls other chronic diseases into the running.  Newly emergent chronic disease is collateral damage attendant to chronic disease long-established and well-managed.  Chronicities cluster; discrete treatment technologies leach together; medication needs multiply.

This claim does not minimize the inordinate impact – physical, emotional, and financial – of a single disease.  Look at AIDS/HIV, a “single” entity that brings into its orbit all the derivative illnesses associated with “wasting disease.”  But the larger historical dynamic is at work even with AIDS.  If you live with the retrovirus, you are at much greater risk of contracting TB, since the very immune cells destroyed by the virus are the ones that enable the body to fight the TB bacterium.  So we behold a resurgence of TB, especially in developing nations, because of HIV infection.[7]  And because AIDS/HIV is increasingly a chronic condition, we need to treat disproportionate numbers of HIV-infected patients for TB.  They have become AIDS/HIV patients and TB patients.  Worldwide, TB is the leading cause of death among persons with HIV infection.

Here in microcosm is one aspect of our health care crisis.  Viewed historically, it is a crisis of success that corresponds to a superabundance of long-term multi-disease management tools and ever-increasing clinical skill in devising and implementing complicated multidrug regimens.  We cannot escape the crisis brought on by these developments, nor should we want to.  The crisis, after all, is the financial result of a century and a half of life-extending medical progress.  We cannot go backwards.  How then do we go forward?  The key rests in the qualifier “one aspect.”  American health care is organismic; it is a huge octopus with specialized tentacles that simultaneously sustain and toxify different levels of the system.  To remediate the financial crisis we must range across these levels in search of more radical systemic solutions.


[1]C. Feudtner, Bittersweet: Diabetes, Insulin, and the Transformation of Illness (Chapel Hill: University of North Carolina Press, 2003), p. 36.

[2] My remarks on the development and impact of the Iron Lung and hemodialysis, respectively, lean on D. J. Rothman, Beginnings Count: The Technological Imperative in American Health Care (NY: Oxford University Press, 1997).  For an unsettling account of the historical circumstances and market forces that have undermined the promise of dialysis in America, see Robin Fields, “‘God help you. You’re on dialysis’,” The Atlantic, 306:82-92, December 2010.  The article is online at http://www.theatlantic.com/magazine/archive/2010/12/-8220-god-help-you-you-39-re-on-dialysis-8221/8308/.

[3] L. M. Dunphy, “’The Steel Cocoon’: Tales of the Nurses and Patients of the Iron Lung, 1929-1955,” Nursing History Review, 9:3-33, 2001.

[4] D. J. Wilson, “Braces, Wheelchairs, and Iron Lungs: The Paralyzed Body and the Machinery of Rehabilitation in the Polio Epidemics,” Journal of Medical Humanities, 26:173-190, 2005.

[5] See S. E. Mawdsley, “’Dancing on Eggs’: Charles H. Bynum, Racial Politics, and the National Foundation for Infantile Paralysis, 1938-1954,” Bull. Hist. Med., 84:217-247, 2010.

[6] C. Rosenberg, “The Art of Medicine: Managed Fear,” Lancet, 373:802-803, 2009.  Quoted at p. 803.

[7] F. Ryan, The Forgotten Plague: How the Battle Against Tuberculosis was Won and Lost  (Boston:  Little, Brown, 1992), pp. 395-398, 401, 417.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

Medical Toys, Old and New

“The plethora of tests available to the young clinician has significantly eroded the skills necessary to obtain adequate histories and careful physical examinations.  Day in and day out, I encounter egregious examples of misdiagnosis engendered by inadequacies in these skills.”  ~ William Silen, M.D., “The Case for Paying Closer Attention to Our Patients” (1996)

“Treat the Patient, Not the CT Scan,” adjures Abraham Verghese in a New York Times op-ed piece of February 26, 2011.  Verghese targets American medicine’s overreliance on imaging tests, but, like others before him, he is really addressing the mindset that fosters such overreliance.  Preclinical medical students, he reminds us, all learn physical examination and diagnosis, but their introduction to the art dissipates under the weight of diagnostic tests and specialist procedures during their clinical years.  “Then,” he writes, “they discover that the currency on the ward seems to be ‘throughput’ – getting tests ordered and getting results, having procedures like colonoscopies done expeditiously, calling in specialists, arranging discharge.”  In the early 90s, William Silen, Harvard’s Johnson and Johnson Distinguished Professor of Surgery,[1] made the same point with greater verve.  In one of his wonderful unpublished pieces, “Lumps and Bumps,” he remarked that “the modern medical student, and most physicians, have been so far removed from physical diagnosis, that they simply do not accept that a mass is a mass is a mass unless the CT scan or ultrasound tells them it is there.”

Verghese and Silen get no argument from me on the clinical limitations and human failings associated with technology-driven medicine.  But these concerns are hardly unique to an era of CT scans and MRIs.  There is a long history of concern about overreliance on new technologies;  Silen has a delightfully pithy, unpublished piece on the topic that is simply titled, “New Toys.”

One limitation of such critiques is the failure to recognize that not all “toys” are created equal.  Some new toys become old toys, at which point they cease being toys altogether and simply become part of the armamentarium that the physician brings to the task of physical examination and diagnosis.  For example, we have long since stopped thinking of x-ray units, EKG machines, blood pressure meters (i.e., sphygmomanometers), and stethoscopes as “new toys” that militate against the acquisition of hands-on clinical skill.

But it was not always so.  When x-rays became available in 1896, clinical surgeons were aghast.  What kind of images were these?  Surely not photographic images in the reliably objectivistic late-nineteenth-century sense of the term.  The images were wavy, blurry, and imprecise, vulnerable to changes in the relative location of the camera, the x-ray tube, and the object under investigation.  That such monstrously opaque images might count as illustrative evidence in courts of law, that they might actually be turned against the surgeon and his “expert opinion” – what was the world coming to?  Military surgeons quickly saw the usefulness of x-rays for locating bullets and shrapnel, but their civilian colleagues remained suspicious of the new technology for a decade or more after its discovery.  No fools, they resorted to x-rays only when they felt threatened by malpractice suits.

Well before the unsettling advent of x-ray photography, post-Civil War physician-educators were greatly concerned about the use of mechanical pulse-reading instruments.  These ingenious devices, so they held, would discourage young physicians from learning to appreciate the subtle diagnostic indicators embedded in the pulse.  And absent such appreciation, which came only from prolonged training of their fingertips, they could never acquire the diagnostic acumen of their seniors, much less that of the great pulse readers of the day.

Thus they cautioned students and young colleagues to avoid the instruments.  It was only through “the habit of discriminating pulses instinctively” that the physician acquired  “valuable truths . . . which he can apply to practice.”  So inveighed the pioneering British physiologist John Burdon-Sanderson in 1867.  His judgment was shared by a generation of senior British and American clinicians for whom the trained finger remained a more reliable measure of radial pulse than the sphygmograph’s arcane tracings.  In The Pulse, his manual of 1890, William Broadbent cautioned his readers to avoid the sphygmograph, since interpretation of its tracings could “twist facts in the desired direction.”  Physicians should “eschew instrumental aids and educate the finger,” echoed Graham Steell in The Use of the Sphygmograph in Medicine at the century’s close.[2]

Lower still on the totem pole of medical technology – indeed, about as low down as one can get – is the stethoscope, “invented” by René Laennec in 1816 and first employed by him in the wards of Paris’s Hôpital Necker (see sidebar).  In 1898, James Mackenzie, the founder of modern cardiology, relied on the stethoscope, used in conjunction with his own refinement of the Dudgeon sphygmograph of 1881 (i.e., the Mackenzie polygraph of 1892), to identify what we now term atrial fibrillation.  In the years to follow, Mackenzie, a master of instrumentation, became the principal exponent of what historians refer to as the “new cardiology.”  His “New Methods of Studying Affections of the Heart,” a series of articles published in the British Medical Journal in 1905, signaled a revolution in understanding cardiac function.  “No man,” remarked his first biographer, R. McNair Wilson, in 1926, “ever used a stethoscope with a higher degree of expertness.”  And yet this same Mackenzie lambasted the stethoscope as the instrument that had “not only for one hundred years hampered the progress of knowledge of heart affections, but had done more harm than good, in that many people had had the tenor of their lives altered, had been forbidden to undertake duties for which they were perfectly competent, and had been subject to unnecessary treatment because of its findings.”[3]

Why did Mackenzie come to feel this way?  The problem with the stethoscope was that the auscultatory sounds it “discovered,” while diagnostically illuminating, could cloud clinical judgment and lead to unnecessary treatments, including draconian restrictions of lifestyle.  For Mackenzie, sphygmomanometers were essentially educational aids that would corroborate what medical students were learning to discern through their senses.  And, of course, he allowed for the importance of such gadgetry in research.  His final refinement of pulse-reading instrumentation, the ink jet polygraph of 1902 (see sidebar), was just such a tool.  But it was never intended for generalists, whose education of the senses was expected to be adequate to the meaning of heart sounds.  Nor was Mackenzie a fan of the EKG when it found its way into hospitals after 1905.  He perceived it as yet another “new toy” that provided no more diagnostic information than the stethoscope and ink jet polygraph.  And for at least the first 15 years of the machine’s use, he was right.

Now, of course, the stethoscope, the sphygmomanometer, and, for adults of a certain age, the EKG machine are integral to the devalued art of physical examination.  Critics who bemoan the overuse of CT scans and MRIs, of echocardiography and angiography, would be happy indeed  if medical students and residents spent more time examining patients and learning all that can be learned from stethoscopes, blood pressure monitoring, and baseline EKGs.  But more than a century ago these instrumental prerequisites of physical examination and diagnosis were themselves new toys, and educators were wary of what medical students would lose by relying on them at the expense of educating their senses.  Now educators worry about what students lose by not relying on them.

Toys aside, I too hope  that those elements of physical diagnosis that fall back on one tool of exquisite sensitivity – the human hand – will not be lost among reams of lab results and diagnostic studies.  One shudders at the thought of a clinical medicine utterly bereft of the laying on of hands, which is not only an instrument of diagnosis but also an amplifier of therapy.  The great pulse readers of the late nineteenth century are long gone and of interest only to a handful of medical historians.  Will the same be true, a century hence, of the great palpators of the late twentieth?


[1] I worked as Dr. Silen’s editor in 2000-2001, during which time I was privileged to read his unpublished lectures, addresses, and general-interest medical essays as preparation for helping him organize his memoirs.  Sadly, the memoirs project never materialized.

[2] In this paragraph, I am guided especially by two exemplary studies: Christopher Lawrence, “Incommunicable Knowledge: Science, Technology and the Clinical Art in Britain, 1850-1914,” J. Contemp. Hist., 20:503-520, 1985, and Hughes Evans, “Losing Touch: The Controversy Over the Introduction of Blood Pressure Instruments in Medicine,” Tech. Cult., 34:784-807, 1993.  Broadbent and Steell are quoted from Lawrence, p. 516.

[3] R. McNair Wilson, The Beloved Physician: Sir James Mackenzie (New York:  Macmillan, 1926), pp. 103-104. A more recent, detailed account of Mackenzie’s life and career is Alex Mair, Sir James Mackenzie, M.D., 1853-1925 – General Practitioner (London: Royal College of General Practitioners, 1986).

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.