
Empathy, Psychotherapy, Medicine

What passes for psychoanalysis in America these days is a far cry from the psychoanalysis Freud devised in the early years of the last century.  A sea change began in the 1970s, when Heinz Kohut, a Vienna-born and Chicago-based psychoanalyst, developed what he termed “psychoanalytic self psychology.”  At the core of Kohut’s theorizing was the replacement of one kind of psychoanalytic method with another.   Freud’s method – which Freud himself employed imperfectly at best – revolved around the coolly self-possessed analyst, who, with surgeon-like detachment, processed the patient’s free associations with “evenly hovering attention” and offered back pearls of interpretive wisdom.  The analyst’s neutrality – his unwillingness to become a “real” person who related to the patient in conventionally sympathetic and supportive ways – rendered him a “blank screen” that elicited the same feelings of love and desire – and also of fear, envy, resentment, and hatred – as the mother and father of the patient’s early life.  These feelings clustered into what Freud termed the positive and negative transferences.

Kohut, however, found this traditional psychoanalytic method fraught with peril for patients burdened less with Freudian-type neurotic conflicts than with psychological deficits of a preoedipal nature.  These deficits gained expression in more primitive types of psychopathology, especially in what he famously termed “narcissistic personality disorder.”  For these patients – and eventually, in Kohut’s mind, for all patients – the detached, emotionally unresponsive analyst simply compounded the feelings of rejection and lack of self-worth that brought the patient to treatment.  In its place Kohut proffered a kinder, gentler psychoanalytic method in which the analyst was content to listen to the patient for extended periods of time, to affirm and mirror back what the patient was saying and feeling, and over time to forge an empathic bond from which interpretations would arise.

Following Kohut, empathy has been widely construed as an aspect, or at least  a precondition, of talking therapy.  For self psychologists and others who draw on Kohut’s insights, the ability to sympathize with the patient has given way to a higher-order ability to feel what the patient is feeling, to “feel with” the patient from the inside out.  And this process of empathic immersion, in turn, permits the therapist to “observe” the patient’s psychological interior and to comprehend the patient’s “complex mental states.”  For Kohut, the core of psychoanalysis, indeed of depth-psychology in general, was employment of this “empathic mode of observation,” an evocative but semantically questionable turn of phrase, given the visual referent of “observe,” which comes from the Latin observare (to watch over, to guard).   More counterintuitively still, he sought to cloak the empathic listening posture in scientific objectivism.  His writings refer over and over to the “data” that analysts acquire through their deployment of “scientific” empathy, i.e., through their empathic listening instrument.

I was Heinz Kohut’s personal editor from 1978 until his death in the fall of 1981.  Shortly after his death, I was given a dictated transcript from which I prepared his final book, How Does Analysis Cure?, for posthumous publication.  Throughout the 80s and into the 90s, I served as editor to many senior self psychologists, helping them frame their arguments about empathy and psychoanalytic method  and write their papers and books.  I grasped then, as I do now, the heuristic value of a stress on therapeutic empathy as a counterpoise to traditional notions of analytic neutrality, which gained expression, especially in the decades following World War II, in popular stereotypes of the tranquilly “analytic” analyst whose caring instincts were no match for his or her devotion to Freud’s rigid method.

The comparative perspective tempers bemusement at what would otherwise be a colossal conceit:  that psychoanalytic psychotherapists alone, by virtue of their training and work, acquire the ability to empathize with their patients.  I have yet to read an article or book that persuaded me that  empathy can be taught, or that the yield of therapeutic empathy is the apprehension of “complex psychological states” that are analogous to the “data” gathered and analyzed by bench scientists (Kohut’s own analogy).

I do believe that empathy can be cultivated, but only in those who are adequately empathic to begin with.  In medical, psychiatric, and psychotherapy training, one can present students with instances of patients clinically misunderstood and then suggest how one might have understood them better, i.e., more empathically.  Being exhorted by teachers to bracket one’s personal biases and predispositions in order to “hear” the patient with less adulterated ears is no doubt a good thing.  But it  assumes trainees can develop a psychological sensibility through force of injunction, which runs something like:  “Stop listening through the filter of your personal biases and theoretical preconceptions!  Listen to what the patient herself  is saying in her voice!  Utilize what you understand of yourself, viz., the hard-won fruits of your own psychotherapy (or training analysis), to put yourself in her place!  Make trial identifications so that her story and her predicament resonate with aspects of your story and your predicament; this will help you feel your way into her inner world.”

At a less hortatory level, one can provide trainees with teachers and supervisors who are sensitive, receptive listeners themselves and thus “skilled” at what self psychologists like to refer to as “empathic attunement.”  When students listen to such instructors and perhaps observe them working with patients, they may learn to appreciate the importance of empathic listening and then, in their own work, reflect more consistently on what their patients are saying and on how they are hearing them say it.  They acquire the capacity for “reflection-in-action,” which Donald Schön, in two underappreciated books of the 1980s, made central to the work of “reflective professionals” in a number of fields, psychotherapy among them.[1]  To a certain extent, systematic reflection in the service of empathy may help therapists be more empathic in general.

But then the same may be said of any person who undergoes a transformative life experience (even, say, a successful therapy) in which he learns to understand differently – and less tendentiously – parents, siblings, spouses, children, friends, colleagues, and the like.  Life-changing events  — fighting in  wars, losing loved ones, being victimized by natural disasters, living in third-world countries, providing aid to trauma victims – cause some people to recalibrate values and priorities and adopt new goals.  Such decentering can mobilize an empathic sensibility, so that individuals return to their everyday worlds with less self-centered ways of perceiving and being with others.

There is nothing privileged about psychotherapy training in acquiring an empathic sensibility.  I once asked a senior self psychologist what exactly differentiated psychoanalytic empathy from empathy in its everyday sense.  He thought for a moment and replied that in psychoanalysis, one deploys “sustained” empathy.  What, pray tell, does this mean, beyond denoting the fact that psychoanalysts, whether or not empathic, listen to patients for a living, and that the units of such listening are typically 45-minute sessions?  Maybe he simply meant that, in the nature of things, analysts must try to listen empathically for longer periods of time, and that prolongation conduces to empathic competence.

Well, anything’s possible, I suppose.  But the fact remains that some people are born empathizers and others not.  Over the course of a 27-year career in psychoanalytic and psychiatric publishing, I worked with a great many analysts and therapists who struck me as unempathic, sometimes stunningly unempathic.  And those who struck me as empathic were not aligned with any particular school of thought, certainly not one that, like self psychology, privileges empathy.

Nor is it self-evident that the empathy-promoting circumstances of psychotherapy are greater than those faced day-in and day-out by any number of physicians.  Consider adult and pediatric oncologists, transplant surgeons, and internists and gerontologists who specialize in palliative care.  These physicians deal with patients (and their parents and children) in extremis; surely their work should elicit “sustained empathy,” assuming they begin with an empathic endowment strong enough to cordon off the miasma of uncertainty, dread, and imminent loss that envelops them on daily rounds.  Consider, at the other end of the medical spectrum, those remaining family doctors who, typically in rural settings, provide intergenerational, multispecialty care and continue to treat patients in their homes.  The nature of their work makes it difficult for them not to observe and comprehend their patients’ complex biopsychosocial states; there are extraordinary empathizers among them.

When it comes to techniques for heightening empathy, physicians have certain advantages over psychotherapists, since their patients present with bodily symptoms and receive bodily (often procedural) interventions, both of which have a mimetic potential beyond “listening” one’s way into another’s inner world.  There is more to say about the grounds of medical empathy, but let me close here with a concrete illustration of such empathy in the making.

William Stevenson Baer graduated from Johns Hopkins Medical College in 1898 and stayed on at Hopkins as an intern and then assistant resident in William Halsted’s dauntingly rigorous surgical training program.  In June 1900, at the suggestion of Baer’s immediate supervisor, Harvey Cushing, Halsted asked Baer to establish an orthopedic outpatient clinic at Hopkins the following fall.  With no grounding in the specialty, Baer readied himself for his new task by spending the ensuing summer at the orthopedic services of Massachusetts General Hospital and the Boston Children’s Hospital.  At both institutions, many children in the orthopedic wards had to wear plaster casts throughout the hot summer months.  On arriving, Baer made it his first order of business to alter his life circumstances in order to promote empathy with, and win the trust of, these young patients.  To wit, he had himself fitted for a body cast that he wore the entire summer.  His sole object, according to his Hopkins colleague Samuel Crowe, was “to gain the children’s confidence by showing them that he too was enduring the same discomfort.”[2]

Psychotherapists are generally satisfied that empathy can be acquired in the manner of a thought experiment.  “Bracket your biases and assumptions,” they admonish, “empty yourself of ‘content,’ and then, through a process of imaginative identification, you will be able to hear what your patient is saying and feel what she is feeling.”  Baer’s example reminds us that illness and treatment are first and foremost bodily experiences, and that “feeling into another” – the literal meaning of the German Einfühlung, which we translate as “empathy” – does not begin and end with concordant memories amplified by psychological imagination.[3]  In medicine, there is an irremediably visceral dimension to empathy, and we shall consider it further in the next posting.


[1] Donald A. Schön, The Reflective Practitioner: How Professionals Think in Action (New York: Basic Books, 1983); Donald A. Schön, Educating the Reflective Practitioner (San Francisco: Jossey-Bass, 1987).

[2] Samuel James Crowe, Halsted of Johns Hopkins: The Man and His Men (Springfield, IL: Thomas, 1957), pp. 130-31.

[3] The imaginative  component of empathy, which is more relevant to its function in psychotherapy than in medicine, is especially stressed by Alfred Margulies, “Toward Empathy: The Uses of Wonder,” American Journal of Psychiatry, 141:1025-1033, 1984.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

Medical Toys, Old and New

“The plethora of tests available to the young clinician has significantly eroded the skills necessary to obtain adequate histories and careful physical examinations.  Day in and day out, I encounter egregious examples of misdiagnosis engendered by inadequacies in these skills.”
~William Silen, M.D., “The Case for Paying Closer Attention to Our Patients” (1996)

“Treat the Patient, Not the CT Scan,” adjures Abraham Verghese in a New York Times op-ed piece of February 26, 2011.  Verghese targets American medicine’s overreliance on imaging tests, but, like others before him, he is really addressing the mindset that fosters such overreliance.  Preclinical medical students, he reminds us, all learn physical examination and diagnosis, but their introduction to the art dissipates under the weight of diagnostic tests and specialist procedures during their clinical years.  “Then,” he writes, “they discover that the currency on the ward seems to be ‘throughput’ – getting tests ordered and getting results, having procedures like colonoscopies done expeditiously, calling in specialists, arranging discharge.”  In the early 90s, William Silen, Harvard’s Johnson and Johnson Distinguished Professor of Surgery,[1] made the same point with greater verve.  In one of his wonderful unpublished pieces, “Lumps and Bumps,” he remarked that “the modern medical student, and most physicians, have been so far removed from physical diagnosis, that they simply do not accept that a mass is a mass is a mass unless the CT scan or ultrasound tells them it is there.”

Verghese and Silen get no argument from me on the clinical limitations and human failings associated with technology-driven medicine.  But these concerns are hardly unique to an era of CT scans and MRIs.  There is a long history of concern about overreliance on new technologies;  Silen has a delightfully pithy, unpublished piece on the topic that is simply titled, “New Toys.”

One limitation of such critiques is the failure to recognize that not all “toys” are created equal.  Some new toys become old toys, at which point they cease being toys altogether and simply become part of the armamentarium that the physician brings to the task of physical examination and diagnosis.  For example, we have long since stopped thinking of x-ray units, EKG machines, blood pressure meters (i.e., sphygmomanometers), and stethoscopes as “new toys” that militate against the acquisition of hands-on clinical skill.

But it was not always so.  When x-rays became available in 1896, clinical surgeons were aghast.  What kind of images were these?  Surely not photographic images in the reliably objectivistic late-nineteenth century sense of the term.  The images were wavy, blurry, and imprecise, vulnerable to changes in the relative location of the camera, the x-ray tube, and the object under investigation.  That such monstrously opaque images might count as illustrative evidence in courts of law, that they might actually be turned against the surgeon and his “expert opinion”  – what was the world coming to?  Military surgeons quickly saw the usefulness of x-rays for locating bullets and shrapnel, but their civilian colleagues remained suspicious of the new technology for a decade or more after its invention.  No fools, they resorted to x-rays only when they felt threatened by malpractice suits.

Well before the unsettling advent of x-ray photography, post-Civil War physician-educators were greatly concerned about the use of mechanical pulse-reading instruments.  These ingenious devices, so they held, would discourage young physicians from learning to appreciate the subtle diagnostic indicators embedded in the pulse.  And absent such appreciation, which came only from prolonged training of their fingertips, they could never acquire the diagnostic acumen of their seniors, much less the great pulse readers of the day.

Thus they cautioned students and young colleagues to avoid the instruments.  It was only through “the habit of discriminating pulses instinctively” that the physician acquired  “valuable truths . . . which he can apply to practice.”  So inveighed the pioneering British physiologist John Burdon-Sanderson in 1867.  His judgment was shared by a generation of senior British and American clinicians for whom the trained finger remained a more reliable measure of radial pulse than the sphygmograph’s arcane tracings.  In The Pulse, his manual of 1890, William Broadbent cautioned his readers to avoid the sphygmograph, since interpretation of its tracings could “twist facts in the desired direction.”  Physicians should “eschew instrumental aids and educate the finger,” echoed Graham Steell in The Use of the Sphygmograph in Medicine at the century’s close.[2]

Lower still on the totem pole of medical technology – indeed, about as low down as one can get – is the stethoscope, “invented” by René Laennec in 1816 and first employed by him in the wards of Paris’s Hôpital Necker.  In 1898, James Mackenzie, the founder of modern cardiology, relied on the stethoscope, used in conjunction with his own refinement of the Dudgeon sphygmograph of 1881 (i.e., the Mackenzie polygraph of 1892), to identify what we now term atrial fibrillation.  In the years to follow, Mackenzie, a master of instrumentation, became the principal exponent of what historians refer to as the “new cardiology.” His “New Methods of Studying Affections of the Heart,” a series of articles published in the British Medical Journal in 1905, signaled a revolution in understanding cardiac function.  “No man,” remarked his first biographer, R. McNair Wilson, in 1926, “ever used a stethoscope with a higher degree of expertness.”  And yet this same Mackenzie lambasted the stethoscope as the instrument that had “not only for one hundred years hampered the progress of knowledge of heart affections, but had done more harm than good, in that many people had had the tenor of their lives altered, had been forbidden to undertake duties for which they were perfectly competent, and had been subject to unnecessary treatment because of its findings.”[3]

Why did Mackenzie come to feel this way?  The problem with the stethoscope was that the auscultatory sounds it “discovered,” while diagnostically illuminating, could cloud clinical judgment and lead to unnecessary treatments, including draconian restrictions of lifestyle.  For Mackenzie, sphygmomanometers were essentially educational aids that would corroborate what medical students were learning to discern through their senses.  And, of course, he allowed for the importance of such gadgetry in research.  His final refinement of pulse-reading instrumentation, the ink jet polygraph of 1902, was just such a tool.  But it was never intended for generalists, whose education of the senses was expected to be adequate to the meaning of heart sounds.  Nor was Mackenzie a fan of the EKG when it found its way into hospitals after 1905.  He perceived it as yet another “new toy” that provided no more diagnostic information than the stethoscope and ink jet polygraph.  And for at least the first 15 years of the machine’s use, he was right.

Now, of course, the stethoscope, the sphygmomanometer, and, for adults of a certain age, the EKG machine are integral to the devalued art of physical examination.  Critics who bemoan the overuse of CT scans and MRIs, of echocardiography and angiography, would be happy indeed  if medical students and residents spent more time examining patients and learning all that can be learned from stethoscopes, blood pressure monitoring, and baseline EKGs.  But more than a century ago these instrumental prerequisites of physical examination and diagnosis were themselves new toys, and educators were wary of what medical students would lose by relying on them at the expense of educating their senses.  Now educators worry about what students lose by not relying on them.

Toys aside, I too hope  that those elements of physical diagnosis that fall back on one tool of exquisite sensitivity – the human hand – will not be lost among reams of lab results and diagnostic studies.  One shudders at the thought of a clinical medicine utterly bereft of the laying on of hands, which is not only an instrument of diagnosis but also an amplifier of therapy.  The great pulse readers of the late nineteenth century are long gone and of interest only to a handful of medical historians.  Will the same be true, a century hence, of the great palpators of the late twentieth?


[1] I worked as Dr. Silen’s editor in 2000-2001, during which time I was privileged to read his unpublished lectures, addresses, and general-interest medical essays as preparation for helping him organize his memoirs.  Sadly, the memoirs project never materialized.

[2] In this paragraph, I am guided especially by two exemplary studies: Christopher Lawrence, “Incommunicable Knowledge: Science, Technology and the Clinical Art in Britain, 1850-1914,” J. Contemp. Hist., 20:503-520, 1985, and Hughes Evans, “Losing Touch: The Controversy Over the Introduction of Blood Pressure Instruments in Medicine,” Tech. Cult., 34:784-807, 1993.  Broadbent and Steell are quoted from Lawrence, p. 516.

[3] R. McNair Wilson, The Beloved Physician: Sir James Mackenzie (New York:  Macmillan, 1926), pp. 103-104. A more recent, detailed account of Mackenzie’s life and career is Alex Mair, Sir James Mackenzie, M.D., 1853-1925 – General Practitioner (London: Royal College of General Practitioners, 1986).

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

A Musical Offering

“There is no doubt in my mind that the study of music aids in the study of medicine.  The study of one appears to potentiate the other.”
~F. William Sunderman, M.D., Musical Notes of a Physician (1982)

My father, William Stepansky, whose lifelong passion for violin and string chamber playing is recorded in The Last Family Doctor, holds a singular status in the history of World War II.  He was the only GI in the US Army to join his infantry division at the D-Day staging area in the dripping heat of Yuma, Arizona with a violin in tow.  Wiser minds prevailed, and he sent the violin home, but not before knowledge of his unlikely traveling companion spread throughout the unit.  Here he is in army fatigues playing violin in Germany in the final year of World War II.  The violin in question, a fine mid-eighteenth-century instrument by the Prague maker Thomas Andreas Hulinzky (1731-1788), was the gift of a group of GIs who retrieved it in the vicinity of the famous Hohner Factory in Trossingen, Germany in early 1945.  Apparently knowing about the violin-toting  kid from South Philly who was a surgical tech, they brought the “liberated” fiddle back to his surgical clearing station and offered it to him – but only on condition that he play it for them then and there.

He obliged them, and the Hulinzky became his violin for the next half century, the instrument that provided his four sons with childhoods suffused with Bach, Mozart, Beethoven, Schubert, Mendelssohn, Brahms, and a host of later Romantics.  We learned standard repertoire through my father’s practicing and chamber rehearsals and performances.  And we played with him ourselves – two pianists, a violinist, and a cellist.   Here is a family trio in action in 1976; this one includes David Stepansky (piano), Alan Stepansky (cello), and William Stepansky (violin).

Why the violin mattered so much to our father and how it entered into the kind of medicine he practiced – I forgo these precious bits of biography here.  But I cannot resist commenting on Danielle Ofri’s recent piece, “Music Teachers for Doctors?”, since it stimulates memories of growing up with a gifted physician-musician in a remarkably musical household.

Picking up on a recent article by the internist Frank Davidoff, Ofri suggests that the critical feedback that performing musicians receive throughout their careers from “teachers, audiences, critics and their own ears” exemplifies the kind of “coaching” that, in parallel fashion, might aid physicians in the “performance” of clinical medicine.  Senior colleagues who observe physicians as they actually practice medicine and provide “detailed feedback,” she suggests, might play the same energizing motivational role as music teachers and coaches who relentlessly critique their students’ performance skills and spur them to higher levels of artistry.

It’s a bit difficult to understand precisely what Ofri and Davidoff mean by the term “performance.”  It seems to be a catchall for clinical competence, of which creative problem solving, emotional engagement, collaborative sensibility, and enthusiasm for the work are all components.  The greater one’s allotment of these attributes, the better one “performs” clinical medicine.

Davidoff, who observes that most musicians “perform almost exclusively in groups,” is especially taken with the interactive and cognitive yield of group performance.  It inculcates a better sense of “teamwork,” of the “specialization” associated with one’s particular instrument, and of the importance of clinical improvisation, i.e., the wherewithal to deviate from established protocols in response to the needs of “particular patients in particular contexts.”  Ofri, for her part, sees clinical coaching modeled on musical coaching as especially relevant to the creeping complacency, the mid-career “plateaus” to which all-too-human physicians are subject.  Musical-cum-clinical coaching would remind physicians that the quest for “performance” excellence is never ending; it might goad them to further career growth and renewed satisfaction in their work.

Between these two contrasting emphases, I am more aligned with Davidoff, though I question whether the value of group performance is best thought of in terms of an interactional concept such as “teamwork” or a cognitive skill such as improvisation.  The improvement for which “good enough” doctors should seek ongoing feedback is of a different sort.  What they could really use is a dose of the sensibility associated with chamber music playing, for ensemble work feeds into the art and craft of dialogue, of give and take, of respectful recognition of different voices moving together in accord with the anatomical blueprint provided by a composer.

Rather than provide “coaches” who give physicians feedback to improve their clinical “performance,” let’s take them right to the medicomusical locus itself.  To wit, let’s make instrumental training and chamber performance an integral part of postgraduate clinical training.  And, by way of bringing this utterly fanciful notion back down to earth – or at least within earth’s orbit – let’s give physicians “continuing education” credit both for playing in musical ensembles and for taking seminars and workshops on how to listen to chamber music.

But let’s play out the original fantasy.  What a boon to patients if doctors were obliged to make music among themselves.  Better still if they made music with nonphysicians and best of all if they made music with current or former patients.  In so doing, they might build up tacit knowledge relevant to caring for people rather than diagnosing and treating illnesses.  From chamber playing, they would learn about harmonious interchange, about counterpoint (as Davidoff points out), about respecting intelligent voices other than their own.  Tacit knowledge – the concept comes to us from the Hungarian-British scientist and philosopher Michael Polanyi – by its nature resists articulation among those who possess it.  But, speaking from the outside, let me say that the tacit knowledge I have in mind hovers around the art of doctor-patient communication, of learning to converse with patients in a manner more responsive to individual illness experiences.  Here is my claim:  Chamber music holds the promise of broadening, however tacitly, the physician’s sense of dialogic possibility when he or she is with patients.

Consider chamber music as a special kind of narrative journey.  It is not the hero’s journey that culminates in individual redemption and rebirth but the journey of individual voices conjoined in pursuit of something communal, call it musical truth.  Individual voices assert their authority; they take the lead when called on, enabling other voices to play their supportive roles with greater sensitivity and more perfect ensemble.[1]  The voices interweave and interpenetrate; they invade musical space only to make graceful exits; they weave exquisite filigrees around the melodic inventiveness of others; they learn to yield melodic pride of place, only to reclaim it once again as transformed by other voices.  Out of the dialogue emerges a tapestry whose separate threads merge in single-minded purpose.  This interpretive solidarity, this playing together, is hemmed in by a composer’s intentions, framed by the compositional traditions and performance practices of a particular time and place.  Absent any single voice, the journey cannot be undertaken; there is no tapestry to be woven.

What have we here if not a musical analogue of the kind of narrative competence associated with patient- and relationship-centered medicine?  Interweaving, interpenetrating, entering and exiting, listening to other voices, yielding to other voices, hearing differently because of other voices – aren’t these things at the heart of narrative medicine?

This was the medicine that my father practiced long before scholars decided to study “narrative medicine” as a kind of medicine.  As a violinist, he was especially drawn to Bach, but also to the early and late Romantics, and he could shape a phrase with the same warmth and control with which he helped patients reshape the personal stories they brought to him.  He was a charismatic listener who encouraged his patients to bring him their stories.  But he always listened as their doctor, with the quiet authority to decide on a course of treatment and reassure them all was in hand – because all was in his  mind.  He was, after all, the first violinist, the leader of the quartet.  He knew the score, and he was comfortable taking the lead, both with other chamber players and with his patients.

My father was a man of great modesty and reserve, but his violin always soared with controlled passion.  Just so in medicine:  his personal reserve never diminished the polyphonic textures and expressive sonorities of his medicine, of how he listened to and conversed with his patients and then proceeded to doctor them.

_________________

My father’s model for a life of medicine and music was F. William Sunderman. One of Philadelphia’s premier internists, Sunderman was an accomplished medical researcher at Pennsylvania Hospital.  During the 1930s, he developed new methods for measuring blood cholesterol, glucose, and chloride and invented the Sunderman Sugar Tube.  During the war, he served as Medical Director of the Manhattan Project, where he developed an antidote for the nickel carbonyl poisoning to which the first atomic bomb assemblers were prone.

Among Philadelphia’s literati, however, Sunderman was best known for integrating medicine and music into a single exemplary life.  The side-by-side entrances of his four-story brownstone at 2210 Delancey Street bore complementary aluminum plaques, one adorned with a caduceus, the other with a lyre and singing bird.  To the former came his patients; to the latter the musicians – eminent scientists and physicians among them – who joined him regularly for chamber music.  Sunderman’s passion for the violin, which in his case was embodied in a collection of exquisite instruments – a Gagliano, a Vuillaume, a Guarnerius – was fast becoming part of Philadelphia’s cultural landscape.

What must Sunderman have thought when, in the fall of 1947, a young war veteran recently graduated from Philadelphia College of Pharmacy wrote him and requested an audience?  The young man, a Jewish immigrant whose parents fled the Russian pogroms in 1921, knocked on Sunderman’s “musical” door with his GI-liberated Hulinzky in hand.  He announced without fanfare:  “I want to be like you.  My life will be medicine and music.”  And without further ado, my father played for him.  And Sunderman was impressed, both with the young man and with his playing.  And so he endorsed the violinist-pharmacist-serviceman’s application for admission to Philadelphia’s Jefferson Medical College.

Among the area musicians who joined Sunderman for chamber playing on a regular basis was a young cellist and Harvard graduate (class of ’32).  At Harvard, Robert U. Jameson not only played cello but rose to the presidency of the Pierian Sodality, the forerunner of the Harvard-Radcliffe Orchestra.  (In 1941, nine years after Jameson’s graduation, the Orchestra performed his orchestration of  Edward Ballantine’s “Variations on Mary Had a Little Lamb.”)   Now in the real world, Jameson made his living teaching English at The Haverford School on Philadelphia’s Main Line.  An extraordinary teacher who seemed to know his students from the inside out and hence to understand exactly what they needed (viz., how to “coach” them),  Jameson remained a devotee of cello throughout his life.  In the mid-1970s, in failing health but still in the saddle at Haverford, he counted among his students a remarkably gifted young cellist. The two bonded, and the teacher became admiring of his student and warmly supportive of his career aspirations.

The young man played the Saint-Saëns Cello Concerto at his teacher’s memorial concert in the spring of 1978, one cellist saluting another, and then moved on to the Curtis Institute, Harvard, and the New York Philharmonic, where he served as Associate Principal Cellist. The cellist, my brother Alan Stepansky, is now Professor of Cello at the Peabody Conservatory and Manhattan School of Music and a performer and teacher of international stature.

A decade earlier, in the late 1960s, this same Robert U. Jameson took in hand another student who showed  promise as a writer but was in need of the kind of disciplining a chamber-music-playing English teacher could provide. Jameson, a student, like all cellists, of Bach’s Cello Suites, helped him harness luxuriant adolescent prose and understand that restraint is the handmaiden of passion, indeed, that it is the better part of a writer’s valor.  I was that student.  Did Mr. Jameson’s love of cello and chamber playing enter into his understanding of language and his ability to impart, perhaps tacitly, the elements of well-wrought narratives?  Did his instrument help make him the teacher and mentor he was?  Who’s to say it didn’t.


[1] The importance of taking the lead in chamber playing and the manner in which a strong leader enables other players to provide support “with greater sensitivity and more perfect ensemble” – this insight and this wording come from my brother Alan Stepansky.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

An Irony of War

“There are two groups of people in warfare – those organized to inflict and those organized to repair wounds – and there is little doubt but that in all wars, and in this one in particular, the former have been better prepared for their jobs” (Milit. Surg., 38:601, 1916).  So observed Harvey Cushing, the founder of modern neurosurgery, a year before America’s entry into World War I.  Cushing’s judgment is just, and yet throughout history “those organized to repair wounds” have risen to the exigencies  of the war at hand.  In point of fact, warfare has spurred physicians, surgeons, and researchers to major, sometimes spectacular, advances, and their scientific and clinical victories are bequeathed  to civilian populations that inherit the peace.  Out of human destructiveness emerge potent new strategies of protection, remediation, and self-preservation.  Call it an irony of war.

Nor are these medical and surgical gifts limited to the era of modern warfare.  The French army surgeon Jean Louis Petit invented the screw tourniquet in 1718; it made possible leg amputation above the knee.  The Napoleonic Wars of the early nineteenth century brought us the first field hospitals along with battlefield nursing and ambulances.  The latter were of course horse-drawn affairs, but they were exceedingly fast and maneuverable and were termed “flying ambulances.”  The principle of triage — treating the wounded, regardless of rank, according to severity of injury and urgency of need – is not a product of twentieth-century disasters.  It was devised by Dominique Jean Larrey, Napoleon’s surgeon-in-chief from 1797 to 1815.

The American Civil War witnessed the further development of field hospitals and the acceptance, often grudging, especially among southern surgeons, of female nurses tending to savaged male bodies.  Hospital-based training programs for nurses were a product of wartime experience.  Civil War surgeons themselves broached the idea shortly after the peace, and the first such programs opened  in New York, Boston, and New Haven hospitals in 1873.  The dawning appreciation of the relationship between sanitation and prevention of infection, which would blossom into the “sanitary science” of the 1870s and 1880s, was another Civil War legacy.

And then there were the advances, surgical and technological, in amputation.  They included the use of the flexible chain saw to spare nerves and muscles and even, in many cases of comminuted fracture, to avoid amputation entirely.  The development of more or less modern vascular ligation – used on the battlefield to tie off major arteries extending from the stumps of severed limbs – is another achievement of Civil War surgeons.  Actually, they rediscovered ligation, since the French military surgeon Ambroise Paré employed it following battlefield amputation in the mid-sixteenth century, and he in turn was reviving a practice employed in the Alexandrian Era of the fourth century B.C.

In 1900 Karl Landsteiner, a Viennese pathologist and immunologist, first described the ABO system of blood groups, founding the field of immunohematology.  As a result, World War I gave us blood banks that made possible blood transfusions among wounded soldiers in the Army Medical Corps in France.  The First World War also pushed medicine further along the path to modern wound management, including the treatment of cellulitic wound infections, i.e., bacterial skin infections that followed soft tissue trauma.  Battlefield surgeons were quick to appreciate the need for thorough wound debridement and delayed closure in treating contaminated war wounds.  The prevalence of central nervous system injuries – a tragic byproduct of trench warfare in which soldiers’ heads peered anxiously above the parapets – led to “profound insights into central nervous system form and function.” The British neurologist Gordon Holmes provided elaborate descriptions of spinal transections (crosswise severing of the cord) for every segment of the spinal cord, whereas Cushing, performing eight neurosurgeries a day, “rose to the challenge of refining the treatment of survivors of penetrating head wounds” (Arch. Neurol., 51:712, 1994).  His work from 1917 “lives today” (ANZ J. Surg., 74:75, 2004).

No less momentous was the development of reconstructive surgery by inventive surgeons (led by the New Zealand ENT surgeon Harold Gillies) and dentists (led by the French-American Charles Valadier) unwilling to accept the gross disfigurement of downed pilots who crawled away from smoking wreckages with their lives, but not their faces, intact.  A signal achievement of wartime experience with burn and gunshot victims was Gillies’s Plastic Surgery of the Face of 1920; another was the founding of the American Association of Plastic Surgeons a year later.  After the war, be it noted, the pioneering reconstructive surgeons refused to place their techniques at the disposal of healthy women (and less frequently healthy men) desirous of facial enhancement; reconstructive facial surgery went into short-lived hibernation.  One reason reconstructive surgeons morphed into cosmetic surgeons was the psychiatrization of facial imperfection via Freudian and especially Adlerian notions of the “inferiority complex,” with its allegedly life-deforming ramifications.  So nose jobs became all the rage in the 1930s, to be joined by facelifts in the postwar 40s. (Elizabeth Haiken’s book Venus Envy: A History of Cosmetic Surgery [1997] is illuminating on all these issues.)

The advances of World War II are legion.  Among the most significant was the development or significant improvement of vaccines for 10 of the 28 vaccine-preventable diseases identified in the twentieth century (J. Pub. Health Pol., 27:38, 2006); new vaccines for influenza, pneumococcal pneumonia, and plague were among them.  There were also new treatments for malaria and the mass production of penicillin in time for D-Day.  It was during WWII that American scientists learned to separate blood plasma into its constituents (albumin, globulins, and clotting factors), an essential advance in the treatment of shock and control of bleeding.

No less staggering were the surgical advances that occurred during the war. Hugh Cairns, Cushing’s favorite student, developed techniques for the repair of the skull base and laid the foundation of modern craniofacial surgery by bringing together neurosurgeons, plastic surgeons, and ophthalmic surgeons in mobile units referred to as “the trinity.”   There were also major advances in fracture and wound care along with the development of hand surgery as a surgical specialty.   Wartime treatment experience with extreme stress, battlefield trauma, and somatization (then termed, in Freudian parlance, “conversion reactions”) paved the way for the blossoming of psychosomatic medicine in the 1950s and 1960s.

The drum roll hardly ends with World War II.  Korea gave us the first air ambulance service.  Vietnam gave us Huey helicopters for evacuation of wounded soldiers.  (Now all trauma centers have heliports.)  Prior to evacuation, these soldiers received advanced, often life-saving, care from medical corpsmen who opened surgical airways and performed thoracic needle decompressions and shock resuscitation; thus was born our modern system of prehospital emergency care by onsite EMTs and paramedics.  When these corpsmen returned to the States, they formed the original candidate pool for Physician Assistant training programs, the first of which opened its doors at Duke University Medical Center in 1965.  Vietnam also gave us major advances in vascular surgery, recorded for surgical posterity in the “Vietnam Vascular Registry,” a database with records of over 8000 vascular wound cases contributed by over 600 battlefield surgeons.

The medical and surgical yield of recent and ongoing wars in the Persian Gulf will be recorded in years to come.  Already, these wars have provided two advances for which all may give thanks:  portable intensive care units (“Life Support for Trauma and Transport”) and Hem-Con bandages.  The latter, made from an extract of shrimp shells, stop severe bleeding instantaneously.

Now, of course, with another century of war under our belt and the ability to play computer-assisted war games, we are better able to envision the horrific possibilities of wars yet to come.  In the years leading up to World War I, American surgeons – even those, like Harvey Cushing, who braced themselves for war – had no idea of the human wreckage they would encounter in French field hospitals.  Their working knowledge of war wounds relied on the Boer War (1899-1902), a distinctly nineteenth-century affair, militarily speaking, fought in the desert of South Africa, not in trenches in the overly fertilized, bacteria-saturated soil of France.  Now military planners can turn to databases that gather together the medical-surgical lessons of two World Wars, Korea, Vietnam, Iraq, Afghanistan, and any number of regional conflicts.

Military simulations have already been broadened to include political and social factors.  But military planners should also be alert to possibilities of mutilation, disfigurement, multiple-organ damage, and drug-resistant infection only dimly imagined.  Perhaps they can broaden their simulations to include the medical and surgical contingencies of future wars and get bench scientists, clinical researchers, and surgeons to work on them right away.  Lucky us.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

“Socialized Medicine,” anyone?

The primary season is upon us, which means it’s time for Republicans to remind us of the grave perils of “socialized medicine.”  One-time candidate Michele Bachmann accuses Mitt Romney  of “put[ting] into place socialized medicine” when governor of Massachusetts.  Newt Gingrich, rejecting Romney’s defense of the Massachusetts law as something other than socialist, declares that  “Individual and employer mandates are bad policy leading down the road to socialized medicine, whether the mandates are adopted at the federal level or the state level.”  Ron Paul, not to be outdone, derides our health care system as “overly corporate and not much better than a socialized health care system.”  Rick Santorum mournfully announces that socialized medicine is “exactly where we’re headed.”  And then of course there is that noncandidate and subtle political thinker Sarah Palin, who apparently tolerated Canadian single-payer health care well enough when it was available to her and her family members, but never fails to lambast the health care reform bill of 2010 (“Obamacare”) as the great evil, the capitulation to socialist medicine that will lead us straight into the bowels of socialist hell.

As a historian of ideas, I am confused.  What exactly do these Republicans mean by “socialized medicine” and, more generally, by “socialism”?  Are they referring to the utopian socialism of the early nineteenth century that arose in the wake of the French Revolution, the socialism of Charles Fourier, Henri Saint-Simon, and Joseph Le Maistre?  Are they referring to Marxist socialism and, if so, which variant?  The socialism of the early Marx, the Marx of the  economic and philosophical manuscripts of 1844 and The German Ideology or the socialism of the late Marx, the Marx of Das Kapital?  It is difficult to imagine the candidates rejecting the conservative socialism of Otto Bismarck, the German Iron Chancellor who, during the 1870s and 1880s, wed social reform to a conservative vision of society.  But then again they might:  Bismarck’s reforms, which included old-age pensions, accident insurance, medical care, and unemployment insurance, paved the way for the triumph, despite Bismarck’s own antisocialist laws, of Germany’s Social Democratic Party in the early twentieth century.

Perhaps the Republicans mean to impugn a broader swath of post-Marxist reformist socialism (also termed “democratic socialism”).  Does their antipathy take in the British Liberal welfare reforms of David Lloyd George that from roughly 1906 to 1914 constructed Britain’s social welfare state?  After all, Britain’s National Insurance Act of 1911 provided for health insurance, and many of Lloyd George’s acts aimed at the health and well-being of British children.  Child labor laws, medical inspection of schools, and medical care for school children via free school clinics were among them.  Certainly all the candidates would repudiate FDR’s New Deal.  Depression or no, it was a medley of socialist programs that culminated in a social security program that workers could not opt out of.  But then again, perhaps the candidates do not understand socialism as the cumulative protections of democratic socialism.  Maybe the socialism they impugn is only hard-core late Marxism and its transmogrification after 1917 into Soviet Marxism-Leninism, both of which now slumber peacefully in the dustbin of history.  I don’t know.  Does anyone?  Maybe some of these candidates only see red when contemplating employment of physicians by the state.

When it comes to “socialized medicine,” just how far do the Republicans seek to turn back the clock?  Does more than a century of social welfare reform have to go?  Certainly they must repudiate Medicare and Medicaid, whose passage in 1965 was, with respect to the elderly and indigent, socialism pure and simple; for the AMA these programs sounded the death knell of democracy.  But why stop there?  If they really want to root out medical socialism, they can hardly condone Medicare’s precursor, the Kerr-Mills Act of 1960, which made federal matching funds available to states that underwrote the costs of health care for their indigent elderly.

And what of the FDA, that competition-draining, creativity-stifling offspring of Rooseveltian socialist thinking?  Who is the government to tell medical equipment manufacturers which devices they may sell to doctors and the public?  The 1976 Medical Devices Amendments to the Federal Food, Drug and Cosmetic Act of 1938 would have to go.  The more than 700 deaths and 10,000 injuries attributed to defective cardiac pacemakers and leaky artificial heart valves by the Cooper Commission in 1970, not to mention the 8,000 women injured (some left sterile) by their faulty contraceptive Dalkon Shields – this was a small price to pay for an open marketplace that encouraged and rewarded innovation.  The 1962 Kefauver–Harris Amendments to the Federal Food, Drug and Cosmetic Act of 1938, which arose in the wake of the thalidomide tragedy of 1961, would probably fare no better.  After all, these amendments dramatically expanded the FDA’s authority over prescription drugs by requiring drug companies to conduct preclinical trials of toxicity and then present the FDA with adequate and well-controlled studies of drug effectiveness before receiving regulatory approval.  I wonder if principled antisocialists can even abide the FDA-enforced distinction between prescription-only and nonprescription drugs, as codified in the 1951 Durham-Humphrey Amendment to the 1938 Act.  Before then, Americans did just fine self-medicating without government interference.  Sure they did.  Citizens of the late 30s could be relied on to decide when to take the toxic sulfonamides (which depressed white cell counts and led to anemias), just as citizens of the late 40s knew enough pharmacology and bacteriology to decide when and in what dosages to use the potent antibiotic “wonder drugs,” all of which could be obtained over-the-counter or directly from pharmacists until the 1951 Act.

But why stop there?  Perhaps Republican political philosophy obliges the candidates to repudiate the Federal Food, Drug and Cosmetic Act in toto.  After all, it authorized the FDA, a federal agency, to review the safety and composition of new drugs before authorizing their release.  Yes, the legislation arose in the wake of 106 deaths the preceding year – many children among them – from sales of the Tennessee drug firm S. E. Massengill’s Elixir Sulfanilamide.  The Elixir was a sweet-tasting liquid sulfa drug that – unbeknown to anyone outside the company — used toxic diethylene glycol (a component of brake fluid and antifreeze) as solvent.  But, hey, this was free-market capitalism in action.  Sure, hundreds more would have died if all 239 FDA inspectors hadn’t tracked down 234 of the 240 gallons of the stuff already on the market.  But is this really any worse than having 10,000 or so European and Japanese kids grow up with flippers instead of arms and hands because their pregnant mothers, let down by the regulatory bodies of their own countries, ingested Chemie Grünenthal’s sedative thalidomide to control first-trimester morning sickness?  A free medical marketplace has its costs, dead kids, deformed kids, and sterile women among them.  Perhaps, in the Republican vision of American health care, this marketplace had every right to bestow on Americans their own generation of thalidomide babies, not just the small number whose mothers received the drug as part of the American licensee’s advance distribution of samples to 1,267 physicians.

If we’re going to turn back the clock and recreate a Jacksonian medical universe free of intrusive, expensive, innovation-stifling, rights-abrogating big government, let’s go the whole nine yards.  Let’s repudiate the Pure Food and Drugs Act of 1906, which compelled drug companies to list the ingredients of their drugs on the labels.  Sure, prior to the act most remedies aimed at children were laced with alcohol, cocaine, opium, and/or heroin, but was this so bad?  At least these tonics, unlike Elixir Sulfanilamide, didn’t kill the kids, and the 1906 Act did put us on the path to government overregulation.  And, anyway, it’s up to parents, not the federal government, to figure out what their kids ingest.  Let them do their own chemical analyses (or better yet, contract unregulated for-profit labs to do the analyses for them) and slug it out with the drug companies.

And, while we’re at it, let’s roll back the establishment in 1902 of the brazenly socialistic Public Health and Marine Hospital Service, with its “big government” division of pathology and bacteriology.  Okay, it did a few things Republican candidates would likely applaud, like preventing incoming immigrants from coming ashore with infectious diseases like cholera, yellow fever, and bubonic plague.  But the Service couldn’t leave well enough alone. With its federal budget and laboratory of government employees, it went on to identify infectious diseases like typhoid fever, tularemia, and undulant fever.  Then, during World War I, after its name had been shortened to the Public Health Service, it isolated the organisms responsible for epidemic meningitis and developed tetanus antitoxin and antityphoid vaccine.  But, hey, private enterprise of the time would have addressed these issues better and more cost effectively, right?  And it wouldn’t have placed us on the road to socialist perdition.

Compulsory vaccination for smallpox and diphtheria?  State laws that beginning in 1893 required public schools to exclude from enrollment any student who could not present proof of vaccination?  Forget it.  States and municipalities had no right forcibly to intrude into the lives of children with their public health inspectors, followed by school physicians with their vials of toxin-antitoxin.  What was this if not socialist medicine, with the state abrogating the rights of parents and school principals alike – the former with the right to keep their children unvaccinated, that they might contract infection and pass it on to classmates and family members; the latter with the right to keep school enrollment as high as possible without government interference?

Here’s the point of this exercise in conjecture:  If we’re going to have a national debate about health care, then our candidates must cease and desist from using evocative words that incite fear and loathing but mean nothing because they mean anything and everything.  You can’t have a debate without people capable of debate, which is to say, people who grasp ideas as something other than sound bites that mobilize primitive emotions.  Debaters make arguments and cite evidence that support them; they don’t throw out words and wait for a primal scream.

It would be nice if we had presidential candidates willing and able to explain their take on specific ideas and then wrestle with the applicability of those ideas to the real-life problems of specific groups of Americans.  It would be nicer still if all this explaining and wrestling and applying were informed by the lessons of history.  I believe we will have such debates shortly after hell freezes over.  Therefore, I offer my own ideational stimulus package to inch us toward this goal.  I propose an Act of Congress that proscribes the use of certain words and phrases among all presidential candidates.  Each time a candidate uses a proscribed word or phrase in a campaign speech, a TV commercial, or an internet posting, he or she, if nominated, forfeits one electoral vote earned in the general election.  In the realm of health care, “socialism,” “socialist medicine,” “big government,” “death panels,” “overregulation,” “the people,” and “the American way” would top the list.  Such terms cannot be part of a national debate because they do not promote reasoned exchange.  They have emotional resonance but nothing else.  In fact, they preclude debate by allowing the word or phrase in question to carry an implicit meaning that reaches consciousness only as a gut-churning abstraction.  Gut-churning abstractions, be it noted, tend to be historically naïve and empirically empty.

So I end where I began:  What exactly do our Republican candidates mean by “socialized medicine” other than a global repudiation of the health care reform bill of 2010?  Do they mean that medicine was just socialist enough before the bill passed, but that specific components of the bill – like preventing insurers from denying coverage to people with preexisting conditions – take the country to a point of socialist excess serious enough to warrant abrogating the new protections the bill affords uninsured and underinsured Americans?  Or perhaps American health care, even before the legislation, was simply too socialist, so that it becomes incumbent on our elected leaders to turn back the clock, undo past legislative achievements, reverse specific governmental policies, and disembowel certain regulatory agencies.  But if the latter, exactly which laws and policies and agencies must be sacrificed on the altar of a free and open medical marketplace?  I don’t know what the Republican candidates have in mind, but I’m all ears – once they stop lobbing word grenades and actually make an argument.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

My Doctor, My Friend

In a piece several months ago  in the Boston Globe, “Blurred Boundaries Between Doctor and Patient,” columnist and primary care internist Suzanne Koven writes movingly of her patient Emma, whom Koven befriended over the last 15 years of Emma’s life.  “Emma and I met frequently to gossip, talk about books and politics, and trade stories about our lives,” she remarks.  “She came to my house for dinner several times, and my husband and kids joined me at her 90th birthday party.  When, at 92, Emma moved reluctantly into a nursing home, I brought her the bagels and lox she craved – rich, salty treats her doctor had long discouraged her from eating.  Here’s the funny part:  I was that doctor.”

Koven writes perceptively of her initial concern with doctor-patient boundaries (heightened, she admits, by her status as “a young female physician”), her ill-fated efforts to maintain her early ideal of professional detachment, and, as with Emma, her eventual understanding that the roles of physician and friend could be for the most part “mutually reinforcing.”

As a historian of medicine interested in the doctor-patient relationship, I reacted to Koven’s piece appreciatively but, as I confessed to her, sadly.  For her initial concern with “blurred boundaries” and her realization after years of practice about the compatibility of friendship with primary medical care only underscore the fragmented and depersonalized world of contemporary medicine, primary care included.  By this, I mean that the quality of intimacy that grows out of most doctoring has become so shallow that we are led to scrutinize doctor-patient “friendship” as a problematic (Is it good?  Is it bad?  Should there be limits to it?) and celebrate instances of such friendship as signal achievements.   Psychoanalysts, be it noted, have been pondering these questions in their literature for decades, but they at least have the excuse of their method, which centrally implicates the analysis and resolution of transference with patients who tend to become inordinately dependent on them.

My father, William Stepansky, like many of the WWII generation, befriended his patients, but he befriended them as their doctor.  That is, he understood his medicine to include human provisions of a loving and Hippocratic sort.  Friendly two-way extramedical queries about his family, contact at community events, attendance at local weddings and other receptions – these were not boundary-testing land mines but aspects of community-embedded caregiving.  But here’s the rub:  My father befriended his patients as their doctor; his friendship was simply the caring dimension of his caregiving.  What, after all, did he have in common with the vast majority of his patients?  They were Protestants and Catholics, members of the Rotary and Kiwanis Clubs who attended the local churches and coached Little League baseball and Pop Warner football.  He was an intellectual East European Jew, a serious lifelong student of the violin whose leisure time was spent practicing, reading medical journals, and tending to his lawn.

And yet to his patients, he was always a special friend, though he himself would admit nothing special about it:  his friendship  was simply the human expression of his calling.  He did not (to my knowledge) bring anyone bagels and lox or pay visits to chat about books or politics, but he provided treatment (including ongoing supportive psychotherapy) at no charge, accepted payment in kind, and visited patients in their homes when they became too elderly or infirm to come to the office.  Other routine “friendly” gestures included charging for a single visit when a mother brought a brood of sick children to the office during the cold season.  And when elderly patients became terminal, they did not have to ask – he simply began visiting them regularly in their homes to provide what comfort he could and to let them know they were on his mind.

When he announced his impending retirement to his patients in the fall of 1990, his farewell letter began “Dear Friend” and then expressed regret at “leaving many patients with whom I have shared significant life experience from which many long-term friendships have evolved.”  “It has been a privilege to serve as your physician for these many years,” he concluded.  “Your confidence and friendship have meant much to me.”  When, in my research for The Last Family Doctor, I sifted through the bags of cards and letters that followed this announcement, I was struck by the number of patients who not only reciprocated my father’s sentiment but summoned the words to convey deep gratitude for the gift of their doctor’s friendship.

In our own era of fragmented multispecialty care, hemmed in by patient rights, defensive medicine, and concerns about boundary violations, it is far from easy for a physician to “friend” a patient as physician, to be and remain a physician-friend.  Furthermore, physicians now wrestle with the ethical implications of “friending” in ways that are increasingly dissociated from a medical identity.  Many choose to forgo professional distance at the close of a work day.  No less than the rest of us, physicians seek multicolored self states woven of myriad connective threads; no less than the rest of us, they are the Children of Facebook.

But there is a downside to this diffusion of connective energy.  When, as a society, we construe the friendship of doctors as extramedical, when we pull it into the arena of depersonalized connecting fostered by social media, we risk marginalizing the deeper kind of friendship associated with the medical calling: the physician’s nurturing love of the patient.   And we lose sight of the fact that, until the final two decades of the 19th century,  when advances in cellular biology, experimental physiology, bacteriology, and pharmacology ushered in an era of specific remedies for specific ailments, most effective doctoring – excluding only a limited number of surgeries – amounted to little more than just such friendship, such comfortable and comforting “friending” of sick and suffering people.

And this takes us back to Suzanne Koven, who imputes the “austere façade” of her medical youth to those imposing 19th-century role models “whose oil portraits lined the walls of the hospital [MGH] in which I did my medical training.”  Among the grim visages that stared down from on high was that of the illustrious James Jackson, Sr., who brought Jenner’s technique of smallpox vaccination to the shores of Boston in 1800, became Harvard’s second Hersey Professor of the Theory and Practice of Medicine in 1812, and was a driving force in the founding of MGH, which opened its doors in 1821.  Koven cites a passage from the second of Jackson’s Letters to a Young Physician (1855) in which he urges his young colleague to “abstain from all levity” and “never exact attention to himself.”

But why should absence of levity and focal concern with the patient be tantamount to indifference, coolness, the withholding of physicianly friendship?  Was Jackson really so forbidding a role model?  Composing his Letters in the wake of the cholera epidemic of 1848, when “regular” remedies such as bleeding and purging proved futile and only heightened the suffering of  thousands, Jackson cautioned modesty when it came to therapeutic pretensions.  He abjured the use of drugs “as much as possible,” and added that “the true physician takes care of his patient without claiming to control the disease in all cases.” Indeed he sought to restore “cure” to its original Latin meaning, to curare, the sense in which “to cure meant to take care.”  “The physician,” he instructed his protégé,

“may do very much for the welfare of the sick, more than others can do, although he does not, even in the major part of cases, undertake to control and overcome the disease by art.  It was with these views that I never reported any patients cured at our hospital.  Those who recovered their health before they left the house were reported as well, not implying that they were made so by the active treatment they had received there.  But it was to be understood that all patients received in that house were to be cured, that is, taken care of” [Letters to a Young Physician, p. 16, Jackson’s emphasis].

And then he moved on to the narrowing of vision that safeguarded the physician’s caring values, his cura:

“You must not mistake me.  We are not called upon to forget ourselves in our regard for others.  We do not engage in practice merely from philanthropy.  We are justified in looking for both profit and honor, if we give our best services to our patients; only we must not be thinking of these when at the bedside.  There the welfare of the sick must occupy us entirely” [Letters to a Young Physician, pp. 22-23].

Koven sees the Hippocratic commitment that lies beneath Jackson’s stern glance and, with the benefit of hindsight, links it to her friendship with Emma. “As mutually affectionate as our friendship was,” she concludes, “her health and comfort were always its purpose.”  Indeed.  For my father and any number of caring generalists, friendship was prerequisite to clinical knowing and foundational to clinical caring.  It was not extramural, not reserved for special patients, but a way of being with all patients.  And this friendship for his patients, orbiting around a sensibility of cura and a wide range of procedural activities, was not a heavy thing, leaden with solemnity.  It was musical.  It danced.

In the early 60s, he returns from a nursing home where he has just visited a convalescing patient.  I am his traveling companion during afternoon house calls, and I greet him on his return to the car.  He looks at me and with a sly grin remarks that he has just added “medicinal scotch” to the regimen of this elderly gentleman, who sorely missed his liquor and was certain a little imbibing would move his rehab right along.  It was a warmly caring gesture worthy of Osler, that lover of humanity, student of the classics, and inveterate practical joker.  And a generation before Osler, the elder Jackson would have smiled.  Immediately after cautioning the young physician to “abstain from all levity,” he added: “He should, indeed, be cheerful, and, under proper circumstances, he may indulge in vivacity and in humor, if he has any.  But all this should be done with reference to the actual state of feeling of the patient and of his friends.”  Just so.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.