
Procedural Rural Medicine

“Primary care practice in the future may be more akin to an Amish barn-raising than care delivered by the fictional Marcus Welby.” – Valerie E. Stone, et al., “Physician Education and Training in Primary Care” (2010)[1]

Current proposals to remedy the crisis in primary care, especially among those Americans living in small, rural communities, are politically correct (or, in the case of J-1 waivers for foreign-trained physicians, ethically unacceptable) gestures.  Small adjustments in Medicare reimbursement schedules for physicians serving the underserved and unenforceable mandates by state legislatures that public medical schools “produce” more primary care physicians are all but meaningless.  Rural medicine programs at a handful of medical colleges basically serve the tiny number of rural-based students who arrive at medical school already committed to serving the underserved.  Such programs have had little if any impact on a crisis of systemic proportions.  If we want to pull significant numbers of typical medical students into primary care, we must empower them and reward them – big time.  So what exactly do we do?

  1. We phase out  “family medicine” for reasons I have adduced and replace it with a new specialty that will supplement internal medicine and pediatrics as core primary care specialties.  I term the new specialty procedural rural medicine (PRM) and physicians certified to practice it procedural care specialists.  Self-evidently, many procedural rural specialists will practice in urban settings.  The “rural” designation simply underscores the fact that physicians with this specialty training will be equipped to care for underserved populations (most of whom live in rural areas) who lack ready  access to specialist care.  Such care will be procedurally enlarged beyond the scope of contemporary family medicine.
  2. Procedural care specialists will serve the underserved, whether in private practice or under the umbrella of Federally Qualified Health Centers, Rural Health Centers, or the National Health Service Corps. They will complete a four-year residency that equips all rural care specialists to perform a range of diagnostic and treatment procedures that primary care physicians now occasionally perform in certain parts of the country (e.g., colposcopy, sigmoidoscopy, nasopharyngoscopy), but more often do not.  The residency would also equip them to perform minor surgery, including office-based dermatology, basic podiatry, and wound management.  I leave it to clinical educators to determine exactly which baseline procedures can be mastered within a general four-year rural care residency, and I allow that it may be necessary to expand the residency to five years.  I further allow for procedural tracks within the final year of a procedural care program, so that certain board-certified procedural care specialists would be trained to perform operative obstetrics whereas others would be trained to perform colonoscopy.[2]  The point is that all rural care proceduralists would be trained to perform a range of baseline procedures.  As such, they would be credentialed by hospitals as “specialists” trained to perform those procedures and would be reimbursed by Medicare and third-party insurers at the same rates as the “root specialists” who perform particular procedures.
  3. Procedural care specialists will train in hospitals but will spend a considerable portion of their residencies learning and practicing procedurally oriented primary care in community health centers.  Such centers are the ideal venue for learning to perform “specialty procedures” under specialist supervision; they also inculcate the mindset associated with PRM, since researchers have found that residents who have their “continuity clinic” in community health centers are more likely to practice in underserved areas following training.[3]
  4. On completion of an approved four- or five-year residency in procedural rural medicine and the passing of PRM specialty boards, procedural care specialists will have all medical school and residency-related loans wiped off the books. Period.  This financial relief will be premised on a contractual commitment to work full-time providing procedural primary care to an underserved community for no less than, say, 10 years.
  5. Procedural care specialists who make this commitment deserve a bonus. They have become national resources in healthcare.  Aspiring big league baseball players who are drafted during the first four rounds of the MLB draft, many right out of high school, typically receive signing bonuses in the $100,000-$200,000 range.  In 2012, the top 100 MLB draftees each received a cool half million or more, and the top 50 received from one to six million.[4]  I propose that we give each newly trained procedural care specialist a $250,000 signing bonus in exchange for his or her 10-year commitment to serve the underserved.  Call me a wild-eyed radical, but I think physicians who have completed high school, four years of college, four years of medical school, and a four- or five-year residency program and committed themselves to bringing health care to underserved rural and urban Americans for 10 years deserve the same financial consideration as journeymen ball players given a crack at the big leagues.
  6. Taken together, the two foregoing proposals will make a start at decreasing the income gap between one group of primary care physicians (PCPs) and their colleagues in medical subspecialties and surgical specialties.  This gap decreases by nearly 50% the odds that a medical student will choose primary care; it is also associated with the career dissatisfaction of PCPs relative to other physicians, which may prompt them to retire earlier than their specialist colleagues.[5]
  7. I am not especially concerned about funding the debt waiver and signing bonuses for board-certified procedural care specialists.  These physicians will bring health care to over 60 million underserved Americans and, over time, they will be instrumental in saving the system, especially Medicare and Medicaid, billions of dollars.  Initial costs will be a  drop in the bucket in the context of American healthcare spending that consumed 17.9% of GDP in 2011.  Various funding mechanisms for primary care training – Title VII, Section 747 of the Public Health Service Act of 1963, the federal government’s Health Resources and Services Administration, Medicare – have long been in place, with the express purpose of expanding geographic distribution of primary care physicians in order to bring care to the underserved.  The Affordable Care Act of 2010 may be expected greatly to increase their funding.

————

These proposals offer an alternative vision for addressing the crisis in primary care that now draws only 3% of non-osteopathic physicians to federally designated Health Professional Shortage Areas and consigns over 20% of Americans to the care of 9% of the nation’s physicians.  The mainstream approach moves in a different direction, and the 2010 Macy Foundation-sponsored conference, “Who Will Provide Primary Care and How Will They Be Trained?,” typifies it.  Academic physicians participating in the conference sought to address the crisis in primary care through what amounts to a technology-driven resuscitation of the “family practice” ideology of the late 1960s.  For them, PCPs of the future will be systems-savvy coordinators/integrators with a panoply of administrative and coordinating skills.  In this vision of things, the “patient-centered medical home” becomes the site of primary care, and effective practice within this setting obliges PCPs to acquire leadership skills that focus on “team building, system reengineering, and quality improvement.”

To be sure, docs will remain leaders of the healthcare team, but their leadership veers away from procedural medicine and into the domain of “quality improvement techniques and ‘system architecture’ competencies to continuously improve the function and design of practice systems.”  The “systems” in question are healthcare teams, redubbed “integrated delivery systems.”  It follows that tomorrow’s PCPs will be educated into a brave new world of “shared competencies” and interprofessional collaboration, both summoning “the integrative power of health information technology as the basis of preparation.”[6]

When this daunting skill set is enlarged still further by curricula addressing prevention and health promotion, wellness and “life balance” counseling, patient self-management for chronic disease, and strategies for engaging patients in all manner of decision-making, we end up with new-style primary care physicians who look like information-age reincarnations of the “holistic” mind-body family practitioners of the 1970s. What exactly will be dropped from existing medical school curricula and residency training programs to make room for acquisition of these new skill sets remains unaddressed.

I have nothing against prevention, health promotion, wellness, “life balance” counseling, and the like. Three cheers for all of them – and for patient-centered care and shared decision-making as well.  But I think health policy experts and medical academics have taken to theorizing about such matters – and the information-age skill sets they fall back on – in an existential vacuum, as if “new competencies in patient engagement and coaching”[7] can be taught didactically as opposed to being earned in the relational fulcrum of the clinical encounter.  “Tracking and assisting patients as they move across care settings,” “coordinating services with other providers,” providing wellness counseling, teaching self-management strategies, and the like – all these things finally fall back on a trusting doctor-patient relationship.  In study after study, patient trust, a product of empathic doctoring, has been linked to compliance, subjective well-being, and treatment outcome.  Absent such trust, information-age “competencies” will have limited impact; they will briefly blossom but not take root in transformative ways.

I suggest we attend to first matters first.  We must fortify patient trust by training primary care doctors to do more, procedurally speaking, and then reward them for caring for underserved Americans who urgently need to have more done for them.  The rest – the tracking, assisting, coordinating, and counseling – will follow.  And the patient-centered medical home of the future will have patient educators, physician assistants, nurse practitioners, and social workers to absorb physicians’ counseling functions, just as it will have practice managers and care coordinators to guide physicians through the thicket of intertwining  information technologies.  We still have much to learn from Marcus Welby – and William Stepansky – on the community-sustaining art of barn-raising and especially the difference between barns well and poorly raised.


[1] Quoted from “Who Will Provide Primary Care And How Will They Be Trained?”  Proceedings of a conference chaired by L. Cronenwett & V. J. Dzau, transcript edited by B. J. Culliton & S. Russell (NY:  Josiah Macy, Jr., Foundation, 2010), p. 148.

[2] The prerogative to develop specialized knowledge and treatment skills within certain areas has always been part of general practice, and it was explicitly recommended in the Report of the AMA Ad Hoc Committee on Education for Family Practice (the Willard Committee) of 1966 that paved the way for establishment of the American Board of Family Practice in 1969.  See N.A., Family Practice: Creation of a Specialty (American Academy of Family Physicians, 1980), p.  41.

[3] C. G. Morris & F. M. Chen, “Training residents in community health centers:  facilitators and barriers,” Ann. Fam. Med., 7:488-94, 2009; C. G. Morris, et al., “Training family physicians in community health centers,” Fam. Med., 40:271-6, 2008; E. M. Mazur, et al., “Collaboration between an internal medicine residency program and a federally qualified health center: Norwalk hospital and the Norwalk community health center,” Acad. Med., 76: 1159-64, 2001.

[5] “Specialty and geographic distribution of the physician workforce:  What influences medical student & resident choices?”  A publication of the Robert Graham Center, funded by the Josiah Macy, Jr. Foundation (2009), pp. 5, 47; “Who Will Provide Primary Care And How Will They Be Trained?” (n. 1), p. 140.

[6] “Who Will Provide Primary Care And How Will They Be Trained?” (n. 1), pp. 147, 148.

[7] Ibid., p. 151.

Copyright © 2013 by Paul E. Stepansky.  All rights reserved.

Re-Visioning Primary Care

Existing approaches to the looming crisis of primary care are like Congressional approaches to our fiscal crisis.  They have been, and will continue to be, unavailing because they shy away from structural change that would promote equity.  I suggest the time has come to think outside the financial box of subsidization and loan repayment for medical students and residents who agree to serve the medically underserved for a few years.  Here are my propositions and proposals:

  1. We should redefine “primary care” in a way that gives primary care physicians (PCPs) a fighting chance of actually functioning as specialists. This means eliminating “family medicine” altogether.  The effort to make the family physician (FP) (until 2003, the “family practitioner”) a specialist among specialists was tried in the 70s and by and large failed – not for FP patients, certainly, but for FPs themselves, who, by most accounts, failed to achieve the academic stature and clinical privileges associated with specialist standing.  It is time to face this hard fact and acknowledge that the era of modern general practice/family medicine, as it took shape in the 1940s and came to fruition in the quarter century following World War II, is at an end.  Yet another round of financial incentives that make it easier for medical students and residents to “specialize” in family medicine will fail.  “Making it easier” will not make it easy enough, nor will it overcome a specialist mentality that has been entrenched since the 1950s.  Further policy-related efforts to increase the tenability of family medicine, such as increasing Medicare reimbursement for primary care services or restructuring Medicare to do away with primary care billing costs, will be socioeconomic Band-Aids that cover over the professional, personal, familial, and, yes, financial strains associated with family medicine in the twenty-first century.  Vague and unenforceable “mandates” by state legislatures directing public medical schools to “produce” more primary care physicians have been, and will continue to be, political Band-Aids.[1]
  2. As a society, we must re-vision generalist practice as the province of internists and pediatricians.  We must focus on developing incentives that encourage internists and pediatricians to practice general internal medicine and general pediatrics, respectively.  This reconfiguring of primary care medicine will help advance the “specialty” claims of primary care physicians.  Historically speaking, internal medicine and pediatrics are specialties, and the decision-making authority and case management prerogatives of internists and pediatricians are, in many locales, still those of specialists. General internists become “chief medical officers” of their hospitals; family physicians, with very rare exceptions, do not.  For a host of pragmatic and ideological reasons, many more American medical students at this juncture in medical history will enter primary care as internists and pediatricians than as family physicians.
  3. Part of this re-visioning and reconfiguring must entail recognition that generalist values are not synonymous with generalist practice.  Generalist values can be cultivated (or neglected) in any type of postgraduate medical training and implemented (or neglected) by physicians in any specialty. There are caring physicians among specialists, just as there are less-than-caring primary care physicians aplenty.  Caring physicians make caring interventions, however narrow their gaze.  My wonderfully caring dentist only observes the inside of my mouth, but he is no less concerned with my well-being on account of it.  The claim of G. Gayle Stephens, one of the founders of the family practice specialty in the late 1960s, that internists, as a class, were zealous scientists committed to “a mechanistic and flawed concept of disease,” whereas family physicians, as a class, were humanistic, psychosocially embedded caregivers, was specious then and remains specious now.[2]  General internists are primary care physicians, and they can be expected to be no less caring (and, sadly, no more caring) of their patients than family physicians.  This is truer still of general pediatrics, which, as far back as the late nineteenth century, provided a decidedly patient-centered agenda for a cohort of gifted researcher-clinicians, many women physicians among them, whose growth as specialists (and, by the 1920s and 30s, as pediatric subspecialists) went hand-in-hand with an abiding commitment to the “whole patient.”[3]
  4. We will not remedy the primary care crisis simply by eliminating family medicine and developing incentives to keep internists and pediatricians in the “general practice” of their specialties.  In addition, we need policy initiatives to encourage subspecialized internists and subspecialized pediatricians to continue to work as generalists.  This has proven a workable solution in many developed countries, where the provision of primary care by specialists is a long-established norm.[4]  And, in point of fact, it has long been a de facto reality in many smaller American communities, where medical and pediatric subspecialists in cardiology, gastroenterology, endocrinology, et al. also practice general internal medicine and general pediatrics.  Perhaps we need a new kind of mandate:  that board-certified internists and pediatricians practice general internal medicine and general pediatrics, respectively, for a stipulated period (say, two years) before beginning their subspecialty fellowships.

Can we remedy the shortage of primary care physicians through the conduits of internal medicine and pediatrics?  No, absolutely not.  Even if incentive programs and mandates increase the percentage of internists and pediatricians who practice primary care, they will hardly provide the 44,000-53,000 new primary care physicians we will need by 2025.[5]  Nor will an increase in the percentage of medical students who choose primary care pull these new providers to the underserved communities where they are desperately needed.  There is little evidence that increasing the supply of primary care physicians affects the (mal)distribution of those providers across the country.  Twenty percent of the American population lives in nonmetropolitan areas and is currently served by 9% of the nation’s physicians; over one third of these rural Americans live in what the Health Resources and Services Administration of the U.S. Department of Health and Human Services designates “Health Professional Shortage Areas” (HPSAs) in need of primary medical care.[6]  Efforts to induce foreign-trained physicians to serve these communities by offering them J-1 visa waivers have barely made a dent in the problem and represent an unconscionable “brain drain” of the medical resources of developing countries.[7]  The hope that expansion of rural medicine training programs at U.S. medical schools, taken in conjunction with increased medical school enrollment, could meet the need for thousands of new rural PCPs is not being borne out.  Graduating rural primary care physicians has not been, and likely will not be, a high priority for most American medical schools, a reality acknowledged by proponents of rural medicine programs.[8]

Over and against the admirable but ill-fated initiatives on the table, I propose two focal strategies for addressing the primary care crisis as a crisis of uneven distribution of medical services across the population:

  1. We must expend political capital and economic resources to encourage people to become mid-level providers, i.e., physician assistants (PAs) and nurse practitioners (NPs), and then develop incentives to keep them in primary care.  This need is more pressing than ever given (a) evidence that mid-level practitioners are more likely to remain in underserved areas than physicians,[9] and (b) the key role of mid-level providers in the team delivery systems, such as Accountable Care Organizations and Patient-Centered Medical Homes, promoted by the Patient Protection and Affordable Care Act of 2010.  Unlike other health care providers, PAs change specialties over the course of their careers without additional training, and since the late 1990s, more PAs have left family medicine than have entered it.  It has become incumbent on us as a society to follow the lead of the armed forces and the Veterans Health Administration in exploiting this health care resource.[10]  To wit, (a) we must provide incentives to attract newly graduated PAs to primary care in underserved communities and to pull specialty-changing “journeyman PAs” back to primary care,[11] and (b) we must ease the path of military medics and corpsmen returning from Iraq and Afghanistan into PA programs by waiving college-degree eligibility requirements that have all but driven them away from these programs.[12]  Although the physician assistant profession came into existence in the mid-1960s to capitalize on the skill set and experience of medical corpsmen returning from Vietnam, contemporary PA programs, with few exceptions, no longer recruit military veterans.[13]
  2. Finally, and most controversially, we need a new primary care specialty aimed at providing comprehensive care to rural and underserved communities.  I designate this new specialty Procedural Rural Medicine (PRM) and envision it as the most demanding – and potentially most rewarding – primary care specialty.  PRM would borrow and enlarge the recruitment strategies employed by the handful of medical schools with rural medicine training programs.[14]  But it would require a training curriculum, a residency program, and a broad system of incentives all its own.

In the next installment of this series, I will elaborate my vision of Procedural Rural Medicine and explain how and why it differs from family medicine as it currently exists.


[1] D. Hogberg, “The Next Exodus: Primary-Care Physicians and Medicare,” National Policy Analysis #640 (http://www.nationalcenter.org/NPA640.html); C. S. Weissert & S. L. Silberman, “Sending a policy signal: state legislatures, medical schools, and primary care mandates,” Journal of Health Politics, Policy and Law, 23:743-770, 1998.

[2] G. G. Stephens, The Intellectual Basis of Family Practice (Tucson, AZ: Winter Publishing, 1982), pp. 77, 96.

[3] See E. S. More, Restoring the Balance: Women Physicians and the Profession of Medicine, 1850-1995 (Cambridge: Harvard University Press, 1999), pp. 170-72.  Ethel Dunham, Martha Eliot, Helen Taussig, Edith Banfield Jackson, and Virginia Apgar stand out among the pioneer pediatricians who were true generalist-specialists.

[4] See W. J. Stephen, An Analysis of Primary Care: An International Study (Cambridge: Cambridge University Press, 1979) and B. S. Starfield, Primary Care: Concept, Evaluation and Policy (Oxford : Oxford University Press, 1992).

[5] The range reflects the different projection protocols employed by researchers.  See M. J. Dill & E. S. Salsberg, “The complexities of physician supply and demand: projections through 2025,” Association of American Medical Colleges, 2008 (http://www.innovationlabs.com/pa_future/1/background_docs/AAMC%20Complexities%20of%20physician%20demand,%202008.pdf); J. M. Colwill, et al., “Will generalist physician supply meet demands of an increasing and aging population?,” Health Affairs, 27:w232-w241, 2008; and S. M. Petterson, et al., “Projecting US primary care physician workforce needs:  2010-2025,” Ann. Fam. Med., 10:503-509, 2012.

[6] See the Federal Office of Rural Health Policy, “Facts about . . . rural physicians” (http://www.shepscenter.unc.edu/rural/pubs/finding_brief/phy.html ) and J. D. Gazewood, et al., “Beyond the horizon: the role of academic health centers in improving the health of rural communities,” Acad. Med., 81:793-797, 2006.  In all, the federal government has designated 5,848 geographical areas HPSAs in need of primary medical care (http://datawarehouse.hrsa.gov/factSheetNation.aspx).

[7] These non-immigrant visa waivers, authorized since 1994 by the Physicians for Underserved Areas Act (the “Conrad State 30” Program), allow foreign-trained physicians who provide primary care in underserved communities for at least three years to waive the two-year home residence requirement.  That is, these physicians do not have to return to their native countries for at least two years prior to applying for permanent residence or an immigration visa.  On the negative impact of this program on health equity and, inter alia, the global fight against HIV and AIDS, see V. Patel, “Recruiting doctors from poor countries: the great brain robbery?,” BMJ, 327:926-928, 2003; F. Mullan, “The metrics of the physician brain drain,” New Engl. J. Med., 353:1810-1818, 2005; and N. Eyal & S. A. Hurst, “Physician brain drain:  can nothing be done?,” Public Health Ethics, 1:180-192, 2008.

[8] See H. K. Rabinowitz, et al., “Medical school programs to increase the rural physician supply: a systematic review,” Acad. Med., 83:235-243, at 242:  “It is, therefore, unlikely that the graduation of rural physicians will be a high priority for most medical schools, unless specific regulations require this, or unless adequate financial resources are provided as incentives to support this mission.”

[9] U. Lehmann, “Mid-level health workers: the state of evidence on programmes, activities, costs and impact on health outcomes,” World Health Organization, 2008 (http://www.who.int/hrh/MLHW_review_2008.pdf).

[10] R. S. Hooker, “Federally employed physician assistants,” Mil. Med., 173:895-899, 2008.

[11] J. F. Cawley & R. S. Hooker, “Physician assistant role flexibility and career mobility,” JAAPA, 23:10, 2010.

[12] D. M. Brock, et al., “The physician assistant profession and military veterans,” Mil. Med., 176:197-203, 2011.

[13] N. Holt, “‘Confusion’s masterpiece’:  the development of the physician assistant profession,” Bull. Hist. Med., 72:246-278, 1998; Brock, op. cit., p. 197.

[14] H. K. Rabinowitz, et al., “Critical factors for designing programs to increase the supply and retention of rural primary care physicians,” JAMA, 286:1041-48, 2001; H. K. Rabinowitz, et al., “The relationship between entering medical students’ backgrounds and career plans and their rural practice outcomes three decades later,” Acad. Med., 87:493-497, 2012; H. K. Rabinowitz, et al., “The relationship between matriculating medical students’ planned specialties and eventual rural practice outcomes,” Acad. Med., 87:1086-1090, 2012.

Copyright © 2013 by Paul E. Stepansky.  All rights reserved.

Wanted: Primary Care Docs

“It will readily be seen that amid all these claimants for pathological territory there is scarcely standing-room left for the general practitioner.” – Andrew H. Smith, “The Family Physician” (1888)

“The time when every family, rich or poor, had its own family physician, who knew the illnesses and health of its members and enjoyed the confidence of the upgrowing boys and girls during two or three generations, is gone.” – Abraham Jacobi, “Commercialized Medicine” (1910)

“More recent investigation shows that almost one-third of the towns of 1,000 or less throughout the United States which had physicians in 1914 had none in 1925. . . . it will be seen at a glance that the present generation of country doctors will have practically disappeared in another ten years.” – A. F. van Bibber, “The Swan Song of the Country Doctor” (1929)

“But complete medical care means more than the sum of the services provided by specialists, no matter how highly qualified.  It must include acceptance by one doctor of complete responsibility for the care of the patient and for the coordination of specialist, laboratory, and other services.  Within a generation, if the present situation continues, few Americans will have a personal physician do this for them.” – David D. Rutstein, “Do You Really Want a Family Doctor?” (1960)

“Whoever takes up the cause of primary care, one thing is clear: action is needed to calm the brewing storm before the levees break.” – Thomas Bodenheimer, “Primary Care – Will It Survive?” (2006)

“Potential access challenges”—that’s the current way of putting the growing shortage of primary care physicians (PCPs).  Euphemism melodious of care incommodious. Aggravated by the 33 million Americans shortly to receive health insurance through the Patient Protection and Affordable Care Act of 2010 – health insurance leads to increased use of physicians – the chronic shortage of primary care physicians is seen as a looming crisis capable of dragging us back into the medical dark ages.  Medical school graduates continue to veer away from the less remunerative primary care specialties, opting for the  well-fertilized and debt-annihilating verdure of the subspecialties.  Where then will we find the 51,880 additional primary care physicians that, according to the most recent published projections,[1] we will need by 2025 to keep up with an expanding, aging, and more universally insured American population?

Dire forecasts about the imminent disappearance of general practitioners or family practitioners or, more recently, primary care physicians have been part of the medical-cum-political landscape for more than a century.  Now the bleak scenarios are back in vogue, and they are more frightening than ever, foretelling a consumer purgatory of lengthy visits to emergency rooms for private primary care – or worse.  Dr. Lee Green, chair of Family Medicine at the University of Alberta, offers this bleak vision of a near future where patients are barely able to see primary care physicians at all:

Primary care will be past saturated with wait times longer and will not accept any new patients.  There will be an increase in hospitalizations and increase in death rates for basic preventable things like hypertension that was not managed adequately.[2]

I have no intention of minimizing the urgency of a problem that, by all measurable indices, has grown worse in recent decades. But I do think that Dr. Green’s vision is, shall we say, over the top.  It is premised on a traditional model of primary care in which a single physician assumes responsibility for a single patient.  As soon as we look past the traditional model and take into account structural changes in the provision of primary care over the past four decades, we are able to forecast a different, if still troubling, future.

Beginning in the 1970s, and picking up steam in the 1980s and 90s, primary care medicine was enlarged by mid-level providers (physician assistants, nurse practitioners, psychiatric nurses, and clinical social workers) who, in many locales, have absorbed the traditional functions of primary care physicians.  The role of these providers in American health care will only increase with implementation of the Patient Protection and Affordable Care Act and the innovative health delivery systems it promotes as solutions to the crisis in healthcare.

I refer specifically to the Act’s promotion of “Patient-Centered Medical Homes” (PCMHs) and “Accountable Care Organizations” (ACOs), both of which involve a collaborative melding of roles in the provision of primary care.  Both delivery systems seek to tilt the demographic and economic balance among medical providers back in the direction of primary care and, in the process, to render medical care more cost-effective through the use of electronic information systems, evidence-based care (especially the population-based management of chronic illnesses), and performance measurement and improvement.  To these ends, the new delivery systems equate primary care with “team-based care, in which physicians share responsibility with nurses, care coordinators, patient educators, clinical pharmacists, social workers, behavioral health specialists, and other team members.”[3]  The degree to which the overarching goals of these new models – reduced hospital admissions and readmissions and more integrated, cost-effective management of chronic illnesses – can be achieved will be seen in the years ahead.  But it is clear that these developments, propelled by the Affordable Care Act and the Obama administration’s investment of $19 billion to stimulate the use of information technology in medical practice, all point to the diminished role of the all-purpose primary care physician (PCP).

So we are entering a brave new world in which mid-level providers, all working under the supervision of generalist physicians in ever larger health systems, will assume an increasing role in primary care.  Indeed, PCMHs and ACOs, which attempt to redress the crisis in primary care, will probably have the paradoxical effect of relegating the traditional “caring” aspects of the doctor-patient relationship to nonphysician members of the health care team.  The trend away from patient-centered care on the part of physicians is already discernible in the technical quality objectives (like mammography rates) and financial goals of ACOs that increasingly pull primary care physicians away from relational caregiving.

The culprit here is time.  ACOs, for example, may direct PCPs to administer depression scales and fall risk assessments to all Medicare patients, the results of which must be recorded in the electronic record along with any “intervention” initiated.  In all but the largest health systems (think Kaiser Permanente), such tasks currently fall to the physician him- or herself.  The new delivery systems do not provide ancillary help for such tasks, which makes it harder still for overtaxed PCPs to keep on schedule and connect with their patients in more human, and less assessment-driven, ways.[4]

So, yes, we’re going to need many more primary care physicians, but perhaps not as many as Petterson and his colleagues project.  Their extrapolations from “utilization data” – the number of PCPs we will need to accommodate the number of office visits made by a growing, aging, and better insured American population at a future point in time – do not incorporate the growing reality of team-administered primary care.  The latter already includes patient visits to physician assistants, nurse practitioners, and clinical social workers and is poised to include electronic office “visits” via the Internet.  For health services researchers, this kind of distributed care suggests the reasonableness of equating “continuity of care” with “site continuity” (the place where we receive care) rather than “provider continuity” (the personal physician who provides that care).

Of course, we are still left with the massive and to date intractable problem of the uneven distribution of primary care physicians (or primary care “teams”) across the population.  Since the 1990s, attempts to pull PCPs to those areas where they are most needed have concentrated on the well-documented financial disincentives associated with primary care, especially in underserved, mainly rural areas.  Unsurprisingly, these disincentives invite financial solutions for newly trained physicians who agree to practice primary care for at least a few years in what the federal government’s Health Resources and Services Administration designates “Health Professional Shortage Areas” (HPSAs).  The benefit package currently in place includes medical school scholarships, loan repayment plans, and, beginning in 1987, a modest bonus payment program administered by Medicare Part B carriers.[5]

The most recent and elaborate proposal to persuade primary care physicians to go where they are most needed adopts a two-pronged approach.  It calls for creation of a National Residency Exchange that would determine the optimal number of  residencies in different medical specialties for each state, and then “optimally redistribute”  residency assignments state by state in the direction of underrepresented specialties, especially primary care specialties in underserved communities.  This would be teamed with a federally funded primary care loan repayment program, administered by Medicare, that would gradually repay participants’ loans over the course of their first eight years of post-residency primary care practice in an HPSA.[6]

But this and like-minded schemes will come to naught if medical students are not drawn to primary care medicine in the first place.  There was such a “draw” in the late 60s and early 70s; it followed the creation of “family practice” as a residency-based specialty and developed in tandem with social activist movements of the period.  But it did not last into the 80s and left many of its proponents disillusioned.  Despite the financial incentives already in place (including those provided by the federal government’s National Health Service Corps[7]) and the existence of “rural medicine” training programs,[8] there is no sense of gathering social forces that will pull a new generation of medical students into primary care.  Nor is there any reason to suppose that the dwindling number of medical students whose sense of calling leads to careers among the underserved will be drawn to the emerging world of primary care in which the PCP assumes an increasingly administrative (and data-driven) role as coordinator of a health care team.

In truth, I am skeptical that financial packages, even if greatly enlarged, can overcome the specialist mentality that emerged after World War II and is now long entrenched in American medicine.  Financial incentives assume that medical students would opt for primary care if not for financial disincentives that make it harder for them to do so.  Now, recent literature suggests that financial realities do play an important role in the choice of specialty.[9]  But there is more to choice of specialty than debt management and long-term earning power.  Specialism is not simply a veering away from generalism; it is a pathway to medicine with its own intrinsic satisfactions, among which are prestige, authority, procedural competence, problem-solving acuity, and considerations of lifestyle. These satisfactions are at present vastly greater in specialty medicine than those inhering in primary care.  This is why primary care educators, health economists, and policy makers place us (yet again) on the brink of crisis.

Financial incentives associated with primary care are important and probably need to be enlarged far beyond the status quo.  But at the same time, we need to think outside the box in a number of ways.  To wit, we need to rethink the meaning of generalism and its role in medical practice (including specialty practice).  And we need to find and nurture (and financially support) more medical students who are drawn to primary care.  And finally, and perhaps most radically, we need to rethink the three current primary care specialties (pediatrics, general internal medicine, and family medicine) and the relationships among them.  Perhaps this long-established tripartite division is no longer the best way to conceptualize primary care and to draw a larger percentage of medical students to it.  I will offer my thoughts on these knotty issues in blog essays to follow.


[1] S. M. Petterson, et al., “Projecting US primary care physician workforce needs:  2010-2025,” Ann. Fam. Med., 10:503-509, 2012.

[2] Quoted in Nisha Nathan, “Doc Shortage Could Crash Health Care,” online at http://abcnews.go.com/Health/doctor-shortage-healthcare-crash/story?id=17708473.

[3] D. R. Rittenhouse & S. M. Shortell, “The patient-centered medical home:  will it stand the test of health reform?,” JAMA, 301:2038-2040, 2009, at 2039.  Among recent commentaries, see further D. M. Berwick, “Making good on ACOs’ promise – the final rule for the Medicare shared savings program,” New Engl. J. Med., 365:1753-1756, 2011; D. R. Rittenhouse, et al., “Primary care and accountable care – two essential elements of delivery-system reform,” New Engl. J. Med., 361:2301-2303, 2009, and E. Carrier, et al., “Medical homes:  challenges in translating theory into practice,” Med. Care, 47:714-722, 2009.

[4] I am grateful to my brother, David Stepansky, M.D., whose medical group participates in both PCMH and ACO entities, for these insights on the impact of participation on PCPs who are not part of relatively large health  systems.

[5] E.g., R. G. Petersdorf, “Financing medical education: a universal ‘Berry plan’ for medical students,” New Engl. J. Med., 328:651, 1993; K. M. Byrnes, “Is there a primary care doctor in the house? the legislation needed to address a national shortage,” Rutgers Law Journal, 25:799, 806-808, 1994.  On the Medicare Incentive Payment Program for physicians practicing in designated HPSAs – and the inadequacy of the 10% bonus system now in place – see L. R. Shugarman & D. O. Farley, “Shortcomings in Medicare bonus payments for physicians in underserved areas,” Health Affairs, 22:173-78, 2003, at 177 (online at http://content.healthaffairs.org/content/22/4/173.full.pdf+html) and S. Gunselman, “The Conrad ‘state-30’ program:  a temporary relief to the U.S. shortage of physicians or a contributor to the brain drain,” J. Health & Biomed. Law, 5:91-115, 2009, at 107-108.

[6] G. Cheng, “The national residency exchange: a proposal to restore primary care in an age of microspecialization,” Amer. J. Law & Med., 38:158-195, 2012.

[7] The NHSC, founded in 1970, provides full scholarship support for medical students who agree to serve as PCPs in high-need, underserved locales, with one year of service for each year of support provided by the government.  For medical school graduates who have already accrued debt, the program provides student loan repayment for physicians who commit to at least two years of service at an approved site. Descriptions of the scholarship and loan repayment programs are available at http://nhsc.hrsa.gov/

[8] See the rationale for rural training programs set forth in a document of the Association of American Medical Colleges, “Rural medicine programs aim to reverse physician shortage in outlying regions,” online at http://www.aamc.org/newsroom/reporter/nov04/rural.htm.  One of the best such programs, Jefferson Medical College’s Physician Shortage Area Program, is described and its graduates profiled in H. K. Rabinowitz, Caring for the country:  family doctors in small rural towns (NY: Springer, 2004).

[9] See especially the 2003 white paper by the AMA’s taskforce on student debt, online at http://www.ama-assn.org/ama1/pub/upload/mm/15/debt_report.pdf and, more recently, P. A. Pugno, et al., “Results of the 2009 national resident matching program: family medicine,” Fam. Med., 41:567-577, 2009 and H. S. Teitelbaum, et al., “Factors affecting specialty choice among osteopathic medical students,” Acad. Med., 84:718-723, 2009.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

The Costs of Medical Progress

When historians of medicine introduce students to the transformation of acute, life-threatening, often terminal illness into long-term, manageable, chronic illness – a major aspect of 20th-century medicine – they immediately turn to diabetes.  There is Diabetes B.I. (diabetes before insulin) and diabetes in the Common Era, i.e., Diabetes A.I. (diabetes after insulin).  Before Frederick Banting, who knew next to nothing about the complex pathophysiology of diabetes, isolated insulin in his Toronto laboratory in 1922, juvenile diabetes was a death sentence; its young victims were consigned to starvation diets and early deaths.  Now, in the Common Era, young diabetics grow into mature diabetics and type II diabetics live to become old diabetics.  Life-long management of what has become a chronic disease will take them through a dizzying array of testing supplies, meters, pumps, and short- and long-term insulins.  It will also put them at risk for the onerous sequelae of long-term diabetes:  kidney failure, neuropathy, retinopathy, and amputation of lower extremities.  Of course all the associated conditions of adult diabetes can be managed more or less well, with their own technologically driven treatments (e.g., hemodialysis for kidney failure) and long-term medications.

The chronicity of diabetes is both a blessing and curse.  Chris Feudtner, the author of the outstanding study of its transformation, characterizes it as a “cyclical transmuted disease” that no longer has a stable “natural” history. “Defying any simple synopsis,” he writes, “the metamorphosis of diabetes wrought by insulin, like a Greek myth of rebirth turned ironic and macabre, has led patients to fates both blessed and baleful.”[1]  He simply means that what he terms the “miraculous therapy” of insulin only prolongs life at the expense of serious long-term problems that did not exist, that could not exist, before the availability of insulin.  So depending on the patient, insulin signifies a partial victory or a foredoomed victory, but even in the best of cases, to borrow the title of Feudtner’s book, a victory that is “bittersweet.”

It is the same story whenever new technologies and new medications override an otherwise grim prognosis.  Beginning in the early 1930s, we put polio patients (many of whom were kids) with paralyzed diaphragms and intercostal muscles into the newly invented Iron Lung.[2]  The machine’s electrically driven blowers created negative pressure inside the tank that made the kids breathe.  They could relax and stop struggling for air, though they required intensive, around-the-clock nursing care.[3]  Many survived but spent months or years, occasionally even lifetimes, in Iron Lungs.  Most regained enough lung capacity to leave their steel tombs (or were they nurturing wombs?) and graduated to a panoply of mechanical polio aids: wheelchairs, braces, and crutches galore.  An industry of rehab facilities (like FDR’s fabled Warm Springs Resort in Georgia) sprouted up to help patients regain as much function as possible.

Beginning in 1941, the National Foundation for Infantile Paralysis (NFIP), founded by FDR and his friend Basil O’Connor in 1937, footed the bill for the manufacture of Iron Lungs and then distributed them via regional centers to communities where they were needed.   The Lungs, it turned out, were foundation-affordable devices, and it was unseemly, even Un-American, to worry about the cost of hospitalization and nursing care for the predominantly young, middle-class white patients who temporarily resided in them, still less about the costs of post-Iron Lung mechanical appliances and rehab personnel that helped get them back on their feet.[4]  To be sure, African American polio victims were unwelcome at tony resort-like facilities like Warm Springs, but the NFIP, awash in largesse, made a grant of $161,350 to Tuskegee Institute’s Hospital so that it could build and equip its own 35-bed “infantile paralysis center for Negroes.”[5]

Things got financially dicey for the NFIP only when Iron Lung success stories, disseminated through print media, led to overuse.  Parents read the stories and implored doctors to give their stricken children the benefit of this life-saving invention – even when their children had a form of polio (usually bulbar polio) in the face of which the mechanical marvel was useless.  And what pediatrician, moved by the desperation of loving parents beholding a child gasping for breath, would deny them the small peace afforded by use of the machine and the around-the-clock nursing care it entailed?

The cost of medical progress is rarely the cost of this or that technology for this or that disease.  No, the cost corresponds to cascading “chronicities” that pull multiple technologies and treatment regimens into one gigantic flow.  We see this dynamic clearly in the development and refinement of hemodialysis for kidney failure.  Dialysis machines only became life-extenders in 1960, when Belding Scribner, working at the University of Washington Medical School, perfected the design of a surgically implanted Teflon cannula and shunt through which the machine’s tubing could be attached, week after week, month after month, year after year.  But throughout the 60s, dialysis machines were in such short supply that treatment had to be rationed:  Local medical societies and medical centers formed “Who Shall Live” committees to decide who would receive dialysis and who would not.  Public uproar followed, fanned by the newly formed National Association of Patients on Hemodialysis, most of whose members, be it noted, were white, educated, professional men.

In 1972, Congress responded to the pressure and decided to fund all treatment for end-stage renal disease (ESRD) through Section 299I of that year’s amendments to the Social Security Act.  Dialysis, after all, was envisioned as long-term treatment for only a handful of appropriate patients, and in 1973 only 10,000 people received the treatment at a government cost of $229 million.  But things did not go as planned.  In 1990, the 10,000 had grown to 150,000 and their treatment cost the government $3 billion.  And in 2011, the 150,000 had grown to 400,000 people and drained the Social Security Fund of $20 billion.

What happened?  Medical progress happened.  Dialysis technology was not static; it was refined and became available to sicker, more debilitated patients who encompassed an ever-broadening socioeconomic swath of the population with ESRD.  Improved cardiac care, drawing on its own innovative technologies, enabled cardiac patients to live long enough to go into kidney failure and receive dialysis.  Ditto for diabetes, where improved long-term management extended the diabetic lifespan to the stage of kidney failure and dialysis.  The result:  Dialysis became mainstream and its costs  spiraled onward and upward.  A second booster engine propelled dialysis-related healthcare costs still higher, as ESRD patients now lived long enough to become cardiac patients and/or insulin-dependent diabetics, with the costs attendant to managing those chronic conditions.

With the shift to chronic disease, the historian Charles Rosenberg has observed, “we no longer die of old age but of a chronic disease that has been managed for years or decades and runs its course.”[6] To which I add a critical proviso:  Chronic disease rarely runs its course in glorious pathophysiological isolation.  All but inevitably, it pulls other chronic diseases into the running.  Newly emergent chronic disease is collateral damage attendant to chronic disease long-established and well-managed.  Chronicities cluster; discrete treatment technologies leach together; medication needs multiply.

This claim does not minimize the inordinate impact – physical, emotional, and financial – of a single disease.  Look at AIDS/HIV, a “single” entity that brings into its orbit all the derivative illnesses associated with “wasting disease.”  But the larger historical dynamic is at work even with AIDS.  If you live with the retrovirus, you are at much greater risk of contracting TB, since the very immune cells destroyed by the virus are the ones that enable the body to fight the TB bacterium.  So we behold a resurgence of TB, especially in developing nations, because of HIV infection.[7]  And because AIDS/HIV is increasingly a chronic condition, we need to treat disproportionate numbers of HIV-infected patients for TB.  They have become AIDS/HIV patients and TB patients.  Worldwide, TB is the leading cause of death among persons with HIV infection.

Here in microcosm is one aspect of our health care crisis.  Viewed historically, it is a crisis of success that corresponds to a superabundance of long-term multi-disease management tools and ever-increasing clinical skill in devising and implementing complicated multidrug regimens.  We cannot escape the crisis brought on by these developments, nor should we want to.  The crisis, after all, is the financial result of a century and a half of life-extending medical progress.  We cannot go backwards.  How then do we go forward?  The key rests in the qualifier “one aspect.”  American health care is organismic; it is a huge octopus with specialized tentacles that simultaneously sustain and toxify different levels of the system.  To remediate the financial crisis we must range across these levels in search of more radical systemic solutions.


[1] C. Feudtner, Bittersweet: Diabetes, Insulin, and the Transformation of Illness (Chapel Hill: University of North Carolina Press, 2003), p. 36.

[2] My remarks on the development and impact of the Iron Lung and hemodialysis, respectively, lean on D. J. Rothman, Beginnings Count: The Technological Imperative in American Health Care (NY: Oxford University Press, 1997). For an unsettling account of the historical circumstances and market forces that have undermined the promise of dialysis in America, see Robin Fields, “‘God help you. You’re on dialysis’,” The Atlantic, 306:82-92, December, 2010. The article is online at http://www.theatlantic.com/magazine/archive/2010/12/-8220-god-help-you-you-39-re-on-dialysis-8221/8308/.

[3] L. M. Dunphy, “’The Steel Cocoon’: Tales of the Nurses and Patients of the Iron Lung, 1929-1955,” Nursing History Review, 9:3-33, 2001.

[4] D. J. Wilson, “Braces, Wheelchairs, and Iron Lungs: The Paralyzed Body and the Machinery of Rehabilitation in the Polio Epidemics,” Journal of Medical Humanities, 26:173-190, 2005.

[5] See S. E. Mawdsley, “’Dancing on Eggs’: Charles H. Bynum, Racial Politics, and the National Foundation for Infantile Paralysis, 1938-1954,” Bull. Hist. Med., 84:217-247, 2010.

[6] C. Rosenberg, “The Art of Medicine: Managed Fear,” Lancet, 373:802-803, 2009.  Quoted at p. 803.

[7] F. Ryan, The Forgotten Plague: How the Battle Against Tuberculosis was Won and Lost  (Boston:  Little, Brown, 1992), pp. 395-398, 401, 417.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

Medical Freedom, Then and Now

“A nation’s liberties seem to depend upon headier and heartier attributes than the liberty to die without medical care.” – Milton Mayer, “The Dogged Retreat of the Doctors” (1949)

Conservative Supreme Court justices who voice grave skepticism about the constitutionality of the Patient Protection and Affordable Care Act of 2010 would have been better suited to judicial service in the decades following the Revolutionary War.  Issues of health, illness, freedom, and tyranny were much simpler then.  Liberty, as understood by our founding fathers, operated only in the interlacing realms of politics and religion.  How could it have been otherwise?  Medical intervention did not affect the course of illness; it did not enable people to feel better and live longer and more productive lives.  With the exception of smallpox inoculation, which George Washington made mandatory among colonial troops in the winter of 1777, governmental intrusion into the health of its citizenry was nonexistent, even nonsensical.

Until roughly the eighth decade of the nineteenth century, you got sick, you recovered (often despite doctoring), you lingered on in sickness, or you died.  Antebellum (pre-Civil War) medicine relied on a variation of Galenic medicine developed in the eighteenth century by the Scottish physician William Cullen and his student John Brown.  According to Cullen’s system, all diseases were really variations of a single disease that consisted of too much tension or excitability (and secondarily too little tension or excitability) in the blood vessels.  Revolutionary-era and antebellum physicians sought to restore a natural balance by giving “overstimulated” patients (read: feverish, agitated, pain-ridden patients) large doses of toxic mercury compounds like calomel to induce diarrhea; emetics like ipecac and tobacco to induce vomiting; and by bleeding patients to the point of fainting (i.e., syncope).  It was not a pretty business.

Antebellum Americans did not have to worry about remedies for specific illnesses.  Except for smallpox vaccine and antimalarial cinchona tree bark (from which quinine was isolated in 1820), none existed.  Nor did they have to worry about long-term medical interventions for chronic conditions – bacterial infections, especially those that came in epidemic waves every two or three years, had no more opportunity to become chronic than diabetes, heart disease, or cancer.

Medical liberty, enshrined during the Jacksonian era, meant being free to pick and choose your doctor without any state interference.  So liberty-loving Americans picked and chose among calomel-dosing, bloodletting-to-syncope “regulars,” homeopaths, herbalists, botanical practitioners (Thomsonians), eclectics, hydropaths, phrenologists, Christian Scientists, folk healers, and faith healers.   State legislatures stood on the sidelines and applauded this instantiation of pure democracy.  By midcentury, 15 states had rescinded medical licensing laws; the rest gutted their laws and left them unenforced.  Americans were free to enjoy medical anarchy.

Now, mercifully, our notion of liberty has been reconfigured by two centuries of medical progress.  We don’t just get sick and die.  We get sick and get medical help, and, mirabile dictu, the help actually helps.  In antebellum America, deaths of young people under 20 accounted for half of all deaths nationally.  Now our children don’t die of smallpox, cholera, yellow fever, dysentery, typhoid, and pulmonary and respiratory infections before they reach maturity.  Diphtheria no longer stalks them during the warm summer months.  When they get sick in early life, their parents take them to the doctor and they almost always get better.  Their parents, on the other hand, especially after reaching middle age, don’t always get better.  So they get ongoing medical attention to help them live longer and more comfortably with chronic conditions like diabetes, coronary heart disease, inflammatory bowel disease, Parkinson’s, and many forms of cancer.

When our framers drafted the Constitution, the idea of being free to live a productive and relatively comfortable life with long-term illness didn’t compute.  You died from diabetes,  cancer, bowel obstruction, neurodegenerative disease, and any major infection (including, among young women, the infection that often followed childbirth).  A major heart attack usually killed you.  You didn’t receive dialysis and possibly a kidney transplant when you entered kidney failure.  Major surgery, performed on the kitchen table if you were of means or in a bacteria-infested, dimly lit, unventilated public hospital if you weren’t, was all but nonexistent because it invariably resulted in massive blood loss, infection, and death.

So, yes, our framers intended our citizenry to be free of government interference, including an obligatory mandate to subsidize health care for millions of uninsured and underserved Americans.  But then the framers never envisioned a world in which freedom could be safeguarded and extended by access to expert care that relieved suffering, effected cure, and prolonged life.  Nor could they envision the progressive income tax, compulsory vaccination, publicly supported clinics, mass screening for TB, diabetes, and  syphilis, and Medicare.  Throughout the antebellum era, when regular physicians were reviled by the public and when neither regulars nor “alternative” practitioners could stem the periodic waves of cholera, yellow fever, and malaria that decimated local populations, it mattered little who provided one’s doctoring. Many, like the thousands who paid $20.00 for the right to practice Samuel Thomson’s do-it-yourself botanical system, chose to doctor themselves.

Opponents of the Affordable Care Act seem challenged by the very idea of progress.  Their conception of liberty invokes an eighteenth-century political frame of reference to deprive Americans of a kind of liberty associated with a paradigm shift that arose in the 1880s and 1890s.  It was only then that American medicine began its transition to what we think of as modern medicine.  Listerian antisepsis (and then asepsis); laboratory research in bacteriology, immunology, and pharmacology; laboratory development of specific remedies for specific illnesses; implementation of public health measures informed by bacteriology; modern medical education beginning with the opening of the Johns Hopkins School of Medicine in 1893; and, yes, government regulation to safeguard the public from incompetent practitioners and toxic, sometimes fatal, medications – all were part of the transition.

“We hold these truths to be self-evident,” Jefferson begins the second paragraph of the Declaration of Independence, “that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.”  What Jefferson didn’t stipulate – what he couldn’t stipulate in his time and place – was the hierarchical relationship among these rights.  Now, in the twenty-first century, we are able to go beyond an eighteenth-century mindset in which “life, liberty, and the pursuit of happiness” functions as a noun phrase whose unitary import derives from the political tyrannies of King George III and the British Parliament.  Now we can place life at the base of the pyramid and declare that quality of life is indelibly linked to liberty and the pursuit of happiness.  To the extent that quality of life is diminished through disease and dysfunction, liberty and the pursuit of happiness are necessarily compromised.  In 2012, health is life; it is life poised to exercise liberty and pursue happiness to the fullest.

Why is it unconstitutional to obligate all citizens to participate in a health plan, either directly or through a mandate, that safeguards the right of people to efficacious health care regardless of their financial circumstances, their employment status, and their preexisting medical conditions?  What is it about the term “mandate” that is constitutionally questionable?  When you buy a house in this country, you pay local property taxes that support the local public schools.  (If you’re a renter, your landlord pays your share of the tax out of your rent.)  The property tax functions like the mandate:  It has a differential financial impact on people depending on whether they directly benefit from the system sustained by the tax.  To wit, you pay the tax whether or not you choose to send your children to the public schools, indeed, whether or not you have children.  You are obligated to subsidize the public education of children other than your own because public education, for all its failings, has been declared a public good by the polity of which you are a part.
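For readers who like the analogy laid bare, here is a minimal sketch in Python of its parallel structure – a toy model only, with a hypothetical tax rate, penalty, and dollar figures of my own invention:

```python
# Toy model of the analogy: a school property tax and an insurance
# mandate both bind whether or not the payer draws a direct benefit.
# All rates and dollar figures below are hypothetical.

def school_tax_owed(assessed_value: float, uses_public_schools: bool) -> float:
    """Property tax supporting public schools: owed either way."""
    SCHOOL_TAX_RATE = 0.01  # hypothetical 1% levy
    # Note that uses_public_schools never enters the calculation.
    return assessed_value * SCHOOL_TAX_RATE

def mandate_payment(premium: float, buys_insurance: bool) -> float:
    """Under a mandate you pay the premium or a penalty, never nothing."""
    PENALTY = 695.00  # hypothetical flat penalty
    return premium if buys_insurance else PENALTY

# A childless homeowner and a parent of three owe the same school tax:
assert school_tax_owed(250_000, uses_public_schools=False) == \
       school_tax_owed(250_000, uses_public_schools=True)
```

The point the code makes is the point of the paragraph: in neither case does the system ask whether you consume the good it sustains.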

It is inconceivable that the founding fathers would have found unconstitutional a law that extended life-promoting health care to the roughly 50 million Americans who lack health insurance.  The founding fathers declared that citizens – well, white, propertied males, at least – were entitled to life consistent with the demands and entitlements of representative democracy; their pledge, their Declaration, was not in support of a compromised life that limited the ability to fulfill those demands and enjoy those entitlements.

Of course, adult citizens may repudiate mainstream health care on the basis of their own philosophical or religious  predilections.  Fine.  Americans who wish to pursue health outside the medical mainstream or, in the manner of medieval Christians, to disavow corporeal well-being altogether, are free to do so.  But they should not be allowed to undermine social and political arrangements, codified in law, that support everyone else’s right to pursue life and happiness through twenty-first century medicine.

The concept of medical freedom dominated the antebellum period and resurfaced during the early twentieth century, when compulsory childhood vaccination and Oklahoma Senator Robert Owen’s proposed legislation to create a federal department of public health spurred the formation of the Anti-Vaccination League of America, the American Medical Liberty League, and the National League for Medical Freedom.   According to these groups, medical freedom was incompatible not only with compulsory vaccination, but also with the medical examination of school children, premarital syphilis tests, and municipal campaigns against diphtheria.  In the 1910s, failure to detect and treat contagious bacterial disease was a small price to pay for freedom from what medical libertarians derided as “allopathic knowledge.”   These last gasps of the Jacksonian impulse were gone by 1930, by which time it was universally accepted that scientific medicine was, well, scientific, and, as such, something more than one medical sect among many.

After World War II, when the American Medical Association mounted its holy crusade against President Harry Truman’s proposal for national health care, “medical liberty” came into vogue once more, though its meaning had changed.  In antebellum America and again in the 1910s, it signified freedom to cast off the oppressive weight of “regular” medicine and pick and choose among the many alternative sects.  In the late 1940s, it signified freedom from federally funded health care, which would contaminate the sacrosanct doctor-patient relationship.  For the underserved, such freedom safeguarded the right to remain untreated.  The AMA’s legerdemain drew ridicule from many, the prominent journalist Milton Mayer among them.  “Millions of Americans,” Mayer wrote in Harper’s in 1949, “geographically or economically isolated, now have access to one doctor or none.  The AMA would preserve their present freedom of choice.”  In 1960, the medical reporter Selig Greenberg mocked medical free choice as a “hoary slogan” based on “the fatuous assumption that shopping around for a doctor without competent guidance and paying him on a piecemeal basis somehow guarantees a close relationship and high-quality medical care.”[1]

Now the very notion of medical freedom has an archaic ring.  We no longer seek freedom from the clutches of mainstream medicine; now we seek freedom to avail ourselves of what mainstream medicine has to offer.  At this singular moment in history, in a fractionated society represented by a bitterly divided Congress, access to health care will be expanded and safeguarded, however imperfectly, by the Affordable Care Act.  Those who opt out of the Act should pay a price, because they remain part of a society committed to health as a superordinate value without which liberty and the pursuit of happiness are enfeebled.  To argue about whether the price of nonparticipatory citizenship in the matter of health care may be called a tax but not a mandate is obfuscating wordplay.  And the health and well-being of we the people should not be a matter of wordplay.


[1] Milton Mayer, “The Dogged Retreat of the Doctors,” Harper’s Magazine, 199:25-37, 1949, quoted at pp. 32, 35; Selig Greenberg, “The Decline of the Healing Art,” Harper’s Magazine, 221:132-137, 1960, quoted at p. 134.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.

“Socialized Medicine,” anyone?

The primary season is upon us, which means it’s time for Republicans to remind us of the grave perils of “socialized medicine.”  One-time candidate Michele Bachmann accuses Mitt Romney  of “put[ting] into place socialized medicine” when governor of Massachusetts.  Newt Gingrich, rejecting Romney’s defense of the Massachusetts law as something other than socialist, declares that  “Individual and employer mandates are bad policy leading down the road to socialized medicine, whether the mandates are adopted at the federal level or the state level.”  Ron Paul, not to be outdone, derides our health care system as “overly corporate and not much better than a socialized health care system.”  Rick Santorum mournfully announces that socialized medicine is “exactly where we’re headed.”  And then of course there is that noncandidate and subtle political thinker Sarah Palin, who apparently tolerated Canadian single-payer health care well enough when it was available to her and her family members, but never fails to lambast the health care reform bill of 2010 (“Obamacare”) as the great evil, the capitulation to socialist medicine that will lead us straight into the bowels of socialist hell.

As a historian of ideas, I am confused.  What exactly do these Republicans mean by “socialized medicine” and, more generally, by “socialism”?  Are they referring to the utopian socialism of the early nineteenth century that arose in the wake of the French Revolution, the socialism of Charles Fourier, Henri Saint-Simon, and Robert Owen?  Are they referring to Marxist socialism and, if so, which variant?  The socialism of the early Marx, the Marx of the Economic and Philosophic Manuscripts of 1844 and The German Ideology, or the socialism of the late Marx, the Marx of Das Kapital?  It is difficult to imagine the candidates rejecting the conservative socialism of Otto von Bismarck, the German Iron Chancellor who, during the 1870s and 1880s, wed social reform to a conservative vision of society.  But then again they might:  Bismarck’s reforms, which included old-age pensions, accident insurance, medical care, and unemployment insurance, paved the way for the triumph, despite Bismarck’s own antisocialist laws, of Germany’s Social Democratic Party in the early twentieth century.

Perhaps the Republicans mean to impugn a broader swath of post-Marxist reformist socialism (also termed “democratic socialism”).  Does their antipathy take in the British Liberal welfare reforms of David Lloyd George, which in the decade before World War I laid the foundations of Britain’s social welfare state?  After all, Britain’s National Insurance Act of 1911 provided for health insurance, and many of Lloyd George’s acts aimed at the health and well-being of British children.  Child labor laws, medical inspection of schools, and medical care for school children via free school clinics were among them.  Certainly all the candidates would repudiate FDR’s New Deal.  Depression or no, it was a medley of socialist programs that culminated in a social security program that workers could not opt out of.  But then again, perhaps the candidates do not understand socialism as the cumulative protections of democratic socialism.  Maybe the socialism they impugn is only hard-core late Marxism and its transmogrification after 1917 into Soviet Marxism-Leninism, both of which now slumber peacefully in the dustbin of history.  I don’t know.  Does anyone?  Maybe some of these candidates only see red when contemplating employment of physicians by the state.

When it comes to “socialized medicine,” just how far do the Republicans seek to turn back the clock?  Does more than a century of social welfare reform have to go?  Certainly they must repudiate Medicare and Medicaid, whose passage in 1965 was, with respect to the elderly and indigent, socialism pure and simple; for the AMA these programs sounded the death knell of democracy.  But why stop there?  If they really want to root out medical socialism, they can hardly condone Medicare’s precursor, the Kerr-Mills Act of 1960, which made federal matching funds available to states that underwrote the costs of health care for their indigent elderly.

And what of the FDA, that competition-draining, creativity-stifling offspring of Rooseveltian socialist thinking?  Who is the government to tell medical equipment manufacturers which devices they may sell to doctors and the public?  The Medical Device Amendments of 1976 to the Federal Food, Drug and Cosmetic Act of 1938 would have to go.  The more than 700 deaths and 10,000 injuries attributed to defective cardiac pacemakers and leaky artificial heart valves by the Cooper Committee in 1970, not to mention the 8,000 women injured (some left sterile) by their faulty contraceptive Dalkon Shields – this was a small price to pay for an open marketplace that encouraged and rewarded innovation.  The 1962 Kefauver-Harris Amendments to the same 1938 Act, which arose in the wake of the thalidomide tragedy of 1961, would probably fare no better.  After all, these amendments dramatically expanded the FDA’s authority over prescription drugs by requiring drug companies to conduct preclinical trials of toxicity and then present the FDA with adequate and well-controlled studies of drug effectiveness before receiving regulatory approval.  I wonder if principled antisocialists can even abide the FDA-enforced distinction between prescription-only and nonprescription drugs, as codified in the 1951 Durham-Humphrey Amendment to the 1938 Act.  Before then, Americans did just fine self-medicating without government interference.  Sure they did.  Citizens of the late 1930s could be relied on to decide when to take the toxic sulfonamides (which depressed white cell counts and led to anemias), just as citizens of the late 1940s knew enough pharmacology and bacteriology to decide when and in what dosages to use the potent antibiotic “wonder drugs,” all of which could be obtained over the counter or directly from pharmacists until the 1951 Amendment.

But why stop there?  Perhaps Republican political philosophy obliges the candidates to repudiate the Federal Food, Drug and Cosmetic Act in toto.  After all, it authorized the FDA, a federal agency, to review the safety and composition of new drugs before authorizing their release.  Yes, the legislation arose in the wake of 106 deaths the preceding year – many children among them – from sales of the Tennessee drug firm S. E. Massengill’s Elixir Sulfanilamide.  The Elixir was a sweet-tasting liquid sulfa drug that – unbeknown to anyone outside the company – used toxic diethylene glycol (a component of brake fluid and antifreeze) as its solvent.  But, hey, this was free-market capitalism in action.  Sure, hundreds more would have died if all 239 FDA inspectors hadn’t tracked down 234 of the 240 gallons of the stuff already on the market.  But is this really any worse than having 10,000 or so European and Japanese kids grow up with flippers instead of arms and hands because their pregnant mothers, let down by the regulatory bodies of their own countries, ingested Chemie Grünenthal’s sedative thalidomide to control first-trimester morning sickness?  A free medical marketplace has its costs, dead kids, deformed kids, and sterile women among them.  Perhaps, in the Republican vision of American health care, this marketplace had every right to bestow on Americans their own generation of thalidomide babies, not just the small number whose mothers received the drug as part of the American licensee’s advance distribution of samples to 1,267 physicians.

If we’re going to turn back the clock and recreate a Jacksonian medical universe free of intrusive, expensive, innovation-stifling, rights-abrogating big government, let’s go the whole nine yards.  Let’s repudiate the Pure Food and Drugs Act of 1906, which compelled drug companies to list their products’ ingredients on the label.  Sure, prior to the act most remedies aimed at children were laced with alcohol, cocaine, opium, and/or heroin, but was this so bad?  At least these tonics, unlike Elixir Sulfanilamide, didn’t kill the kids, and the 1906 Act did put us on the path to government overregulation.  And, anyway, it’s up to parents, not the federal government, to figure out what their kids ingest.  Let them do their own chemical analyses (or better yet, contract unregulated for-profit labs to do the analyses for them) and slug it out with the drug companies.

And, while we’re at it, let’s roll back the establishment in 1902 of the brazenly socialistic Public Health and Marine Hospital Service, with its “big government” division of pathology and bacteriology.  Okay, it did a few things Republican candidates would likely applaud, like preventing incoming immigrants from coming ashore with infectious diseases like cholera, yellow fever, and bubonic plague.  But the Service couldn’t leave well enough alone. With its federal budget and laboratory of government employees, it went on to identify infectious diseases like typhoid fever, tularemia, and undulant fever.  Then, during World War I, after its name had been shortened to the Public Health Service, it isolated the organisms responsible for epidemic meningitis and developed tetanus antitoxin and antityphoid vaccine.  But, hey, private enterprise of the time would have addressed these issues better and more cost effectively, right?  And it wouldn’t have placed us on the road to socialist perdition.

Compulsory vaccination for smallpox and diphtheria?  State laws that, beginning in 1893, required public schools to exclude from enrollment any student who could not present proof of vaccination?  Forget it.  States and municipalities had no right to intrude forcibly into the lives of children with their public health inspectors, followed by school physicians with their vials of toxin-antitoxin.  What was this if not socialist medicine, with the state abrogating the rights of parents and school principals alike – the former’s right to keep their children unvaccinated, that they might contract infection and pass it on to classmates and family members; the latter’s right to keep school enrollment as high as possible without government interference?

Here’s the point of this exercise in conjecture:  If we’re going to have a national debate about health care, then our candidates must cease and desist from using evocative words that incite fear and loathing but mean nothing because they mean anything and everything.  You can’t have a debate without people capable of debate, which is to say, people who grasp ideas as something other than sound bites that mobilize primitive emotions.  Debaters make arguments and cite evidence that support them; they don’t throw out words and wait for a primal scream.

It would be nice if we had presidential candidates willing and able to explain their take on specific ideas and then wrestle with the applicability of those ideas to the real-life problems of specific groups of Americans.  It would be nicer still if all this explaining and wrestling and applying were informed by the lessons of history.  I believe we will have such debates shortly after hell freezes over.  Therefore, I offer my own ideational stimulus package to inch us toward this goal.  I propose an Act of Congress that proscribes the use of certain words and phrases among all presidential candidates.  Each time a candidate uses a proscribed word or phrase in a campaign speech, a TV commercial, or an internet posting, he or she, if nominated, forfeits one electoral vote earned in the general election.  In the realm of health care, “socialism,” “socialist medicine,” “big government,” “death panels,” “overregulation,” “the people,” and “the American way” would top the list.  Such terms cannot be part of a national debate because they do not promote reasoned exchange.  They have emotional resonance but nothing else.  In fact, they preclude debate by allowing the word or phrase in question to carry an implicit meaning that reaches consciousness only as a gut-churning abstraction.  Gut-churning abstractions, be it noted, tend to be historically naïve and empirically empty.
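And since the Act as proposed is, at bottom, a simple counting rule, here is a minimal sketch in Python of how its penalty might be tallied – whimsical, of course, and with the function names, sample speech, and vote total entirely hypothetical; only the phrase list comes from the proposal above:

```python
# Whimsical sketch of the proposed Act: each use of a proscribed
# phrase costs the nominee one electoral vote. The phrase list is
# the one proposed above; everything else here is hypothetical.

PROSCRIBED = [
    "socialism", "socialist medicine", "big government",
    "death panels", "overregulation", "the people", "the american way",
]

def forfeited_votes(campaign_text: str) -> int:
    """Count every occurrence of every proscribed phrase."""
    text = campaign_text.lower()
    return sum(text.count(phrase) for phrase in PROSCRIBED)

def adjusted_electoral_votes(votes_won: int, campaign_text: str) -> int:
    """Electoral votes remaining after the Act's penalty, floored at zero."""
    return max(0, votes_won - forfeited_votes(campaign_text))

stump_speech = ("Obamacare is socialized medicine, big government "
                "at war with the American way!")
print(adjusted_electoral_votes(270, stump_speech))  # prints 268
```

Two proscribed phrases, two electoral votes gone; the arithmetic, at least, is beyond dispute.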

So I end where I began:  What exactly do our Republican candidates mean by “socialized medicine” other than a global repudiation of the health care reform bill of 2010?  Do they mean that medicine was just socialist enough before the bill passed, but that specific components of the bill – like preventing insurers from denying coverage to people with preexisting conditions – take the country to a point of socialist excess serious enough to abrogate the new protections the bill affords uninsured and underinsured Americans?  Or perhaps American health care, even before the legislation, was simply too socialist, so that it becomes incumbent on our elected leaders to turn back the clock, undo past legislative achievements, reverse specific governmental policies, and disembowel certain regulatory agencies.  But if the latter, exactly which laws and policies and agencies must be sacrificed on the altar of a free and open medical marketplace?  I don’t know what the Republican candidates have in mind, but I’m all ears – once they stop lobbing word grenades and actually make an argument.

Copyright © 2012 by Paul E. Stepansky.  All rights reserved.