The Problem with the New Patient Autonomy

The neurology team shuffled single-file into the patient’s small room. The patient, probably in his 30s, had black hair, brown eyes, and an unsettling
demeanor. He glared icily at us from his bed, the blankets covering him up to the neck. His pale brow furrowed even more noticeably as all nine of us
intruded on his privacy. In a scene out of a futuristic movie, EEG (electroencephalogram) leads on his scalp connected his
head via wires to a screen showing squiggly lines representing brain activity; a small video camera attached to the screen monitored the patient’s movement. He had come to the hospital overnight after falling and shaking, a story worryingly suggestive of a seizure.


An electroencephalogram records neuronal signals in the brain and is used by neurologists to diagnose seizure activity. When a patient has a seizure, which can manifest as full-body convulsions, a family member in the room pushes a button on the machine that starts the video camera recording the patient’s movements. Neurologists then examine the movements in the video and the waves tracked by the EEG to see whether they are consistent with seizures.

There are different kinds of seizures, depending on which part of the brain is affected. Symptoms range from a loss of attention for a few seconds (absence seizures) to the full-body convulsions we typically associate with seizures (generalized tonic-clonic seizures). Different conditions can cause these events — for instance, high fever in childhood (febrile seizures) and brain tumors can both induce
hyper-excitability in the brain. If the seizure does not stop, a patient can enter status epilepticus, a state of prolonged epileptic activity that can cause permanent damage.

Having a seizure, then, can be very serious business. Physicians must perform a medical work-up to ensure that the patient is not at great risk. In
addition to an EEG, our patient’s neurologist ordered labs and a CT scan of the brain. However, these tests were all negative. Even overnight, when the patient and his mother both claimed that the patient seized, there were no abnormal
electrical discharges on the EEG.

Indeed, not all physical manifestations of seizures indicate the presence of legitimate seizure activity in the brain, which is why the EEG is such a valuable diagnostic
tool. It turns out that certain patients may believe they are having seizures when they are actually having pseudoseizures or psychogenic non-epileptic seizures. To most observers,
pseudoseizures look exactly like generalized tonic-clonic seizures. Patients shake, tense up, and flail violently and frighteningly. However, certain
differences distinguish the two. During pseudoseizures, EEGs show no abnormal brain activity, patients do not bite their tongues (as can happen during real seizures), and patients do not respond to anti-epileptic or anti-seizure medications. It’s not that patients undergoing pseudoseizures aren’t sick; it’s just that their sickness has nothing to do with neurological pathology or seizure activity.

Frequently, patients who experience pseudoseizures do have underlying psychiatric disorders, like anxiety or PTSD, but not always. Other risk factors and
triggers include interpersonal conflicts, childhood abuse, and past sexual abuse. Seemingly, then, a pseudoseizure is a symptom of a psychiatric illness. Another factor that distinguishes pseudoseizures is that patients are conscious during the events. I’ve seen one attending push down hard on a patient’s hand during a pseudoseizure while telling the patient he was going to do so. The
patient suddenly awoke before the attending pushed hard enough to hurt him. (If the patient had been having a generalized seizure, he would not have felt anyone pressing on his hand, nor would he have heard the verbal warning.)

In explaining the concept of pseudoseizures to a patient who has them, one must take great care. If a physician tells a patient, “these are not real — it
is in your head, so grow up,” no one will benefit. Psychiatric illness cannot be fixed with a stern rebuke. One must explain that these are not
seizures and that it will take time to fix whatever is happening, but anti-seizure medications will not help. (While there are no medications for pseudoseizures, behavioral therapy can be efficacious.) Through this
conversation, one hopes the patient will seek help from a psychiatrist.

The patient we saw that morning did have pseudoseizures rather than seizures, as the EEG and the video of his body movements indicated. Additionally, and
tragically, he had a horrific childhood and had been physically abused by his father. The attending explained all this very gently in the course of nearly
twenty minutes. When he finished, the patient and his mother both burst out indignantly: How could this physician ignore the symptoms? How could he be so
callous as to dismiss this disease? Why wouldn’t he prescribe medications? Why did he not order an MRI of the patient’s brain (an expensive type of imaging) to further investigate the
cause of this? In the patient’s words: “I’m not believing any of this bullshit.” Although the physician calmly tried to explain everything again, the patient
refused to listen and eventually the team left to continue rounding. Still enraged, the patient called the customer-service department of the
hospital and continued to argue with the team throughout the day. Eventually, after numerous disputes, our attending physician caved (and who could blame him given that there were nineteen other sick patients on the service who needed his attention?): the patient got what
he wanted, an MRI study, which showed nothing abnormal.

Unfortunately, this
is a weekly if not a daily experience in hospitals across the country. Patients frequently make inappropriate requests of physicians, which are subsequently granted. What has brought our system to the point where a patient issues orders and the
physician must about-face from a medically sensible course?

*   *   *

In ancient times, patients had very little, if any, autonomy, as R. Kaba and P. Sooriakumaran point out in their 2007 article “The Evolution of the Doctor-Patient Relationship” in the International Journal of Surgery. Doctors decided what was good for patients and what wasn’t. There was no informed consent — a doctor told a
patient what the patient needed and expected him or her to comply.

This interaction may have evolved from the ancient Egyptian “priest-supplicant” relationship, in which magicians and priests with access to gods conjured up
cures for various medical disorders. The patient, without a modicum of holiness, had to supplicate to the priest, or father figure, in
order to get well. Even for the Greeks, who developed slightly more scientific ways of approaching disease and more ethical ways of approaching the patient
(see the Hippocratic Oath), the doctor was a paternalistic figure granting “hard-line
beneficence” to the patient. All this was akin to a parent-child relationship, a model for the doctor-patient interaction that was considered normal even in the
mid-twentieth century, as I wrote in my essay on vaccines for The New Atlantis:

The unchecked authority of medical experts in those days allowed doctors to trammel the rights of both patients and research subjects. Many of those whose
research laid the foundations for modern vaccines, such as Jonas Salk, Maurice Hilleman, and Stanley Plotkin, tested their vaccines on mentally
retarded children. Starting in the mid-1950s and continuing for about fifteen years, the infectious-disease doctor Saul Krugman fed hepatitis virus to
severely disabled residents of the Willowbrook State School in order to study the virus. The enshrinement of patient autonomy in the 1970s was in part a
response to these very serious ethical problems.

Recently, though, things have changed:

Over the past few decades, however, the boat has tipped to the other side. Now, patients rate doctors online at sites like Healthgrades or Yelp or Vitals
the same way one rates a restaurant. This puts pressure on physicians to give patients what they want rather than what they need in order to garner more
business. The government bases Medicare reimbursements, in part, on patient satisfaction scores, putting further pressure on physicians to make patients
happy. [In fact, patient satisfaction score surveys play a significant role in determining how much money hospitals receive from Medicare.] Dr. Richard Smith, former editor of the British Medical Journal, has explained that the increasing power of patients is bringing us to a point where
“there is no ‘truth’ defined by experts. Rather there are many opinions based on very different views and theories of the world.” If a patient wants a test
or procedure, he or she can have it. The same goes for refusing it, even against the advice of doctors.

This modus operandi of allowing patient satisfaction to dictate medical care is becoming more and more common. It is even encouraged. Kai Falkenberg, a
journalist, notes in a must-read 2013 article in Forbes,

Nearly two-thirds of all physicians now have annual incentive plans, according to the Hay Group, a Philadelphia-based management consultancy that surveyed
182 health care groups. Of those, 66% rely on patient satisfaction to measure physician performance; that number has increased 23% over the past two years.

And that’s not all, according to her article. These metrics encourage physicians to do things that are not always in the best interests of the patient:

In a recent online survey of 700-plus emergency room doctors by Emergency Physicians Monthly, 59% admitted they increased the number of tests they
performed because of patient satisfaction surveys. The South Carolina Medical Association asked its members whether they’d ever ordered a test they felt
was inappropriate because of such pressures, and 55% of 131 respondents said yes. Nearly half said they’d improperly prescribed antibiotics and narcotic
pain medication in direct response to patient satisfaction surveys.

Satisfying patients and practicing good medicine are not always the same. Data on this abounds. A 2013 study by physicians at Johns Hopkins found little evidence that patient satisfaction corresponds to the quality of surgical care. Furthermore, in a 2012 study,
physicians at UC Davis found that increased patient satisfaction scores were associated with higher health care expenditures and even increased
mortality.

Of course, I’m not arguing against patient autonomy or patient satisfaction. People ought to have a voice in their healthcare. But attaching excessive importance to patient satisfaction scores stymies good medicine and sows confusion among patients, who don’t necessarily know what is and isn’t medically appropriate, putting them at risk. This is borne out in the story of our pseudoseizing patient, and in the data from studies. If we,
as physicians, merely do what the patient asks of us, we are no longer practicing medicine; we are technicians for hire, something I pointed out in a previous post on the purpose of medicine.
Evidently, then, the push for patient autonomy can hurt both patients and doctors.

Indeed, the solution is not to incentivize the physician to give the patient what he or she wants. Nor is it to force the patient to do only what the
physician demands. What we need is balance. As suggested in a 1996 article in the Annals of Internal Medicine, what we need is not a consumer model but a model that promotes “an intense collaboration between patient and physician so that patients can autonomously
make choices that are informed by both the medical facts and the physician’s experience.” Doctors don’t have a monopoly on medical truth, but they do have years of education and experience, and they must help patients make a reasoned choice.

Physicians need to provide patients with information, evidence, and guidance. They need to negotiate with patients, just as patients need to negotiate with
doctors. And sometimes physicians need to draw a hard line. If a patient demands something the physician is not comfortable with, or if the “chosen course violates the physician’s fundamental values” despite negotiations and
conversations, “he should inform the patient of that fact and perhaps help the patient find another physician.”

Yes, final choices belong to patients and not doctors. But both must invest a lot in order to allow patients to make informed decisions. We should not let the mistaken primacy of satisfaction surveys and radical autonomy obstruct this negotiation — there is more at stake for all of us than just an
extraneous MRI.

Killer Robots: How could a ban be verified?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Here’s my latest dispatch from the second major diplomatic conference on Lethal Autonomous Weapons Systems, or “killer robots” as the less pretentious know them. (A UN employee, for whom important-sounding meetings are daily background noise, approached me in the cafeteria to ask where she could get a “Stop Killer Robots” bumper sticker like the one I had on my computer, and said she’d have paid no attention to the goings-on if that phrase hadn’t caught her eye.) The conference continued yesterday with what those who make a living out of attending such proceedings like to describe as “the hard work.”

Wishful Thinking on Strategy

Expert presentations in the morning session centered on the reasons why militaries are interested in autonomous systems in general and autonomous weapons systems in particular. As Heather Roff of the International Committee for Robot Arms Control (ICRAC) put it, this is not just a matter of assisting or replacing personnel and reducing their exposure to danger and stress; militaries are also pursuing these systems as a matter of “strategic, operational, and tactical advantage.”

Roff traced the origin of the current generation of “precision-guided” weapons to the doctrine of “AirLand Battle” developed by the United States in the 1970s, responding then to perceived Soviet conventional superiority on the European “central front” of the Cold War. Similarly, Roff connected the U.S. thrust toward autonomous weapons today with the doctrine of “AirSea Battle,” responding to the perceived “Anti-Access/Area Denial” capabilities of China (and others).

Some background: The traditional American way of staging an overseas intervention is to park a few aircraft carriers off the shores of the target nation, from which to launch strikes on land and naval targets, and to mass troops, armor, and logistics at forward bases in preparation for land warfare. But shifts in technology and economic power are undermining this paradigm, particularly with respect to a major power like China, which can produce thousands of ballistic and cruise missiles, advanced combat aircraft, mines, and submarines. Together, these weapons are capable of disrupting forward bases and “pushing” the U.S. Navy back out to sea. This is where the AirSea Battle concept comes in. As first articulated by military analysts connected with the Center for Strategic and Budgetary Assessments and the Pentagon’s Office of Net Assessment, the AirSea Battle concept is based on the notion that at the outset of war, the United States should escalate rapidly to massive strikes against military targets on the Chinese mainland (predicated on the assumption that this will not lead to nuclear war).

Now, from the narrow perspective of a war planner, this changing situation may seem to support a case for moving toward autonomous weapon systems. For Roff, however, the main problems with this argument are arms races and proliferation. The “emerging technologies” that underlie the advent of autonomous systems are information technology and robotics, which are already widely proliferated and dispersed, especially in Asia. Every major power will be getting into this game, and as autonomous weapon systems are produced in the thousands, they will become available to lesser powers and non-state actors as well. Any advantages the United States and its allies might gain by leading the world into this new arms race will be short-term at best, leaving us in an even more dangerous and unstable situation.

Autonomous vs. “Semi-Autonomous”

Afternoon presentations yesterday focused on how to characterize autonomy. (I have written a bit on this myself; see my recent article on “Killer Robots in Plato’s Cave” for an introduction and further links.) I actually like the U.S. definition of autonomous weapon systems as simply those that can select and engage targets without further human intervention (after being built, programmed, and activated). The problems arise when you ask what it means to “select” targets, and when you add in the concept of “semi-autonomous” weapons, which are actually fully autonomous except they are only supposed to attack targets that a human has “selected.” I think this is like saying that your autonomous robot is merely semi-autonomous as long as it does what you wanted — that is, it hasn’t malfunctioned yet.

I would carry the logic of the U.S. definition a step further, and simply say that any system is (operationally) autonomous if it operates without further intervention. I call this autonomy without mystery. It leads to the conclusion that, actually, what we want to do is not to ban everything that is an autonomous weapon, but simply to avoid a coming arms race. This can be done by presumptively banning autonomous weapons, minus a list of exceptions for things that are too simple to be of concern, or that we want to allow for other reasons.

Implementing a ban of course raises other questions, such as how to verify that systems are not capable of operating autonomously. This might seem a very thorny problem, but I think it makes sense to reframe it: instead of trying to verify that systems cannot operate autonomously, we should seek to verify that weapons are, in fact, being operated under meaningful human control. For instance, we could ask compliant states to maintain encrypted records of each engagement involving any remotely operated weapons (such as drones). About two years ago, I, along with other ICRAC members, produced a paper that explores this proposal; I would commend it to others who might have felt frustrated by some of the confusion and babble during the conference yesterday afternoon.
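To make the idea a bit more concrete, here is a minimal sketch, in Python, of what tamper-evident engagement logging could look like. It is purely illustrative and not the scheme described in the ICRAC paper; the field names, the placeholder key, and the use of a simple HMAC hash chain are all assumptions made for the sake of the example. A real system would add proper key management, encryption of the records themselves, and a disclosure protocol for inspections.

```python
# Illustrative sketch only: a tamper-evident log of engagement records.
# Field names and the key below are hypothetical, not from the ICRAC paper.
import hashlib
import hmac
import json
from datetime import datetime, timezone

SECRET_KEY = b"state-held-secret-key"  # placeholder; real systems need proper key management


def append_engagement(log, operator_id, platform_id, target_description):
    """Append one engagement record, chained to the hash of the previous record."""
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator_id": operator_id,          # evidence of a human in the loop
        "platform_id": platform_id,
        "target_description": target_description,
        "prev_hash": prev_hash,              # links records into a chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    log.append(record)
    return record


def verify_chain(log):
    """Recompute every HMAC and check the chain; any edit or deletion breaks it."""
    prev_hash = "0" * 64
    for record in log:
        body = {k: v for k, v in record.items() if k != "record_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, record["record_hash"]):
            return False
        prev_hash = record["record_hash"]
    return True


if __name__ == "__main__":
    log = []
    append_engagement(log, "op-117", "uav-42", "vehicle, hypothetical grid ref")
    append_engagement(log, "op-117", "uav-42", "structure, hypothetical grid ref")
    print("chain intact:", verify_chain(log))
```

The point of the chain is simply that an auditor who later receives the log (and the key) can detect whether any engagement record was altered, inserted, or deleted after the fact, which is the kind of evidence of meaningful human control a verification regime would want.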

Autonomy and Responsibility

The National Intelligence Council has just published one of its periodic forays into thinking about the future: Global Trends 2030: Alternative Worlds.
As even the title suggests, the report is
full of carefully qualified projections and scenarios, often noting the ambiguity
of technological development—the truism that the same technology can produce
both good and bad outcomes depending on how it is deployed. In its relatively
brief thematic discussion of human augmentation, however, there is really nothing
said about specific downsides of augmentation technologies beyond noting the
likelihood of their inegalitarian distribution over the next 15-20 years, a
problem which “may require regulation.” Instead, the passage closes with the
sentence, “Moral and ethical challenges to human augmentation are inevitable.”

Apparently, while it is helpful to anticipate what
enhancement technologies might allow in the future, there is nothing to be
gained by trying to anticipate what the moral and ethical objections to them
might be. Of course, it would be wrong not to acknowledge that such objections
will exist, but it is hardly worthwhile to actually attempt to think about
them.

This largely symbolic bow to ethics is common enough in such
reports, perhaps only to be expected. It is one of those moments we have noted
repeatedly at Futurisms, where the debate over human enhancement meets up with
our culture’s democratic libertarianism and moral relativism. Plainly, we don’t
think this outlook is a sound footing upon which to meet the undeniable
challenges of the future.

Indeed, we are hardly short on reasons to think we ought to
flee whenever possible from thinking seriously about moral distinctions, in the
name of protecting autonomy or free choice. Our decades-long social experiment
of eliminating “stigmas” and allowing people more and more to do their own
things has contributed to the weakening and impoverishment of families and
communities. Belief in what is now being called “neurodiversity” has been a factor in making it harder to get the mentally ill the help they
need. If the latest election is any indication, the progressives among us count
it a boon when one
more casual method to escape from reality is legalized
— presumably
eventually to be used, like the others, to shore up precarious state finances.

Periodically, some tragic event reminds us of the cost of our
laissez-faire morality, and an increasingly ritualized period of introspective
mourning will commence, one which probably reflects less well on our ethical sensitivity
than we might like to think, even though it serves its cathartic function and
we soon return to our nonjudgmental business as usual.

And of course that business as usual is not so bad for those of us who are more or less insulated from its worst effects (even though no
insulation is perfect) and therefore have the bourgeois luxury of arguing about
the merits of human enhancement. But Global Trends notes as one of its “tectonic
shifts” how “individuals and small groups will have greater access to lethal
and disruptive technologies…enabling them to perpetrate large-scale violence —
a capability formerly the monopoly of states.” Some of these disruptive
technologies are of course directly related to human enhancement. Will we have
the wherewithal to say “no” or “not you” before these technologies become
lethal and disruptive? Why should we expect that, when our flabby moral judgments
have so weakened our ability to respond to the ideas that make even some of our present technological capacities dangerous?

Although there is little sign of it prospectively, I would
like to believe that eventually, the greater moral challenge will elicit
greater moral effort. But recovering what that means will not be easy. It is no
sure bet that we will suddenly find the moral strength to deal with powers over
nature and ourselves yet greater than what we have now, particularly when those
advocating on their behalf will have been complicit in keeping us weak.

Tea Partying Transhumanists?

The New York Times published last month an intriguing exploration by New School professor J. M. Bernstein of the philosophical underpinnings of the Tea Party movement. Does this analysis remind you of any other movement?:

Where do such anger and such passionate attachment to wildly fantastic beliefs come from?…

Tea Party anger is, at bottom, metaphysical, not political: what has been undone by the economic crisis is the belief that each individual is metaphysically self-sufficient, that one’s very standing and being as a rational agent owes nothing to other individuals or institutions. The opposing metaphysical claim, the one I take to be true, is that the very idea of the autonomous subject is an institution, an artifact created by the practices of modern life: the intimate family, the market economy, the liberal state.

…[H]uman subjectivity only emerges through intersubjective relations, and hence how practices of independence, of freedom and autonomy, are held in place and made possible by complementary structures of dependence….

All the rhetoric of self-sufficiency, all the grand talk of wanting to be left alone is just the hollow insistence of the bereft lover that she can and will survive without her beloved….

The Tea Party rhetoric of taking back the country is no accident: since they repudiate the conditions of dependency that have made their and our lives possible, they can only imagine freedom as a new beginning, starting from scratch.

The whole post is fascinating and, even if it’s overwrought, it’s worth reading on the level at which it was intended. But try reading it, too, as if it were about a certain other movement.