Empathy in Medicine

“You’ll h-h-h-have to… excuse m-m-m-me. I’m a little slow because I had a stroooooke,” he told us before we explained to him what his wife’s treatment would be.
His voice was nasal and his speech deliberate as he slowly and poorly enunciated each word. He wore sweatpants and a long-sleeved shirt with a
blue and white hat pulled down over his eyes. Stubbornly refusing to stay tucked away, gray hairs peeked out the sides of his chapeau and covered
his ears. He looked to be in his seventies. His wife lay on the bed in a hospital gown, slippers still on. She wore a winter hat that
concealed a bald scalp, one of the many side effects of potent cancer medications. Her eyebrows were gone and her sinewy frame was exaggerated as cachexia set in. She needed extra rounds of chemotherapy for metastatic cancer.


That afternoon, I ran into the husband in the hospital lobby. He had just bought food and was going to bring it back to his wife, but he was heading the
wrong way. He asked a fellow student and me (he recognized both of us) how he could get back to his wife’s room and we pointed him in the right direction.
We watched him shuffle towards his wife in the cancer ward. This couple was neither wealthy nor well-educated; they were suffering and attempting to
navigate the healthcare system as well as the overwhelming size of an academic hospital. They seemed helpless together.

It’s in such moments, as in many others, that empathy wells up in medical practice. I
could clearly imagine myself or my family members in their position. Their emotions became all too familiar and upsetting to me. I wanted to do everything
in my power to help them and to fix their situation. But this strong sense of identification seemed odd given how brief my interaction with them had been.

In reality, however, such a feeling is not so unusual. Robert Louis Stevenson, the famous nineteenth-century Scottish writer, co-authored a novella called The Ebb-Tide. It is an account of three criminals who steal a ship and the deeply
troubling moral situation they subsequently encounter. When one of them falls sick, Stevenson describes the healthy comrades’ feelings:

A profound commiseration filled them, and contended with and conquered their abhorrence. The disgust attendant on so ugly a sickness magnified this
dislike; at the same time, and with more than compensating strength, shame for a sentiment so inhuman bound them the more straitly to his service; and even
the evil they knew of him swelled their solicitude, for the thought of death is always the least supportable when it draws near to the merely sensual and
selfish.


Given the power of this selfless commiseration, shouldn’t we cultivate it in medicine? No doubt
it will help us to act altruistically even when we see the worst in patients or colleagues, thus leading to a better bedside manner and better patient
care. Jean-Jacques Rousseau, the Genevan philosopher, saw such feelings differently, however. In Emile, or On Education, Rousseau points out that empathy is really an outlet for selfish passions, even if its effects can be positive. Rousseau writes that

if the enthusiasm of an overflowing heart identifies me with my fellow-creature, if I feel, so to speak, that I will not let him suffer lest I should suffer too, I care for him because I care for myself, and the reason of the precept is found in nature herself, which inspires me with the desire for my own welfare wherever I may be.

Such cynicism about the underlying nature of empathy still has its advocates today. In the September 2014 Boston Review, Yale psychology professor Paul Bloom questions our high regard for empathy. I recommend reading his essay and his
exchange with other scholars, including Peter Singer, Sam Harris, and Leslie Jamison.
Bloom points out the dangers of unchecked empathy: “Strong inclination toward empathy comes with costs. Individuals scoring high in unmitigated communion
report asymmetrical relationships, where they support others but don’t get support themselves. They also are more prone to suffer depression and anxiety.”
And this is especially the case, Bloom points out, in the medical field, where a doctor can lose a sense of objectivity and a cool head in an emergency.
Bloom distinguishes between cognitive empathy, which is empathy tempered by rational feeling, and emotional empathy, which can be dangerous. Bloom writes
of an older relative of his in the hospital:

He values doctors who take the time to listen to him and develop an understanding of his situation; he benefits from this sort of cognitive empathy. But
emotional empathy is more complicated. He gets the most from doctors who don’t feel as he does, who are calm when he is anxious, confident when he
is uncertain. And he particularly appreciates certain virtues that have little directly to do with empathy, virtues such as competence, honesty,
professionalism, and respect.

This makes sense. I can imagine how exhausting it must be to feel so strongly about every patient. It would cause burnout and depression. But the
psychologists Lynn O’Connor and Jack Berry respond to Bloom in the
following way: “We can’t feel compassion without first feeling emotional empathy. Indeed compassion is the extension of emotional empathy by means of
cognitive processes. Only if we have the capacity to feel empathy toward loved ones can this sentiment be generalized by the imagination and extended to
strangers.” This addition to Bloom’s argument is absolutely vital. Both types of empathy are important.

Such balanced empathy keeps the physician honest. There are many times when, in a rush to complete the work of the day or under the pressure to see every patient,
physicians take their frustrations out on patients. Empathy tames our impulsivity and gives us pause. It forces us to consider the actions we
are about to take. And we can project empathy using reason and emotion. If an elderly woman is being difficult, instead of reacting with frustration and
annoyance we can step back and ask ourselves, “What if this were my grandmother or my mother? How would I want her physician to behave?” To do this is
not easy, but it can make an immense difference in how one interacts with a patient.

Empathy may or may not spring from selfishness, and too much of one aspect of it (like too much of any emotion) can be a bad thing. But physicians do need
empathy, both the emotional empathy that we feel toward some and the cognitive empathy that we can extend toward all. Amid the cogs of an impersonal medical system, it leads to the dignified treatment of a suffering patient.

Robin Hanson on Why We Should “Forget 9/11”

A few days ago, on the tenth anniversary of the September 11th terrorist attack, George Mason University economics professor Robin Hanson, who is influential among transhumanists, wrote a blog post arguing that we should “Forget 9/11.” Why? Well, partly because of cryonics:

In the decade since 9/11 over half a billion people have died worldwide. A great many choices could have delayed such deaths, including personal choices to smoke less or exercise more, and collective choices like allowing more immigration. And cryonics might have saved most of them.

Yet, to show solidarity with these three thousand victims, we have pissed away three trillion dollars ($1 billion per victim), and trashed long-standing legal principles. And now we’ll waste a day remembering them, instead of thinking seriously about how to save billions of others. I would rather we just forgot 9/11. Do I sound insensitive? If so, good — 9/11 deaths were less than one part in a hundred thousand of deaths since then, and don’t deserve to be sensed much more than that fraction. If your feelings say otherwise, that just shows how full fricking far your mind has gone.

Hanson’s post may have been “flamebait” — but we should assume that he sincerely means what he has written, and read it as charitably as possible. His concern about matters of public health is admirable (although one wonders how much more public attention could be paid to the importance of exercising and not smoking, and whether paying attention to 9/11 was really a significant blow to those efforts). And many would agree that our government could have better allocated its money to save, lengthen, and improve lives (although one wonders when this is ever not the case, and what is the foolproof way to avoid misallocation).

Still, one has to marvel at Hanson’s insistence that there is no meaningful difference between the ways people die. He implies that all deaths are equally tragic — so there is no difference, apparently, between a peaceful death and a violent one, or between a death in old age and one greatly premature. In a weird version of “blaming the victim,” Hanson implies that many of the people who have died since 9/11 are to blame for their own deaths, because they could have made choices like exercising, not smoking, and undergoing cryonic preservation. But of course, people who are murdered never get the chance to make these choices, or to have them matter at all.

This is part of the larger point Hanson misses: One certainly can doubt the severity of the threat posed by terrorism, and the wisdom of the U.S. response to it. But the September 11th attack was animated by ideas, and Hanson willfully ignores the implications of those ideas: The lives he would have us forget were lost in an attack against the very liberal order that allows Hanson to share his ideas so freely. It’s hard to imagine transhumanist discourse flourishing under the theocratic tyranny of sharia law. And if the planners of that attack had their way, that liberal order would be extinguished, as would the lives of many who now live under it — which would certainly alter even the calculus admitted by Hanson’s myopic utilitarianism.

Thus the true backwardness of Hanson’s argument. While he may think he is making a trenchantly pro-humanist case for how insensitive and outrageous it is that we focus our emotions on some deaths much more than others, one wonders whether dulling our sensitivity to the deaths of the few can really be the best way to make us care about the deaths of the many. If we cannot feel outrage at what is shocking, can we still be moved by what is commonplace? If we do not mourn the loss of those who are close to us, how can we ever mourn the loss of those who are far?

The Blending of Humans and Robots

David Gelernter has written a characteristically thought-provoking essay about what guidance might be gleaned from Judaism for how human beings ought to treat “sophisticated anthropoid robots” with artificial intelligence powerful enough to allow them to respond to the world in a manner that makes them seem exactly like us. Taking his cue from Biblical and rabbinic strictures concerning cruelty to animals, he argues that because these robots “will seem human,” we should avoid treating them badly lest we become “more oblivious of cruelty to human beings.”

This conclusion, which one might draw as well on Aristotelian as on Biblical grounds, is a powerful one — and in a world of demolition derbies and “Will It Blend?,” where even a video of a washing machine being destroyed can go viral, it is hard to deny that Gelernter has identified a potentially serious issue. It was raised with great force in the “Flesh Fair” scenes of the 2001 movie A.I., where we see robots being hunted down, herded together, and subjected to various kinds of creative destruction in front of howling fans. Meanwhile, the robots look on quietly with what I have always found to be heartbreaking incomprehension.

And yet, it also seems to me that the ringleader at the Flesh Fair, vicious though he is, is not entirely wrong when he harangues the crowd about the need to find a way to assert the difference between humans and robots in a world where it is becoming increasingly easy to confuse the two. And it is in this connection that I wonder whether Gelernter’s argument has sufficiently acknowledged the challenge to Jewish thought that is being posed by at least some of the advocates of the advanced artificial intelligence he is describing.

Gelernter knows full well the “sanctity and ineffable value” that Judaism puts on human life, which is to say he knows that in Jewish thinking human beings are unique within creation. In such a framework, it is understandable why the main concern with animal (or robot) cruelty should be the harm it might do to “our own moral standing” or “the moral stature and dignity of human beings.” But the moral dignity of human beings and our uniqueness in creation is precisely what is coming under attack from transhumanists, as well as from the less potent but more widespread forms of scientism and technophilia in our culture. Gelernter is certain that the robot will feel no pain; but what of those who would reply that such robots will “process” an electrical signal from some part of their bodies that will trigger certain kinds of functions — which is after all what pain “really” is? Gelernter is certain that these anthropoid robots will have no inner life, but what of those, such as Tor Nørretranders and Daniel Dennett, who are busy arguing that what we call consciousness is just a “user illusion”?

I don’t doubt that Gelernter could answer these questions. But I do doubt that his answers would put an end to all the efforts to convince us that after all we are simply “meat machines.” And if more and more we think of ourselves as “meat machines,” then what Gelernter calls the “pernicious incrementalism” of cruelty to robots, about which he is reasonably concerned, points in another direction as well: not that we start treating “thous” as “its,” but that in transforming “its” into “thous” we take all the moral meaning out of “human.”

It probably should not surprise us that there are dangers in kindness to robots as well as in cruelty, but the fact that it is so might prompt us to wonder about the reasons that seem to make going down this road so compelling. Speaking Jewishly, Gelernter might recall the lesson from the pre-twentieth-century accounts of the golem, the legends of pious men creating an artificial anthropoid that go back to the Talmud. Nearly from the start, two things are clear about the golem: only the wisest and most pious could ever hope to make one, but the greatest wisdom would be to know how, and not to do so.

Day 2 at H+ Summit: George Dvorsky gets serious

The 2010 H+ Summit is back underway here at Harvard, running even later than yesterday. After the first couple of talks, the conference launches into a more philosophical block, which promises a break in the doldrums of most of these talks so far. First up in this block is George Dvorsky (bio, slides, on-the-fly transcript), who rightly notes that ethical considerations have largely gone unmentioned so far at this conference. And how. He also notes in a tweet that “The notion that ethicists are not needed at a conference on human enhancement is laughable.” Hear, hear.
Dvorsky’s presentation is primarily concerned with machine consciousness, and with ensuring the rights of new sentient computational lifeforms. He’s not talking, he says, about robots like the ones we have today, which are not sentient but are anthropomorphized to evoke our responses as if they were. (Again, see Caitrin Nicol in The New Atlantis on this subject.) Dvorsky posits that these robots have no moral worth. For example, he says, you may have seen this video before — footage of a robot that looks a bit like a dog and is subjected to some abuse:
Even though many people want to feel sorry for the robot when it gets kicked, Dvorsky says, they shouldn’t, because it has no moral worth. Only things with subjective awareness have moral worth. I’d agree that moral worth doesn’t inhere in such a robot. But as for subjective awareness as the benchmark, what about babies and the comatose, even the temporarily comatose? Do they have any moral worth? Also, it is not a simple matter to say that we shouldn’t feel sorry for the robot even if it doesn’t have moral worth. Isn’t it worth considering the effects on ourselves when we override our instincts and intuitions for empathy toward what seem to be other beings, however aptly directed those feelings may be? Is protecting the rights of others entirely a matter of our rational faculties?
Dvorsky continues by describing problems raised by advancing the moral rights of machines. One, he says, is human exceptionalism. (And here the notion of human dignity gets its first brief mention at the conference.) Dvorsky derides human exceptionalism as mere “substrate chauvinism” — the idea that you must be made of biological matter to have rights.
He proposes that conscious machines be granted the same rights as human beings. Among these rights, he says, should be the right not to be shut down, and the right to own and control their own source code. But how does this fit in with the idea of “substrate chauvinism”? I thought the idea was that substrate doesn’t matter. If it does matter — to the extent that these beings have special sorts of rights, like owning their own source code, that not only don’t apply to humans but have no meaning for them — doesn’t this mean that there is some moral difference for conscious machines that must be accounted for rather than dismissed with the label “substrate chauvinism”?
George Dvorsky has a lot of work to do in resolving the incoherences in his approach to these questions. But he deserves credit for trying, and for offering the first serious, thoughtful talk at this conference. The organizers should have given far more emphasis and time to presenters like him. Who knows how many of the gaps in Dvorsky’s argument might have been filled if he had been given more than the ten-minute slot that they’re giving everybody else here with a project to plug.