Finding Humor in Medicine


One morning, I checked in on an 82-year-old woman who had been admitted overnight after falling in her home. She looked like any other elderly patient: gray hair, thin legs and arms, and wrinkled skin. Yet she lacked the frailty and exhaustion that sick older people often exhibit, and she wore a faint smirk — the angles of her lips curved upward just enough to suggest an optimistic disposition. As I knocked and entered, she startled and pulled the covers up to her chin.

I asked the woman how she felt and whether there was anything she needed; in turn, she answered and asked me how I was doing. She seemed, in other words, to be skillfully participating in the sort of anodyne introductory conversation that people have when they first meet in such settings. But when I asked her if she knew where she was, she replied "no." It turned out she had dementia, a slow and progressive deterioration of mental function, and she could tell me almost nothing about her life at home. To more detailed questions I received only a nod or a barely audible "yes." Even when I knelt close and raised my voice, assuming she was hard of hearing, I received similar responses. Disease had mangled her memory and
her ability to socialize appropriately.

When we rounded, I presented her to the medical team and
entered the room to examine and speak with her. Our attending physician asked the patient how she was feeling. Frustrated by her inability to answer my
questions, the patient replied “sad” and exclaimed “I could really use a joke to lift my spirits!” It’s a request that we so rarely hear in a hospital;
everyone wants food, or drink, or medicine, but few people request a joke. The attending asked me to tell the patient a good joke after rounds.

I’m not skilled at the art of joke-telling — it requires the appropriate joke at the appropriate time with the appropriate delivery. And this particular
situation did not lend itself to a complex yarn with a long backstory. It needed to be simple. So I thought a knock-knock joke would do the trick; after all, everyone knows how knock-knock jokes work. Here’s the one I planned to use:

Knock, knock
Who’s there?
Rufus
Rufus, who?
Rufus the highest part of the house.

Granted, it is corny, but I was sure it would elicit a chuckle from an old lady with dementia. Then I told her the joke:

Me: Knock, knock
Her: Knock, knock
Me (jokingly): Who’s there?
Her: Who’s there?
Me (attempting to start over again): Knock, knock
Her: Knock, knock

She wasn’t doing this playfully; she did it because she could not understand or remember how knock-knock jokes work. When we all got back to the
physician workroom, I told the medical team about this encounter and we had a good laugh. Indeed, dementia is the theme of many jokes in medicine. Have you heard the one about the doctor and the Alzheimer’s patient?

“I’m sorry to tell you this but you have cancer,” the doctor told her patient.
“I do?”
“Yes, but that’s not all… you also have Alzheimer’s disease,” said the doctor.
“I do?”
“Yes,” nodded the physician.
The patient beamed and said: “Oh well, at least I don’t have cancer.”

Or, maybe you’ve heard this other good joke about dementia:

The doctor tells his patient: “Well I have good news and bad news…”
The patient says, “Lay it on me Doc. What’s the bad news?”
“You have Alzheimer’s disease.”
“Good heavens! What’s the good news?”
“You can go home and forget about it!”

But what is so funny about these jokes? Aren't we laughing at a debilitating disease that tears lives apart? In that room was a woman who could not participate in a joke that even the youngest and simplest child could understand. Why is any part of this funny?

*   *   *

In another medical unit, we took care of an 80-year-old man with metastatic cancer who was nearing the end of his life. He needed a breathing machine to pump air into his lungs and was fed through a tube that connected his stomach to a container of liquid nutrients. He lay in bed without moving, occasionally blinking his eyes, and because he lay there all day he developed pressure ulcers: the pressure of an immobile person's weight against the bed breaks down the skin, which leads to serious, life-threatening infections as well as terrible pain. To slow the breakdown, nurses constantly reposition an immobile patient, rolling him or her every few hours and applying creams and ointments to the sores.

During rounds, I helped the nurses and the attending physician to examine the patient’s pressure sores. When we turned the patient, his hospital
gown opened exposing his bare derrière. I bent down to look closely at his sacrum, an anatomical spot above his anal opening, which contained multiple
ulcers. As I got close, the patient let out a huge fart. I almost burst out laughing but abruptly held myself back when I saw the attending physician,
staid and focused on the ulcer. Why did I nearly laugh at flatulence from a patient who was so close to death that he could not control his bowels? Should
I laugh at that?

Irreverent, dark, depressing, and, yes, immature humor pervades the medical profession. We all tell each other funny stories like these when we get
together. I remember one resident telling me a knee-slapper about a demented patient throwing feces at her. It’s funny, in an upsetting way.

Multiple days a week, physicians witness some of the darkest moments that human beings experience in their lives: cancer diagnoses, death, alcoholism, drug overdoses, child abuse, neglect and abandonment, depression, horrific trauma, progressive physical and mental
disease, stillbirths, strokes, and more. Humor is one of the ways physicians deal with this barrage of depressing encounters.

But why? Perhaps to understand this response in the relatively mild context of the hospital, we can examine the most extreme example of humor in the face
of suffering — the Jewish experience before and during the Holocaust.

Ruth Wisse, the Martin Peretz Professor of Yiddish Literature and Professor of Comparative Literature at Harvard University, wrestles with the idea of tragic humor
in No Joke, her wonderful 2013 book on Jewish humor. Professor Wisse
notes that during tumultuous times, Jewish humor flourished because of “an increased need for entertainment that would distract or temporarily release the
tension, and offer consolation.” One Jewish comedian in particular, Shimon Dzigan, exemplified
this concept during the early twentieth century in Eastern Europe, playing characters on stage in sketches which poked fun at local Polish political
figures and even German leaders. Explaining why he sought humor in dark times, Dzigan said, "I have no answer. I can only say that perhaps
because we subconsciously felt that our verdict was sealed and our fate unavoidable, we consciously wished to shout it down and drown it out. With
effervescent joy we wanted to drive off the gnawing sadness, the dread and fear that nested deep inside us.” Indeed, there is a gaping chasm between how
those who joke about serious matters actually feel and what they laugh about. The laughing isn’t merely a cover for their feelings but a way of making
those feelings less unpleasant and less controlling. Wisse eloquently observes: “If cognitive dissonance is caused by a divergence between convictions and
actuality, and if humor attempts to exploit that discomfort, no one was ever so perfectly placed to joke as were Jews under Hitler and Stalin.” Wisse also
provides an entertaining example of a joke during the Holocaust:

Two acquaintances meet on the street. "It's good to see you back," says the first. "I hear that conditions in the concentration camp are horrible."

"Not at all," replies the second. "They wake us at 7:30. Breakfast with choice of coffee or cocoa is followed by sports or free time for reading. Then a plentiful lunch, rest period, games, a stroll, and conversation until dinner, the main meal of the day. This is followed by entertainment, usually a movie…"

The first man is incredulous. “Really! The lies they spread about the place! I recently ran into Klein, who told me horror stories.”

“That’s why he’s back there,” nods the second.

Doctors, to be sure, are not in a position that even closely resembles the position of Jews during the dark days of the twentieth century. But the
humor that physicians use to deal with the things that they see is similar. There is a pit of Weltschmerz in medicine as well: a divergence between convictions and actuality. Most doctors enter medicine with a desire to help and to heal. But medicine necessarily deals with death and failure and all the emotions that come with them. To exploit and, at the same time, cloak that discomfort, we resort to laughing at the actions of demented patients or commenting inappropriately on a patient's impending death. Whether right or wrong, laughter transiently drives out the gnawing sadness, the dread, and the fear that, perhaps, one day we will be that patient.

What Total Recall Can Teach Us About Memory, Virtue, and Justice

The news that an American woman has reportedly decided to pursue plastic surgery to have a third breast installed may itself be a subject for discussion on this blog, and will surely remind some readers of the classic 1990 science fiction movie Total Recall.

As it happens, last Thursday the excellent folks at Future Tense hosted one of their “My Favorite Movie” nights here in Washington D.C., playing that very film and holding a discussion afterwards with one of my favorite academics, Stanford’s Francis Fukuyama. The theme of the discussion was the relationship between memory and personal identity — are we defined by our memories?


Much to my dismay, the discussion of this topic, which is of course the central theme of the movie, strayed quite far from the details of the film. Indeed, in his initial remarks, Professor Fukuyama quoted, only to dismiss, the film's central teaching on the matter, an assertion made by the wise psychic mutant named Kuato: "You are what you do. A man is defined by his actions, not his memory."

This teaching has two meanings; the first meaning, which the plot of the movie has already prepared the audience to accept and understand when they first hear it, is that the actions of a human being decisively shape his character by inscribing habits and virtues on the soul.

From the very beginning of the movie, Quaid (the protagonist, played by Arnold Schwarzenegger) understands that things are not quite right in his life. His restlessness comes from the disproportion between his character, founded on a lifetime of activity as a secret agent, and the prosaic life he now finds himself in. He is drawn back to Mars, where revolution and political strife present opportunities for the kinds of things that men of his character desire most: victory, glory, and honor.

That Quaid retains the dispositions and character of his former self testifies to how the shaping of the soul by action takes place not by storing up representations and propositions in one’s memory, but by cultivating in a person virtuous habits and dispositions. As Aristotle writes in the Nicomachean Ethics, “in one word, states of character arise out of like activities.”

The second meaning of Kuato’s teaching concerns not the way our actions subconsciously shape our character, but how our capacity to choose actions, especially our capacity to choose just actions, defines who we are at an even deeper level. Near the end of the movie, after we have heard Kuato’s teaching, we learn that Hauser, Quaid’s “original self,” was an unrepentant agent of the oppressive Martian regime, and hence an unjust man. Quaid, however, chooses to side with the just cause of the revolutionaries. Though he retains some degree of identity with his former self — he continues to be a spirited, courageous, and skillful man — he has the ability to redefine himself in light of an impartial evaluation of the revolutionaries’ cause against the Martian regime, an evaluation that is guided by man’s natural partiality toward the just over the unjust.

*   *   *

The movie’s insightful treatment of the meaning and form of human character comes not, however, from a “realistic” or plausible understanding of the kinds of technologies that might exist in the future. It seems quite unlikely that we could ever have technologies that specifically target and precisely manipulate what psychologists would call “declarative memory.” In fact, the idea of reprogramming declarative memory in an extensive and precise way seems far less plausible than manipulating a person’s attitudes, dispositions, and habits — indeed, mood-altering drugs are already available.

Professor Fukuyama also raised the subject of contemporary memory-altering drugs. (This was a topic explored by the President’s Council on Bioethics in its report Beyond Therapy, published in 2003 when Fukuyama was a member of the Council.) These drugs, as Professor Fukuyama described them, manipulate the emotional significance of traumatic memories rather than their representational or declarative content. While Quaid retained some of the emotional characteristics of his former self despite the complete transformation of the representational content of his memory, we seem poised to remove or manipulate our emotional characteristics while retaining the same store of memories.

What lessons then can we draw from Total Recall's teaching concerning memory, if the technological scenario in the movie is, as it were, the inverse of the projects we are already engaged in? It is first of all worth noting that the movie has a largely happy ending — Quaid chooses justice and is able to "free Mars" (as Kuato directed him to do) through resolute and spirited action made possible by the skills and dispositions he developed during his life as an agent of the oppressive Martian regime.

Quaid’s siding with the revolutionaries over the Martian regime was motivated by the obvious injustice of that regime’s actions, and the natural emotional response of anger that such injustice instills in an impartial observer. But, as was noted in the discussion of memory-altering drugs after the film, realistic memory-altering drugs could disconnect our memories of unjust acts from the natural sense of guilt and anger that ought to accompany them.

Going beyond memory-altering drugs, there are (somewhat) realistic proposals for drugs that could dull the natural sense of spiritedness and courage that might lead a person to stand up to perceived injustice. Taken together, these realistic proposals would render impossible precisely the scenario envisioned in Total Recall: memory-altering drugs that change the emotional character of our memories would dull our sense of the justice or injustice of the actions we remember, and other drugs would dull our capacity to develop the qualities of soul, such as spiritedness and courage, that enable us to respond to injustice.

What science fiction can teach us about technology and human flourishing does not depend on its technical plausibility, but on how it draws out truths about human nature and politics by putting them in unfamiliar settings. Notwithstanding Professor Fukuyama's dismissal of the film, the moral seriousness with which Total Recall treats the issues of virtue and justice makes it well worth viewing, and re-viewing, for thoughtful critics of the project to engineer the human soul.

a revolution I can get behind!

The Power of the Doodle: Improve Your Focus and Memory:

Recent research in neuroscience, psychology and design shows that doodling can help people stay focused, grasp new concepts and retain information. A blank page also can serve as an extended playing field for the brain, allowing people to revise and improve on creative thoughts and ideas.
Doodles are spontaneous marks that can take many forms, from abstract patterns or designs to images of objects, landscapes, people or faces. Some people doodle by retracing words or letters, but doodling doesn’t include note-taking. ‘It’s a thinking tool,’ says Sunni Brown, an Austin, Texas, author of a new book, ‘The Doodle Revolution.’ It can affect how we process information and solve problems, she says.

The Doodle Revolution! Yes!

I doodled my way through my education — always abstract patterns, usually a kind of cross-hatching — and almost never took notes. This puzzled my teachers, but I always remembered what I heard in class better when I doodled. 

When I was in college, I spent an entire semester developing an immensely intricate doodle on the back of one of my notebooks. When I finally filled in the last corner, on the last day of class, I sat back and looked with satisfaction on my achievement. Then I felt a tap on my shoulder. A guy who had been sitting behind me all term said, “I’ll give you five bucks for that.” So I carefully tore off the back cover and exchanged it for his fiver. We both went away happy. 

holes in the fabric

One of the oddest moments of my youth came soon after I bought the LP pictured above. I ran out to get it after I heard on the radio a song from it, called “Vincent,” which struck my fourteen-year-old self as the most profound and artful and insightful and poetic thing I had ever encountered. And then, listening to the whole record, I was blown away by the title song and wanted to tell everyone about it … only to discover that everyone already knew about it and had been listening to it on every pop radio station in town over and over and over again so that they had been sick of it for some months already. But I had never heard it until I put the LP on my turntable.

I could not account for this then and cannot now. I listened to the radio as much as any kid my age: I pleaded with my grandmother to let me control the car radio whenever we were out and about; I had a small transistor radio I kept by my bed to listen to every night; I even strapped that transistor to the handlebars of my bike so I could listen as I rode around the neighborhood. And yet, somehow, in utter defiance of probability, I had never managed to have the radio on when “American Pie” was playing.

I experienced something slightly similar today when someone on Twitter linked to this post on a sportswriter named Gary Smith, who is evidently Kind of a Big Deal. I mean, just read the post. The guy has won every journalistic award a person could win. He’s every sportswriter’s writerly hero (well, almost). But I have never heard his name and as far as I know have never read a word he’s written. And yet I’m a reasonably serious sports fan and read a good deal about sports. How could I have altogether missed Gary Smith?

I find these gaps in experience, holes in the fabric of knowledge and cultural connection, oddly fascinating. The other big one I can think of involves Joni Mitchell’s song “River,” which, despite its being one of her most-covered songs, and despite my having owned several Joni Mitchell albums when I was young, I had never heard until about five years ago — almost forty years after its release. But of course, these gaps I have mentioned here I can mention only because they’ve been closed. Who knows how many other songs or writers or poems or whatever I’ve missed, what essential elements of the experience of my generation have passed me by and left me unwittingly denied some bond, some link?

And what about you, my friend? What about you?

Examining the Moral Meaning of Memory

The true moral significance of memory alteration is not a simple thing to understand, and cannot be inferred from basic observations about its reliability and potential manipulability. Jonah Lehrer, in his recent Wired magazine article on memory — which I earlier discussed here — does claim to be genuinely interested in the ethical questions raised by memory alteration:

Would the President's Council [on Bioethics] have the same reaction to memory training? What about a more effective form of talk therapy? Or is it simply the idea of an amnesiac pill that we find so Orwellian and frightening? If so, why? We take pills to cheer us up. What's wrong with taking a pill that might get at the root cause of the sadness? These aren't rhetorical questions – I'm honestly interested in the answers.

But it's hard to take Lehrer's expressed interest seriously when this is the next thing he says: "In the meantime, progress continues apace. (What Feynman dismissively said about philosophers of science is also true of bioethicists, for better or worse: they are to scientists what ornithologists are to birds.)" So is Lehrer honestly interested in bioethics or isn't he?

Unfortunately, it seems he isn't. The questions he poses as apparently obvious rejoinders to the ethical inquiry in Beyond Therapy are all in fact addressed directly and plainly in the memory section of the report. To wit, here are four excerpts from the report:

We also know that individuals 'naturally' edit their memory of traumatic or significant events — both giving new meaning to the past in light of new experiences and in some cases distorting the past to make it more bearable. The question before us is how or whether new biotechnical interventions alter this inborn capacity to refine, reshape, and edit the way we remember the past.

What could be wrong with, or even just disquieting about, wanting to feel better about ourselves and our lives, and availing ourselves of the necessary assistance in doing so? If we may embrace psychotherapy for the same purpose, why should we not embrace mood-brighteners, especially if they are not only safe but also cheaper and more effective than 'talk therapy'? Only a person utterly at peace with the world and content with himself would be beyond temptation at the prospect of having his troubles effortlessly eased….

…there are many people whose deep psychic distress precludes meeting obligations and forming close relationships, and for whom the proper use of mood-brighteners is the blessed gift that can restore to them the chance for a full and flourishing life….

…many Holocaust survivors managed, without pharmacological assistance, to live fulfilling lives while never forgetting what they lived through. At the same time, many survivors would almost certainly have benefited from pharmacological treatment.

And so forth. The Council's entire report is characterized by this kind of effort to explore and present both the potential good and bad of biotechnological advancement, without firmly concluding in one direction or the other. Certainly there is reasonable room to argue with the analysis. But it's hard to take seriously Lehrer's "hey, I'm just asking some questions and I'm really interested in the answers" shtick when it seems based on a near-total lack of knowledge of the answers the ostensible opponents have already given, and is followed by a claim that those answers are actually irrelevant anyway.

—

Of course, while Lehrer professes interest in the bioethical questions raised by memory alteration, he has clearly already staked out a position in the debate in favor of memory alteration. The heart of his argument seems to be that, as he puts it, "we already tweak our memories — we just do it badly."

One can get a sense of what's wrong with this argument by seeing how quickly it devolves into this: "there is no clear line between the tweaks of 'biotechnology' and the changes that unfold every time we remember anything." This is perhaps the most common argument in the transhumanist playbook. It goes basically like this: X new biotechnical intervention will totally change everything, so it's great and we should embrace it — and there's no reason not to because it's actually no different from what we're doing already.

This line of argument is linked to another favorite theme of transhumanists and other pro-enhancement writers: who we are as human beings is the result of an unplanned, chaotic, and messy sequence of events — whether those events were in our evolutionary past, shaping our genetic heritage, or just things that happened to us during our own lifetimes that we would rather not remember. Sometimes, as with Allen Buchanan's discussion of evolution and human nature, the arguments raise deeply important questions about the moral meaning of human nature. But Lehrer's application of neuroscience to the ethics of memory alteration is just a misunderstanding of the ethically significant questions.

Real ethical reflection on these issues would not try to dismiss them with one or two stale tropes. The personal, moral, and emotional significance of memory does not depend on it representing past experiences with perfect factual accuracy. And just because there are natural processes for "re-constructing" our past experiences, it by no means follows that techniques for purposefully ablating memories are morally uncontroversial. If we already tweak our memories, it seems just as possible that we could already sometimes do it well as do it badly. One would hope that in any case the goal would be to better understand the personal and moral significance of memories, and to learn how to integrate them into the broader meaning of our lives.

Jonah Lehrer’s Errors on Memory and Forgetting

About a month ago, Wired magazine published a widely discussed article on a scientific breakthrough that will have huge implications for psychotherapy, bioethics, and human self-understanding: apparently, memory is not perfect.

The author — Jonah Lehrer, the popular writer on neuroscience — reports on some scientific findings regarding the reconsolidation theory of memory retrieval, which holds that every time a memory is recalled, the brain needs to recreate the memory, just as it did when the memory was originally formed. He quotes one of the researchers describing his work in terms of Thomas Kuhn, saying that he is overturning "a very stubborn paradigm." And Lehrer seems to agree with this characterization:

Once a memory is formed, we assume that it will stay the same. This, in fact, is why we trust our recollections. They feel like indelible portraits of the past.

None of this is true. In the past decade, scientists have come to realize that our memories are not inert packets of data and they don't remain constant. Even though every memory feels like an honest representation, that sense of authenticity is the biggest lie of all.

Reconsolidation theory and the research behind it are potentially important contributions to the neuroscientific study of memory. But Lehrer grossly exaggerates the significance of these findings by repeatedly trying to characterize them as novel and revolutionary when they are not.

—

The problem starts from Lehrer not making much effort to distinguish between the two big takeaways from this research, which are: (1) memory can be altered by the act of recollection; and therefore, (2) memory is fallible. The first part is reconsolidation theory itself. Lehrer presents some evidence that this idea has only recently entered the scientific mainstream, but as for its being revolutionary, well, he himself notes in a follow-up blog post that scientists have been conducting research in support of the idea for almost a hundred years. Moreover, as he also notes in the article, the idea has been basically assumed by psychotherapists for decades.

It's the second point that's really supposed to be revolutionary, though: Lehrer uses as a foil the supposedly naïve conventional and ancient philosophical wisdom that human memory works like a videotape, accurately recording and replaying events. But he does this mainly by misrepresenting or misunderstanding the way others have thought about memory in the past.

The first volley is fired at Plato, who, Lehrer says, "compared our recollections to impressions in a wax tablet." But Plato, in the Theaetetus, discusses this model only to quickly reject it. More to the point, Plato does so precisely in an effort to explain why beliefs can be false and memories unreliable. The naïve assumption that memories are "inert packets of data" has certainly had its adherents over the years — most of them in the last century, really, when such metaphors came into vogue — but the idea that memories are fallible, and that they have a life of their own, is at least as old as philosophy and literature. Indeed, even without philosophical reflection there are certain self-evident aspects of memory that show us how it can be imperfect; memories are clearly less distinct than present experiences, and no one trusts their recollections to the same extent they trust their perception.

Lehrer suggests that these developments in neuroscience constitute a transgressive and exciting challenge to entrenched beliefs about human nature. But there's no apparent reason in this case for why neuroscience should be fighting with ordinary human self-understanding — indeed, this seems like a perfect case of neuroscience coming around to realizing and providing some biological explanation for a phenomenon that's already very familiar.

The fact that people have always known memory to be fallible still leaves unknown why this is so, and does not diminish the value of neuroscientific research that might help explain it. Moreover, this possible biological explanation for why memory can be inaccurate does not show that memory is arbitrary or always unreliable, or that memory cannot or does not have some strong relation to the truth.

—

These problems with Lehrer's account would not be so important if not for the highly flawed ethical arguments he uses them to support. You see, another implication of this research is the possibility of creating drug-based therapies to erase the painful aspect of particular memories, or even the memories themselves. By way of supporting this possibility, depicting as naïve the idea that memory is always truthful becomes the basis for depicting as naïve the idea that memory ought to be truthful.

In that blog post following up on his article, Lehrer gives the supposed ancient philosophical wisdom about memory a modern voice in the ethical analysis of this topic by the President's Council on Bioethics in its 2003 report Beyond Therapy. As he describes it, the Council

declared the possibility of erasing traumatic memories deeply dangerous, and worried that it would lead to the unraveling of “moral responsibility” in society. After all, if we can choose to forget our pain, then what would prevent us from thoughtlessly inflicting pain on other people? “Without truthful memory, we could not hold others or ourselves to account for what we do and who we are,” the Council wrote. “Perhaps no one has a greater interest in blocking the painful memory of evil than the evildoer.”

This argument at first seems “perfectly reasonable” to Lehrer, since “even the worst memories serve an important purpose, allowing us to learn from the past.” But his agreement turns out to be rhetorical, for

the verdict of the Council, grounded in our ancient intuitions about memory, is also problematic. The main reason is straightforward: Although the Council repeatedly proclaims the importance of maintaining “authentic” memories, they failed to realize that such an ideal form of memory doesn’t exist. There is no such thing as immaculate recall….

But the word “authentic” does not even appear anywhere in the part of the Council’s report dealing with memory. Moreover, the Council itself acknowledges precisely the point about memory reconsolidation that Lehrer claims vitiates the Council’s analysis:

it is important to note that “stored memories” do not remain static. Every time we recall a memory, what gets stored after such acts of recollection is a different memory, altered on account of how we, in recollecting it, have “received” and reacted to it. Once encoded, memories can be altered by recall.

Lehrer is criticizing a straw man. Not only does the Council's argument not presuppose some idealized notion of perfectly accurate memory, but there is no reason for it to. Would evildoers only have an interest in blocking painful memories, in themselves or their victims, if those memories were perfect? Does the fact that memories can change or be imperfect mean that they have absolutely no relation to the truth? Does it mean we have no ethical or emotional interest in them bearing some relation to the truth?

Questions like these are the ones that the scientific discoveries Lehrer mentions really seem to raise, but Lehrer seems more interested in making bold bioethical pronouncements on the basis of neuroscientific findings than examining these tough bioethical questions. I'll turn to comparing the Council's analysis of these questions about the ethics of memory alteration with Lehrer's analysis in my next post.

on the will to remember

Mandy Brown:

A leaked slide suggests that Yahoo will shutdown Delicious. Gary Vaynerchuk announces that Cork’d will come to an end. Two years ago, Ma.gnolia experienced catastrophic data loss, taking thousands of bookmarks with it, mine included. Around the same time, Yahoo (sadly, a recurring player on this stage) killed Geocities. Dan Cederholm reminds us that very little on the web lasts forever. Indeed. . . .

This is not to excuse Yahoo’s behavior, nor is it to say that we will be able to save everything, even if our efforts are heroic. But no civilization has ever saved everything; acknowledging that fact does not obviate the need to try and save as much as we can. The technological means to produce an archive are not beyond our skills; sadly, right now at least, the will to do so is insufficient. Let’s hope that doesn’t last forever.

This is smart, and sobering. My friend Matt Frost — at least I think it was Matt — once pointed out that the problem of preserving information and transferring it to new media is one that we only need to solve once every decade or so, and in general that's right. But what if we forget to be as attentive as we should be?
When I talk to my students about orality and literacy, I point out to them that oral cultures are tremendously careful about preserving their memories accurately because they know that it only takes one generation of forgetfulness and then the whole of their past is lost. Perhaps we should start thinking in such terms.

books for the ages

Recently a meme flitted around the internet for a few days — a meme about books: "What," whispered the meme, "are the Ten Books That Have Most Influenced You?" Or something like that; sometimes I have trouble hearing memes, because of the whispering and all. Also because they tend to bore me.

I don't know how to answer the meme's question, but the question did get me thinking, for once, and what it got me thinking about is this: what books were most important to me at different stages of my life? That one I believe I can answer, at least up until fifteen years ago or so — this is the kind of thing that's best assessed in retrospect (which is one reason why I'm not answering the meme's original question). So check out this list:

Age 6: My favorite book then, and for years after, was The Golden Book of Astronomy — how I loved that book. It influenced me so deeply that until I was sixteen I knew that I would be an astronomer. What happened at age 16? Calculus.
Age 10: Robert A. Heinlein, Tunnel in the Sky. An interplanetary survivalist manifesto. I was ten. Enough said.

Age 14: Arthur C. Clarke, Childhood's End. My first tutor in philosophy, comparative religion, comparative mythology, and dystopian futurism. Also a ripping good read. Roughly contemporaneous with my discovery of Dark Side of the Moon. Not since has my mind been so thoroughly blown.

Age 16: Loren Eiseley, The Night Country. My discovery that the essay could be an art form, and that interests in the sciences and in literature could be profitably and brilliantly combined. I read Eiseley's complete works that year, I think, but the melancholy humor of The Night Country remained with me more strongly than anything else. The (widely anthologized) essay "The Brown Wasps" just devastated me.

Age 20: William Faulkner, Absalom, Absalom! "The past is not dead, it is not even past." History is tragedy. "Why do you hate the South?" — No, Shreve, you damned Canadian, you don't understand. You don't understand at all.

Age 22: The Philosophy of Paul Ricoeur. Theology, theory, phenomenology, hermeneutics — all in the mind of one person?? Then anything is possible. Boundless intellectual vistas. I may make it through graduate school after all.

Age 24: W. H. Auden, The Dyer's Hand. Theology, poetry, history, myth — the poet as thinker. The poet as Christian. The Christian who, because he is a Christian, is thinking decades ahead of everyone else about the collapse of psychoanalysis, the end of Christendom, the dead-ends of late modernity. . . . A lifetime's study commences now.

Age 30: Mikhail Bakhtin, The Dialogic Imagination. He knows everything — the whole of history. As he wrote near the end of his life, "There is neither a first nor a last word and there are no limits to the dialogic context (it extends into the boundless past and the boundless future). . . . At any moment in the development of the dialogue there are immense, boundless masses of forgotten contextual meanings, but at certain moments of the dialogue's subsequent development along the way they are recalled and invigorated in new form (in a new context). Nothing is absolutely dead: every meaning will have its homecoming festival." What Auden is as critic and poet for me, Bakhtin is as theorist and thinker.

Age 35: Lesslie Newbigin, The Gospel in a Pluralist Society. If I could have every young and thoughtful Christian read one book, it would be this one. There's no one quite like Newbigin — or perhaps it would be better to say that there have been few like him since Augustine: the bishop-missionary-theologian.

Since then . . . well, I'll tell you in a few more years.

UPDATE: I've decided I'd be remiss if I didn't add one more:
Age 38: W. H. Auden, “Horae Canonicae.” I had read these poems several times over the years, but it was only in my late 30s, as I was writing a book on Auden, that their true greatness began to dawn on me. They have permanently and profoundly shaped my understanding of what it means to be a human being living historically, and being accountable for one’s own history; and it is through these poems more than through anything else that I have come to understand the meaning of Good Friday.

the World Brain

Quotes and links at least I can do.

The whole human memory can be, and probably in a short time will be, made accessible to every individual. And what is also of very great importance in this uncertain world where destruction becomes continually more frequent and unpredictable, is this, that photography affords now every facility for multiplying duplicates of this – which we may call? – this new all-human cerebrum. It need not be concentrated in any one single place. It need not be vulnerable as a human head or a human heart is vulnerable. It can be reproduced exactly and fully, in Peru, China, Iceland, Central Africa, or wherever else seems to afford an insurance against danger and interruption. It can have at once, the concentration of a craniate animal and the diffused vitality of an amoeba.
This is no remote dream, no fantasy. It is a plain statement of a contemporary state of affairs. It is on the level of practicable fact. It is a matter of such manifest importance and desirability for science, for the practical needs of mankind, for general education and the like, that it is difficult not to believe that in quite the near future, this Permanent World Encyclopaedia, so compact in its material form and so gigantic in its scope and possible influence, will not come into existence.
. . . And its creation is a way to world peace that can be followed without any very grave risk of collision with the warring political forces and the vested institutional interests of today. Quietly and sanely this new encyclopaedia will, not so much overcome these archaic discords, as deprive them, steadily but imperceptibly, of their present reality. A common ideology based on this Permanent World Encyclopaedia is a possible means, to some it seems the only means, of dissolving human conflict into unity.
This concisely is the sober, practical but essentially colossal objective of those who are seeking to synthesize human mentality today, through this natural and reasonable development of encyclopaedism into a Permanent World Encyclopaedia.