happy

Yuval Noah Harari introduces his new book Sapiens:

We are far more powerful than our ancestors, but are we much happier? Historians seldom stop to ponder this question, yet ultimately, isn’t it what history is all about? Our understanding and our judgment of, say, the worldwide spread of monotheistic religion surely depends on whether we conclude that it raised or lowered global happiness levels. And if the spread of monotheism had no noticeable impact on global happiness, what difference did it make?

Let me just put my cards on the table and say that this entire paragraph is so nonsensical that it’s not even wrong. It is so conceptually confused that it has not, to borrow a phrase from C. S. Lewis, risen to the dignity of error.

To begin with, what in the world might it mean to say that happiness is “what history is all about”? History, as I and everyone else in the world except Harari know, is “about” what has happened. And many things, I think it is fair to say, have happened other than happiness.

I truly can’t guess, with any confidence, what Harari means by that statement, but if I had to try I’d paraphrase it thus: The chief reason for studying history is to find out what made people happy and what didn’t. Lord, I hope that’s not what he means, but I fear it is.

And as for “what made people happy,” Harari wants to define that in terms of “global happiness levels.” And how are we supposed to evaluate those? Where would we get our data set? And — to ask a question that goes back to the earliest responses to Bentham’s utilitarianism — how do we count such stuff? Does one person’s horrific misery count the same as another person’s mild pleasure? Or do we add an intensity factor? Also, on the unhappiness scale, how might we compare a quick and painless death at age 19 to an extended agony of fatal illness at age 83?
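Just to make the arbitrariness concrete, here is a toy sketch (entirely my own invention, with made-up numbers, and emphatically not anything Harari proposes) in which the very same population registers as a net gain or a net loss in “global happiness,” depending on nothing more principled than the weight we choose to give to suffering:

```python
# A toy model, nobody's actual proposal: "global happiness" as a single
# number computed over a hypothetical population of experiences.
# Negative values are misery, positive values are pleasure; all invented.
population = [-9.0, -9.0, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]

def plain_sum(xs):
    """Bentham-style tally: a unit of pleasure offsets a unit of pain."""
    return sum(xs)

def misery_weighted(xs, w):
    """Suppose suffering counts w times as much as pleasure.
    Why that particular w? No reason, which is the point."""
    return sum(x * w if x < 0 else x for x in xs)

print(plain_sum(population))               # -15.0: net misery
print(misery_weighted(population, 3.0))    # -51.0: much worse
print(misery_weighted(population, 0.125))  # 0.75: net "happiness"!
```

Two people in horrific misery and six in mild contentment: is that world happy or unhappy? The answer falls out of the intensity factor, and nothing in the data tells us what the intensity factor should be.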

I suspect Harari hasn’t thought much about these matters, but let’s try to go with him. Instead of considering something as amorphous as “monotheistic religion,” let’s focus on the militant Islam of today. It has clearly made many people very miserable; but it has equally clearly given other people great satisfaction. If the number of people who delight in militant Islam exceeds the number of people made miserable by it, then do we conclude that militant Islam is a net contributor to “global happiness levels” and therefore something to be applauded? And what if the balance sheet comes out pretty level, so that global happiness has been neither appreciably increased nor appreciably decreased by militant Islam? Are we to conclude then that it really hasn’t “made a difference”?

This little thought experiment also raises the question of whether happiness might be defined differently by different people in different cultures. Harari has this one covered. Some people, he tells us,

agree that happiness is the supreme good, but think that happiness isn’t just a matter of pleasant sensations. Thousands of years ago Buddhist monks reached the surprising conclusion that pursuing pleasant sensations is in fact the root of suffering, and that happiness lies in the opposite direction…. For Buddhism, then, happiness isn’t pleasant sensations, but rather the wisdom, serenity and freedom that come from understanding our true nature.

Ah, now we’re getting somewhere! Finally, time for a serious consideration of rival views of happiness! So here’s Harari’s response: “True or false, the practical impact of such alternative views is minimal. For the capitalist juggernaut, happiness is pleasure. Full stop.”

Full stop. The “capitalist juggernaut” has decided what happiness is — and, needless to say, resistance is futile — so we don’t need to think about it any more. We don’t even need to ask whether said juggernaut is equally powerful everywhere in the world, or whether, conversely, there are significant numbers of people who live in a different regime — even though the scale of the book is supposed to be global.

So here we have an argument that happiness is “what history is all about,” and that therefore everything that we do should be evaluated in terms of its contribution to “global happiness levels,” but that can’t be bothered to ask what happiness consists in. As I say: not even wrong. Miles from being even wrong.

The Righteous Mind and the Inner Ring

In his recent and absolutely essential book The Righteous Mind, Jonathan Haidt tries to understand why we disagree with one another — especially, but not only, about politics and religion — and, more important, why it is so hard for people to see those who disagree with them as equally intelligent, equally decent human beings. (See an excerpt from the book here.)

Central to his argument is this point: “Intuitions come first, strategic reasoning second. Moral intuitions arise automatically and almost instantaneously, long before moral reasoning has a chance to get started, and those first intuitions tend to drive our later reasoning.” Our “moral arguments” are therefore “mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives.”

Haidt talks a lot about how our moral intuitions accomplish two things: they bind and they blind. “People bind themselves into political teams that share moral narratives. Once they accept a particular narrative, they become blind to alternative moral worlds.” “Moral matrices bind people together and blind them to the coherence, or even existence, of other matrices.” The incoherent anti-religious rant by Peter Conn that I critiqued yesterday is a great example of how the “righteous mind” works — as are conservative denunciations of universities filled with malicious tenured radicals.

So far so vital. I can’t imagine anyone who couldn’t profit from reading Haidt’s book, though it’s a challenge — as Haidt predicts — for any of us to understand our own thinking in these terms. Certainly it’s hard for me, though I’m trying. But there’s a question that Haidt doesn’t directly answer: How do we acquire these initial moral intuitions? — Or maybe not the initial ones, but the ones that prove decisive for our moral lives? I make that distinction because, as we all know, people often end up dissenting, sometimes in the strongest possible terms, from the moral frameworks within which they were raised.

So the question is: What triggers the formation of a “moral matrix” that becomes for a given person the narrative according to which everything and everyone else is judged?

I think that C. S. Lewis answered that question a long time ago. (Some of what follows is adapted from my book The Narnian: The Life and Imagination of C. S. Lewis.) In December of 1944, he gave the Commemoration Oration at King’s College in London, a public lecture largely attended by students, and Lewis took the opportunity of this “Oration” to produce something like a commencement address. He called his audience’s attention to the presence, in schools and businesses and governments and armies and indeed in every other human institution, of a “second or unwritten system” that stands next to the formal organization.

You discover gradually, in almost indefinable ways, that it exists and that you are outside it, and then later, perhaps, that you are inside it…. It is not easy, even at a given moment, to say who is inside and who is outside…. People think they are in it after they have in fact been pushed out of it, or before they have been allowed in; this provides great amusement for those who are really inside.

Lewis does not think that any of his audience will be surprised to hear of this phenomenon of the Inner Ring; but he thinks that some may be surprised when he goes on to argue, in a point so important that I’m going to put it in bold type, “I believe that in all men’s lives at certain periods, and in many men’s lives at all periods between infancy and extreme old age, one of the most dominant elements is the desire to be inside the local Ring and the terror of being left outside.” And it is important for young people to know of the force of this desire because “of all passions the passion for the Inner Ring is most skillful in making a man who is not yet a very bad man do very bad things.”

The draw of the Inner Ring has such profound corrupting power because it never announces itself as evil — indeed, it never announces itself at all. On these grounds Lewis makes a “prophecy” to his audience at King’s College: “To nine out of ten of you the choice which could lead to scoundrelism will come, when it does come, in no very dramatic colours…. Over a drink or a cup of coffee, disguised as a triviality and sandwiched between two jokes … the hint will come.” And when it does come, “you will be drawn in, if you are drawn in, not by desire for gain or ease, but simply because at that moment, when the cup was so near your lips, you cannot bear to be thrust back again into the cold outer world.”

It is by these subtle means that people who are “not yet very bad” can be drawn to “do very bad things” — by which actions they become, in the end, very bad. That “hint” over drinks or coffee points to such a small thing, such an insignificant alteration in our principles, or what we thought were our principles: but “next week it will be something a little further from the rules, and next year something further still, but all in the jolliest, friendliest spirit. It may end in a crash, a scandal, and penal servitude; it may end in millions, a peerage, and giving the prizes at your old school. But you will be a scoundrel.”

This, I think, is how our “moral matrices,” as Haidt calls them, are formed: we respond to the irresistible draw of belonging to a group of people whom we happen to encounter and happen to find immensely attractive. The element of sheer contingency here is, or ought to be, terrifying: had we encountered a group of equally attractive and interesting people who held very different views, then we too would hold very different views.

And, once we’re part of the Inner Ring, we maintain our status in part by coming up with those post hoc rationalizations that confirm our group identity and, equally important, confirm the nastiness of those who are Outside, who are Not Us. And it’s worth noting, as Avery Pennarun has recently noted, that one of the things that makes smart people smart is their skill at such rationalization: “Smart people have a problem, especially (although not only) when you put them in large groups. That problem is an ability to convincingly rationalize nearly anything.”

In “The Inner Ring” Lewis portrays this group affiliation in the darkest of terms. That’s because he’s warning people about its dangers, which is important. But of course it is by a similar logic that people can be drawn into good communities, genuine fellowship — that they can become “members of a Body,” as he puts it in the great companion piece to “The Inner Ring,” a talk called “Membership.” (Both are included in his collection The Weight of Glory.) This distinction is what his novel That Hideous Strength is primarily about: we see the consequences for Mark Studdock as he is drawn deeper and deeper into an Inner Ring, and the consequences for Mark’s wife Jane as she is drawn deeper and deeper into a genuine community. I can’t think of a better guide to distinguishing between the false and true forms of membership than that novel.

And that novel offers something else: hope. Hope that we need not be bound forever by an inclination we followed years or even decades ago. Hope that we can, with great discipline and committed energy, transcend the group affiliations that lead us to celebrate members of our own group (even when they don’t deserve celebration) and demonize or mock those Outside. We need not be bound by the simplistic and uncharitable binaries of the Righteous Mind. Unless, of course, we want to be.

Bonhoeffer and Technopoly

As the year 1942 drew to a close, Dietrich Bonhoeffer — just months away from being arrested and imprisoned by the Gestapo — sat down to write out ein Rückblick — a look back, a review, a reckoning — of the previous ten years of German experience, that is, of the Nazi years.

This look back is also a look forward: it is a document that asks, “Given what has happened, what shall we now do?” And a very subtle and important section, early in the “reckoning,” raises the questions entailed by political and social success. How are our moral obligations affected when the forces we most strenuously resist come to power anyway?

Although it is certainly not true that success justifies an evil deed and shady means, it is impossible to regard success as something that is ethically quite neutral. The fact is that historical success creates a basis for the continuance of life, and it is still a moot point whether it is ethically more responsible to take the field like a Don Quixote against a new age, or to admit one’s defeat, accept the new age, and agree to serve it. In the last resort success makes history; and the ruler of history [i.e., God] repeatedly brings good out of evil over the heads of the history-makers. Simply to ignore the ethical significance of success is a short-circuit created by dogmatists who think unhistorically and irresponsibly; and it is good for us sometimes to be compelled to grapple seriously with the ethical problem of success. As long as goodness is successful, we can afford the luxury of regarding it as having no ethical significance; it is when success is achieved by evil means that the problem arises.

It seems to me that the question that Bonhoeffer raises here applies in important ways to those of us who struggle against a rising technocracy or Technopoly, even if we don’t think those powers actually evil — certainly not evil in the ways the Nazis were. But well-intentioned people with great power can do great harm.

Suppose, then, that we do not want Technopoly to win, to gain widespread social dominance — but it wins anyway (or has already won). What then? Bonhoeffer:

In the face of such a situation we find that it cannot be adequately dealt with, either by theoretical dogmatic arm-chair criticism, which means a refusal to face the facts, or by opportunism, which means giving up the struggle and surrendering to success. We will not and must not be either outraged critics or opportunists, but must take our share of responsibility for the moulding of history in every situation and at every moment, whether we are the victors or the vanquished.

So the opportunism of the Borg Complex is ruled out, but so too is huffing and puffing and demanding that the kids get off my lawn. Bonhoeffer’s reasons for rejecting the latter course are interesting: he thinks denunciation-from-a-distance is a failure to “take our share of responsibility for the moulding of history.” The cultural conditions are not what we would have them be; nevertheless, they are what they are, and we may not excuse ourselves from our obligations to our neighbors by pointing out that we have fought and lost and now will go home and shut the door. We remain responsible to the public world even when that world is not at all what it would be if we had our way. We have work to do. (Cue “Superman’s Song”, please.)

Bonhoeffer presses his point:

One who will not allow any occurrence whatever to deprive him of his responsibility for the course of history — because he knows that it has been laid on him by God — will thereafter achieve a more fruitful relation to the events of history than that of barren criticism and equally barren opportunism. To talk of going down fighting like heroes in the face of certain defeat is not really heroic at all, but merely a refusal to face the future.

But why? Why may I not wash my hands of the whole mess?

The ultimate question for a responsible man to ask is not how he is to extricate himself heroically from the affair, but how the coming generation is to live. It is only from this question, with its responsibility towards history, that fruitful solutions can come, even if for the time being they are very humiliating. In short, it is much easier to see a thing through from the point of view of abstract principle than from that of concrete responsibility. The rising generation will always instinctively discern which of these we make the basis of our actions, for it is their own future that is at stake.

In short: it’s not about me. It’s not about you. It’s about how the coming generation is to live. To “wash my hands of the whole mess” is to wash my hands of them, to leave them to navigate the storms of history without assistance. And even if the assistance I can give is slight and weak, I owe them that.

In his brilliant new biography of Bonhoeffer, Charles Marsh points out that “After Ten Years,” though addressed immediately to family and friends, is more deeply addressed to the German social elite from which Bonhoeffer came. And, Marsh suggests, what Bonhoeffer is calling for here is the rise of an “aristocracy of conscience.” Now that, it seems to me, is an elite worthy of anyone’s aspiration.

It is with these obligations to the coming generation in mind, I think, that we are to consider how to respond to the powers that reign in our world. It may be the case that those powers turn out to be less wicked than the ones Bonhoeffer had to confront; there are worse things than Technopoly, and many millions of people in this world have to face them. But if we are spared those, then so much the better for us — and so much less convincing are any excuses we might want to make for inaction.

trigger warnings and trust

So, to continue my earlier post:

Last semester I taught a course called “Confession and Autobiography,” which covered some of the many types of self-writing from Augustine to … Well, where should you conclude a course on that topic? After considerable reflection, I decided that I would choose Alison Bechdel’s Fun Home. I knew that some of the subject matter of the book might be a bit challenging for some of my students — this is Texas, after all, and Baylor is a Christian school, drawing on a more socially and culturally conservative pool of students than many schools do — but Fun Home is a remarkable book, rich and complex and resistant to simplistic readings (not least those that tend to come from the cultural left). I also knew the students were juniors and seniors and would likely have the maturity to handle those challenges, as long as I gave them the proper context.

That last clause is key. If you want to be a good teacher, in any environment, you have to be willing to prepare your students for what you assign them. As I have commented before, the decision of what books to assign is morally fraught, and the more seriously you think at that stage the better prepared you’ll be when the time comes for reading and discussion. So, having thought and prayed when I was ordering books, I was ready to spend some time on the first day of class explaining why I wanted them to read Fun Home.

But here’s the thing: there’s only so much you can do in advance. You can offer some kind of abstract description of what’s in a book, but such descriptions are necessarily inadequate at best and at worst profoundly distorting. So I wasn’t altogether surprised when, as the time for discussing Fun Home drew closer, a couple of students expressed some anxiety about whether it was the kind of thing they wanted to read. (I might add that this was a course in Baylor’s Great Texts program, which students sign up for because they want to study the lastingly great, not the trend du jour.) And while I tried to reassure them, I knew that, in the end, the proof could only be in the pudding: it would only be after they had read the book and discussed it, under my leadership, in class that they could know whether the book was worthy of their time, and any discomfort it might cost them.

So really what I was saying to these students was: Please trust me. And even as I was saying that (though not exactly in those words) I was aware that Baylor students don’t know me. I had been at Wheaton College for 29 years, and therefore was a thoroughly known entity. Any first-year student there taking a course from me could talk to dozens of other students who had taken classes from me and could say — I hope! — “He’s a good guy, you can trust him.” But at Baylor I’m the new guy.

Now, as it turns out, there were three students in that class who had had a class from me last fall. And maybe — I don’t know — maybe they reassured the concerned students. All I know for sure is that I took half an hour out of one class meeting just to hear my students’ thoughts about reading the book, and got a lot of great feedback on the culture of the Great Texts program at Baylor. Then, when we actually got into Fun Home, we had some of our best discussions of the semester. The pieces of the puzzle, or so it seemed to me from the head of the table, fell beautifully into place. And I got two really outstanding term papers on Fun Home.

All of which — and here’s where I’m heading with both of these posts — shows how hopelessly misbegotten the whole idea of “trigger warnings” is. Even aside from the widespread failure, in discussions of this topic, to distinguish between (a) triggers experienced by people who have undergone severe trauma and (b) the discomfort experienced by anyone who’s encountering new and challenging ideas, there is a still deeper problem: a failure to realize that just as important as what you read is whom you read it with — the social and personal context in which you experience and discuss and reflect on a book.

A list of troublesome “topics” — basically, tagging books with simplistic descriptions — is an utter trivialization of all these matters. Any teachers who think that they have met their moral responsibilities to students by loading their syllabuses with such tags — and any institutions that find such tags adequate — have grossly misunderstood what education is. And that would be true even if such tags could adequately capture the ways in which a given theme (sexual violence, say) is treated in a given work of art, which they can’t.

If you trust your teacher and your fellow students, then you can risk intellectual encounters that might be more daunting if you were wholly on your own. That trust, when it exists, is grounded in the awareness that your teacher desires your flourishing, and that that teacher and your fellow students share at least some general ideas about what that flourishing consists in. Which is why, as I pointed out in my previous post and as Damon Linker has also just acknowledged, colleges and universities with distinctive religious commitments can be more open to many kinds of challenging ideas — including those from the past! — than their secular counterparts. Shared commitments build mutual trust, and there are few things more needful for those of us seeking knowledge and wisdom in academic communities.

the confidence of the elect

Right after I wrote my last post I came across an interestingly related one by Tim Parks:

No one is treated with more patronizing condescension than the unpublished author or, in general, the would-be artist. At best he is commiserated. At worst mocked. He has presumed to rise above others and failed. I still recall a conversation around my father’s deathbed when the visiting doctor asked him what his three children were doing. When he arrived at the last and said young Timothy was writing a novel and wanted to become a writer, the good lady, unaware that I was entering the room, told my father not to worry, I would soon change my mind and find something sensible to do. Many years later, the same woman shook my hand with genuine respect and congratulated me on my career. She had not read my books.

Why do we have this uncritical reverence for the published writer? Why does the simple fact of publication suddenly make a person, hitherto almost derided, now a proper object of our admiration, a repository of special and important knowledge about the human condition? And more interestingly, what effect does this shift from derision to reverence have on the author and his work, and on literary fiction in general?

But Parks’s key point is not just that people generally change their attitudes towards a writer once he or she gets published — the writer changes too:

I have often been astonished how rapidly and ruthlessly young novelists, or simply first novelists, will sever themselves from the community of frustrated aspirants. After years fearing oblivion, the published novelist now feels that success was inevitable, that at a very deep level he always knew he was one of the elect (something I remember V.S. Naipaul telling me at great length and with enviable conviction). Within weeks messages will appear on the websites of newly minted authors discouraging aspiring authors from sending their manuscripts. They now live in a different dimension. Time is precious. Another book is required, because there is no point in establishing a reputation if it is not fed and exploited. Sure of their calling now, they buckle down to it. All too soon they will become exactly what the public wants them to be: persons apart, producers of that special thing, literature; artists.

Notice that this is another major contributor to the problem of over-writing and premature expressiveness that I mentioned in my post: the felt need to sustain and consolidate an established reputation.

And then there’s the sense that most successful people have — and, again, need to have — that their success is not only deserved but inevitable. Immediately after reading this essay by Parks I read an interview with Philip Pullman in which he plays to the type that Parks identifies:

Yet on one thing, Pullman’s faith is profound and unshakeable. He’s now in his mid-60s, and though he thinks about death occasionally, it never wakes him up in a sweat at night. ‘I’m quite calm about life, about myself, my fate. Because I knew without doubt I’d be successful at what I was doing.’ I double-take at this, a little astounded, but he’s unwavering. ‘I had no doubt at all. I thought to myself, my talent is so great. There’s no choice but to reward it. If you measure your capacities, in a realistic sense, you know what you can do.’

Note the easy elision here between “knowing what you can do” and “knowing you’ll be recognized and rewarded for it.” If talent is so reliably rewarded, then I don’t have to consider the possibility that my neighbor is getting less than he deserves — or that I’m getting more.

These reflections aren’t just about other people. How I think they apply to me is something I want to get to in another post.

Carr on automation

If you haven’t done so, you should read Nick Carr’s new essay in the Atlantic on the costs of automation. I’ve been mulling it over and am not sure quite what I think.

After describing two air crashes that happened in large part because pilots accustomed to automated flying were unprepared to take proper control of their planes during emergencies, Carr comes to his key point:

The experience of airlines should give us pause. It reveals that automation, for all its benefits, can take a toll on the performance and talents of those who rely on it. The implications go well beyond safety. Because automation alters how we act, how we learn, and what we know, it has an ethical dimension. The choices we make, or fail to make, about which tasks we hand off to machines shape our lives and the place we make for ourselves in the world. That has always been true, but in recent years, as the locus of labor-saving technology has shifted from machinery to software, automation has become ever more pervasive, even as its workings have become more hidden from us. Seeking convenience, speed, and efficiency, we rush to off-load work to computers without reflecting on what we might be sacrificing as a result.

And late in the essay he writes,

In schools, the best instructional programs help students master a subject by encouraging attentiveness, demanding hard work, and reinforcing learned skills through repetition. Their design reflects the latest discoveries about how our brains store memories and weave them into conceptual knowledge and practical know-how. But most software applications don’t foster learning and engagement. In fact, they have the opposite effect. That’s because taking the steps necessary to promote the development and maintenance of expertise almost always entails a sacrifice of speed and productivity. Learning requires inefficiency. Businesses, which seek to maximize productivity and profit, would rarely accept such a trade-off. Individuals, too, almost always seek efficiency and convenience. We pick the program that lightens our load, not the one that makes us work harder and longer. Abstract concerns about the fate of human talent can’t compete with the allure of saving time and money.

Carr isn’t arguing here that the automating of tasks is always, or even usually, bad, but rather that the default assumption of engineers — and then, by extension, most of the rest of us — is that when we can automate we should automate, in order to eliminate that pesky thing called “human error.”

Carr’s argument for reclaiming a larger sphere of action for ourselves, for taking back some of the responsibilities we have offloaded to machines, seems to be twofold:

1) It’s safer. If we continue to teach people to do the work that we typically delegate to machines, and do what we can to keep those people in practice, then when the machines go wrong we’ll have a pretty reliable fail-safe mechanism: us.

2) It contributes to human flourishing. When we understand and can work within our physical environments, we have better lives. Especially in his account of Inuit communities that have abandoned traditional knowledge of their geographical surroundings in favor of GPS devices, Carr seems to be sketching out — he can’t do more in an essay of this length — an account of the deep value of “knowledge about reality” that Albert Borgmann develops at length in his great book Holding on to Reality.

But I could imagine people making some not-obviously-wrong counterarguments — for instance, that the best way to ensure safety, especially in potentially highly dangerous situations like air travel, is not to keep human beings in training but rather to improve our machines. Maybe the problem in that first anecdote Carr tells is setting up the software so that in certain kinds of situations responsibility is kicked back to human pilots; maybe machines are just better at flying planes than people are, and our focus should be on making them better still. It’s a matter of properly calculating risks and rewards.
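To see how sensitive that calculation is, here is a back-of-the-envelope sketch; every number in it is invented for the sake of illustration, and the two “policies” are my own hypotheticals, not anything Carr or his critics specify:

```python
# Two hypothetical policies for commercial aviation (all rates invented):
#   A: today's automation, with pilots kept rigorously in practice
#   B: much better automation, with pilot skills allowed to atrophy

def expected_crashes(flights, machine_fault_rate, human_rescue_rate):
    """Crashes = machine faults that the human fallback fails to catch."""
    return flights * machine_fault_rate * (1 - human_rescue_rate)

FLIGHTS = 10_000_000

a = expected_crashes(FLIGHTS, machine_fault_rate=1e-5, human_rescue_rate=0.90)
b = expected_crashes(FLIGHTS, machine_fault_rate=1e-6, human_rescue_rate=0.20)

print(f"Policy A: {a:.0f} expected crashes")  # 10
print(f"Policy B: {b:.0f} expected crashes")  # 8
```

Under those invented numbers the better machines win, narrowly; nudge either rate a little and the ranking flips. Which is to say that the counterargument stands or falls on empirical details, not on principle.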

Carr’s second point seems to me more compelling but also more complicated. Consider this: if the Inuit lose something when they use GPS instead of traditional and highly specific knowledge of their environment, what would I lose if I had a self-driving car take me to work instead of driving myself? I’ve just moved to Waco, Texas, and I’m still trying to figure out the best route to take to work each day. In trying out different routes, I’m learning a good bit about the town, which is nice — but what if I had a Google self-driving car and could just tell it the address and let it decide how to get there (perhaps varying its own route based on traffic information)? Would I learn less about my environment? Maybe I would learn more, if instead of answering email on the way to work I looked out the window and paid attention to the neighborhoods I pass through. (Of course, in that case I would learn still more by riding a bike or walking.) Or what if I spent the whole trip in contemplative prayer, and that helped me to be a better teacher and colleague in the day ahead? I would be pursuing a very different kind of flourishing than that which comes from knowing my physical environment, but I could make a pretty strong case for its value.

I guess what I’m saying is this: I don’t know how to evaluate the loss of “knowledge about reality” that comes from automation unless I also know what I am going to be doing with the freedom that automation grants me. This is the primary reason why I’m still mulling over Carr’s essay. In any case, it’s very much worth reading.

to live and die in the Anthropocene

I’m not quite sure what to do with this essay by Roy Scranton on learning to live — or rather, learning to die — in the Anthropocene. The heart of the essay may be found in its concluding paragraphs:

The biggest problem climate change poses isn’t how the Department of Defense should plan for resource wars, or how we should put up sea walls to protect Alphabet City, or when we should evacuate Hoboken. It won’t be addressed by buying a Prius, signing a treaty, or turning off the air-conditioning. The biggest problem we face is a philosophical one: understanding that this civilization is already dead. The sooner we confront this problem, and the sooner we realize there’s nothing we can do to save ourselves, the sooner we can get down to the hard work of adapting, with mortal humility, to our new reality.

The choice is a clear one. We can continue acting as if tomorrow will be just like yesterday, growing less and less prepared for each new disaster as it comes, and more and more desperately invested in a life we can’t sustain. Or we can learn to see each day as the death of what came before, freeing ourselves to deal with whatever problems the present offers without attachment or fear.

If we want to learn to live in the Anthropocene, we must first learn how to die.

Scranton, who is a doctoral candidate in English at Princeton, is here making an innovative argument for the value of the humanities: humanistic learning, or rather the deep reflection historically associated with it, is all the more necessary now as we are forced to grapple with the inevitability of cultural collapse. Much in me resonates with this argument, but I think the way Scranton develops it is deeply problematic.

A good deal of the essay links the coming civilizational collapse with Scranton’s own experiences as a soldier in Iraq. But it seems to me that that is precisely the problem: Scranton assumes that the death of a civilization is effectively the same as the death of a human being, that the two deaths can be readily and straightforwardly analogized. Just as I had to learn to die, so too must our culture. But no.

The problem lies in the necessarily loose, metaphorical character of the claim that a civilization is “dead.” Scranton writes,

Now, when I look into our future — into the Anthropocene — I see water rising up to wash out lower Manhattan. I see food riots, hurricanes, and climate refugees. I see 82nd Airborne soldiers shooting looters. I see grid failure, wrecked harbors, Fukushima waste, and plagues. I see Baghdad. I see the Rockaways. I see a strange, precarious world.

Well, maybe. But maybe the end of our civilization, even should it be as certain as Scranton believes, won’t look like this; maybe it will be a long slow economic and social decline in which massive violence is evaded but slow inexorable rot cannot be. (Scranton is rather too assured in the detail of his prophecies.) But in any case, whatever happens to our civilization will not be “death” in anything like the same sense that a soldier dies on the battlefield. When that soldier dies, his heart stops, his brain circuitry ceases to function, his story in this world is over. But even this catastrophically afflicted culture described by Scranton is still in some sense alive, still functioning, in however compromised a way. And this will be the case as long as human beings remain on the earth: they will have some kind of social order, which will always be in need of healing, restoration, growth in flourishing.

Which means, I think, that the absolutely necessary lessons in how to die that every one of us should learn — because our lives are really no more secure than a soldier’s, though for our peace of mind we pretend otherwise — are not really the ones needed in order to deal with the coming of the Anthropocene. Scranton’s dismissal of practical considerations involving the social and economic order in favor of philosophical reflection might even be a counsel of despair; he does seem, to me at least, to be saying that nothing in the material order can possibly be rescued so the only thing left to do is reconcile ourselves to death. I believe that anthropogenic global warming is happening, and I believe that its consequences for many people will be severe, but I do not accept that nothing meaningful can be done to mitigate those consequences. In short, I do not believe and do not think I am permitted to believe that our civilization is already dead.

But for me and for you, the necessity of facing death remains, and indeed is not any different now and for us than it was in the past or for any of our ancestors. For the individual facing death, the Anthropocene changes nothing. This was the point of C.S. Lewis’s great sermon “Learning in Wartime”:

What does war do to death? It certainly does not make it more frequent; 100 per cent of us die, and the percentage cannot be increased. It puts several deaths earlier; but I hardly suppose that that is what we fear. Certainly when the moment comes, it will make little difference how many years we have behind us. Does it increase our chance of a painful death? I doubt it. As far as I can find out, what we call natural death is usually preceded by suffering; and a battlefield is one of the very few places where one has a reasonable prospect of dying with no pain at all. Does it decrease our chances of dying at peace with God? I cannot believe it. If active service does not persuade a man to prepare for death, what conceivable concatenation of circumstance would? Yet war does do something to death. It forces us to remember it. The only reason why the cancer at sixty or the paralysis at seventy-five do not bother us is that we forget them.

Just as wars must sometimes be fought, so the consequences of the Anthropocene must be confronted. Or so I believe. But whether or not I’m right about that, I know this: Death is coming for us all. And if Montaigne is right that “to philosophize is to learn to die,” then the humanities, in so far as they help us to be genuinely philosophical, are no more relevant in the Anthropocene than they ever have been — nor any less so.

the view from the moral mountaintop

Even at my advanced age, I can still never quite predict what’s going to agitate me. But here’s something that has me rather worked up. In a reflection on Ender’s Game — a story about which I have no opinions — Laura Miller relates this anecdote:

There’s a short story by Tom Godwin, famous in science fiction circles, called “The Cold Equations.” It’s about the pilot of a spaceship carrying medicine to a remote planet. The ship has just enough fuel to arrive at that particular destination, where its cargo will save six lives. En route, the pilot discovers a stowaway, an adolescent girl, and knowing that her additional weight will make completing the trip impossible, the agonized man informs her that she will have to go out the airlock. There is no alternative solution. 

 This story was described to me by a science fiction writer long before I read it, and since it contains lines like “she was of Earth and had not realized that the laws of the space frontier must, of necessity, be as hard and relentless as the environment that gave them birth,” I can’t honestly call it a must. The writer was complaining about some of his colleagues and their notions of their genre’s strengths and weaknesses. “They always point to that story as an example of how science fiction forces people to ask themselves the sort of hard questions that mainstream fiction glosses over,” he said. “That’s what that story is supposed to be about, who would you save, tough moral choices.” He paused, and sighed. “But at a certain point I realized that’s not really what that story is about. It’s really about concocting a scenario where you get a free pass to toss a girl out an airlock.”

If you’d like, take a few moments now and read “The Cold Equations” for yourself. If you’ve done so, then tell me: what in the story constitutes evidence for the claim that Tom Godwin’s story is fundamentally “about concocting a scenario where you get a free pass to toss a girl out an airlock”? Is it the ending, maybe?

… the empty ship still lived for a little while with the presence of the girl who had not known about the forces that killed with neither hatred nor malice. It seemed, almost, that she still sat, small and bewildered and frightened, on the metal box beside him, her words echoing hauntingly clear in the void she had left behind her:

I didn’t do anything to die for… I didn’t do anything…

Does that sound like delight in the death of a child to you?

How casually Miller’s friend attributed to someone he did not know, and with no discernible evidence, sick and twisted fantasies of murdering female children. And how casually Miller relates it and, apparently, endorses it not only as a true description of Tom Godwin but also of (male?) science-fiction fandom in general:

The heart of any work of fiction, and especially of popular fiction, is a knot of dreams and desires that don’t always make sense or add up, which is what my friend meant when he said that “The Cold Equations” is really about the desire to toss a girl out an airlock (with the added bonus of getting to feel sorry for yourself afterward). That inconvenient girl, with her claim to the pilot’s compassion, can be jettisoned as satisfyingly as the messy, mundane emotions the story’s fans would like to see purged from science fiction.

Miller and her friend just look down from their moral heights on Tom Godwin and people who have been moved by his story, and dispense their eviscerating judgments with carefree assurance. I can’t even imagine what it’s like to live at that altitude. I hope I don’t ever find out.

the Fanny Price we’ll never see

I’m not especially excited about the Austen Project:

The Austen Project, with bestselling contemporary authors reworking “the divine Jane” for a modern audience, kicks off later this month with the publication of Joanna Trollope’s new Sense & Sensibility, in which Elinor is a sensible architecture student and impulsive Marianne dreams of art school.

Also promised are versions from Val McDermid (Northanger Abbey), Curtis Sittenfeld (Pride & Prejudice) and – gadzooks – the prolific Alexander McCall Smith, most famous for his Botswanan private eye novels, who has been let loose on Emma (an experience he describes as “like being asked to eat a box of delicious chocolates”).

Interestingly, but unsurprisingly, no one has signed up for what I believe to be Austen’s greatest novel, Mansfield Park. Why am I not surprised? Well, consider a passage from the best essay, by far, ever written about Mansfield Park, in which Tony Tanner writes,

Fanny Price exhibits few of the qualities we usually associate with the traditional hero or heroine. We expect them to have vigour and vitality; Fanny is weak and sickly. We look to them for a certain venturesomeness or audacity, a bravery, a resilience, even a recklessness; but Fanny is timid, silent, unassertive, shrinking and excessively vulnerable. Above all, perhaps, we expect heroes and heroines to be active, rising to opposition, resisting coercion, asserting their own energy; but Fanny is almost totally passive. Indeed, one of the strange aspects of this singular book is that, regarded externally, it is the story of a girl who triumphs by doing nothing. She sits, she waits, she endures; and, when she is finally promoted, through marriage, into an unexpectedly high social position, it seems to be a reward not so much for her vitality as for her extraordinary immobility. This is odd enough; yet there is another unusual and even less sympathetic aspect to this heroine. She is never, ever, wrong. Jane Austen, usually so ironic about her heroines, in this instance vindicates Fanny Price without qualification. We are used to seeing heroes and heroines confused, fallible, error-prone. But Fanny always thinks, feels, speaks and believes exactly as she ought. Every other character in the book, without exception, falls into error — some fall irredeemably. But not Fanny. She does not put a foot wrong. Indeed, she hardly risks any steps at all: as we shall see, there is an intimate and significant connection between her virtue and her immobility. The result of these unusual traits has been to make her a very unpopular heroine.

The pivotal event of the novel is a long scene in which various young people gathered at Mansfield Park, the Bertram country house, decide to stage a play that Fanny believes to be immoral and that she therefore quietly but firmly refuses to act in. When Sir Thomas Bertram unexpectedly returns from Antigua and finds them in the middle of rehearsals, his younger son Edmund meets with him and confesses the general impropriety. But he adds this: “‘We have all been more or less to blame,’ said he, ‘every one of us, excepting Fanny. Fanny is the only one who has judged rightly throughout; who has been consistent. Her feelings have been steadily against it from first to last. She never ceased to think of what was due to you. You will find Fanny everything you could wish.’”

There is absolutely no chance that any novelist now living will even attempt to portray a Fanny Price remotely like the character Austen created. It is impossible to imagine any moral idea more completely alien to the spirit of our time than the notion that someone can exhibit virtue by refraining from participating in the recreations that other people enjoy. Prig! Prude! Narrow-minded bigot!

If anyone ever does sign on to re-write Mansfield Park, one of two things will happen: either Fanny will become a completely different character, one not noted for her “immobility” and resistance to evils small and large, or she will have those traits and will therefore be explicitly portrayed as what Kingsley Amis said the original Fanny was, “a monster of complacency and pride.” There are no other foreseeable options.

tech intellectuals and the military-technological complex

I was looking forward to reading Henry Farrell’s essay on “tech intellectuals”, but after reading it I found myself wishing for a deeper treatment. Still, what’s there is a good start.

The “tech intellectual” is a curious newfangled creature. “Technology intellectuals work in an attention economy,” Farrell writes. “They succeed if they attract enough attention to themselves and their message that they can make a living from it.” This is the best part of Farrell’s essay:

To do well in this economy, you do not have to get tenure or become a contributing editor to The New Republic (although the latter probably doesn’t hurt). You just need, somehow, to get lots of people to pay attention to you. This attention can then be converted into more material currency. At the lower end, this will likely involve nothing more than invitations to interesting conferences and a little consulting money. In the middle reaches, people can get fellowships (often funded by technology companies), research funding, and book contracts. At the higher end, people can snag big book deals and extremely lucrative speaking engagements. These people can make a very good living from writing, public speaking, or some combination of the two. But most of these aspiring pundits are doing their best to scramble up the slope of the statistical distribution, jostling with one another as they fight to ascend, terrified they will slip and fall backwards into the abyss. The long tail is swarmed by multitudes, who have a tiny audience and still tinier chances of real financial reward.

This underlying economy of attention explains much that would otherwise be puzzling. For example, it is the evolutionary imperative that drives the ecology of technology culture conferences and public talks. These events often bring together people who are willing to talk for free and audiences who just might take an interest in them. Hopeful tech pundits compete, sometimes quite desperately, to speak at conferences like PopTech and TEDx even though they don’t get paid a penny for it. Aspirants begin on a modern version of the rubber-chicken circuit, road-testing their message and working their way up.

TED is the apex of this world. You don’t get money for a TED talk, but you can get plenty of attention—enough, in many cases, to launch yourself as a well-paid speaker ($5,000 per engagement and up) on the business conference circuit. While making your way up the hierarchy, you are encouraged to buff the rough patches from your presentation again and again, sanding it down to a beautifully polished surface, which all too often does no more than reflect your audience’s preconceptions back at them.

The last point seems exactly right to me. The big tech businesses have the money to pay those hefty speaking fees, and they are certainly not going to hand out that cash to someone who would like to knock the props right out from under their lucrative enterprise. Thus, while Evgeny Morozov is a notably harsh critic of many other tech intellectuals, his career is also just as dependent as theirs on the maintenance of the current techno-economic order — what, in light of recent revelations about the complicity of the big tech companies with the NSA, we should probably call the military-technological complex.
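Farrell’s image of a crowded long tail is easy enough to make concrete. Here is a minimal simulation; the Pareto distribution and its parameter are my own stand-in assumptions about how attention gets allocated, not data about actual pundits:

```python
# Simulate a winner-take-most attention economy with a power-law
# (Pareto) distribution. Alpha and population size are illustrative.
import random

random.seed(0)
ALPHA = 1.2  # the tail gets heavier as alpha approaches 1

# "Attention" earned by 10,000 hypothetical pundits, sorted best-first.
attention = sorted((random.paretovariate(ALPHA) for _ in range(10_000)),
                   reverse=True)

top_share = sum(attention[:100]) / sum(attention)
print(f"Top 1% of pundits capture {top_share:.0%} of all attention")
# The remaining 9,900 split what is left: a long tail swarmed
# by multitudes, much as Farrell describes.
```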

The only writer Farrell commends in his essay is Tom Slee, and Slee has been making these arguments for some time. In one recent essay, he points out that “the nature of Linux, which famously started as an amateur hobby project, has been changed by the private capital it attracted. . . . Once a challenger to capitalist modes of production, Linux is now an integral part of them.” In another, he notes that big social-media companies like Facebook want to pose as outsiders, as hackers in the old sense of the word, but in point of fact “capitalism has happily absorbed the romantic pose of the free software movement and sold it back to us as social networks.”

You don’t have to be a committed leftist, like Farrell or Slee, to see that the entanglement of the tech sector with both the biggest of big businesses and the powers of vast national governments is in at least some ways problematic, and to wish for a new generation of tech intellectuals capable of articulating those problems and pointing to possible alternative ways of going about our information-technology work. Given the dominant role the American university has long had in the care and feeding of intellectuals, should we look to university-based minds for help? Alas, they seem as attracted by tech-business dollars as anyone else, especially now that VCs are ready to throw money at MOOCs. Where, then, will the necessary voices of critique come from?