Transhuman Ambitions and the Lesson of Global Warming

Anyone who believes in the science of man-made global warming must admit the important lesson it reveals: humans can easily alter complex systems not of their own conscious design but cannot easily predict or control them. Let’s call this (just for kicks) the Malcolm Principle. Our knowledge is little but our power is great, and so we must wield it with caution. Much of the continued denial of a human cause for global warming — beyond the skepticism merited by science — is due to a refusal to accept the truth of this principle and the responsibility it entails.


Lake Hamoun, 1976–2001; courtesy UNEP

And yet a similar rejection of the Malcolm Principle is evident even among some of those who accept man’s role in causing global warming. This can be seen in the great overconfidence of climate scientists in their ability to understand and predict the climate. But it is far more evident in the emerging support for “geoengineering” — the notion that not only can we accurately predict the climate, but we can engineer it with sufficient control and precision to reverse warming.

It is unsurprising to find transhumanist support for geoengineering. Some advocates even support geoengineering to increase global warming — for instance, Tim Tyler advocates intentionally warming the planet to produce various allegedly beneficial effects. Here the hubris of rejecting the Malcolm Principle is taken to its logical conclusion: Once we start fiddling with the climate intentionally, why not subject it to the whims of whatever we now think might best suit our purposes? Call it transenvironmentalism.
In fact, name any of the most complex systems you can think of that were not created from the start as engineering projects, and there is likely to be a similar transhumanist argument for making it one. For example:
  • The climate, as noted, and thus implicitly also the environment, ecosystem, etc.
  • The animal kingdom, see e.g. our recent lengthy discussion on ending predation.
  • The human nutritional system, see e.g. Kurzweil.
  • The human body, a definitional tenet for transhumanists.
  • The human mind, similarly.
Transhumanist blogger Michael Anissimov (who earlier argued in favor of reengineering the animal kingdom) initially voiced support for intentional global warming, but later deleted the post. He defended his initial support with reference to Singularitarian Eliezer Yudkowsky’s “virtues of rationality,” particularly that of “lightness,” which Yudkowsky defines as: “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.” Yudkowsky’s list also acknowledges potential limits of rationality implicit in its virtues of “simplicity” and “humility”: “A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere,” and the humble are “Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.” Yet in addition to the “leaf in the wind” virtue, the list also contains “relinquishment”: “Do not flinch from experiences that might destroy your beliefs.”
Putting aside the Gödelian contradiction inherent even in “relinquishment” alone (if one should not hesitate to relinquish one’s beliefs, then one should also not hesitate to relinquish one’s belief in relinquishment), it doesn’t seem that one can coherently exercise all of these virtues at once. We live our lives interacting with systems too complex for us to ever fully comprehend, systems that have come into near-equilibrium as the result of thousands or billions of years of evolution. To take “lightness” and “relinquishment” as guides for action is not simply to be rationally open-minded; rather, it is to choose to reflexively reject the wisdom and stability inherent in that evolution, preferring instead the instability of Yudkowsky’s “leaf in the wind” and the brash belief that what we look at most eagerly now is all there is to see.
Imagine if, in accordance with “lightness” and “relinquishment,” we had undertaken a transhumanist project in the 19th century to reshape human heads based on the fad of phrenology, or a transenvironmentalist project in the 1970s to release massive amounts of carbon dioxide on the hypothesis of global cooling. Such proposals for systemic engineering would have been foolish not merely because of their basis in particular mistaken ideas, but because they would have proceeded on the pretense of comprehensively understanding systems they in fact could barely fathom. The gaps in our understanding mean that mistaken ideas are inevitable. But the inherent opacity of complex systems still eludes those who make similar proposals today: Anissimov, even in acknowledging the global-warming project’s irresponsibility, still cites but a single knowable mechanism of failure (“catastrophic global warming through methane clathrate release”), as if the essential impediment to the plan will be cleared as soon as some antidote to methane clathrate release is devised.
Other transhumanist evaluations of risk similarly focus on what transhumanism is best able to see — namely threats to existence and security, particularly those associated with its own potential creations — which is fine except that this doesn’t make everything else go away. There are numerous “catastrophic errors” wrought already by our failures to act with simplicity and humility — such as our failure to anticipate that technological change might have systemic consequences, as in the climate, environment, and ecosystem; and our tremendous and now clearly exaggerated confidence in rationalist powers exercised directly at the systemic level, as evident in the current financial crisis (see Paul Cella), in food and nutrition (see Michael Pollan and John Schwenkler), and in politics and culture (see Alasdair MacIntyre among many others), just for starters. But among transhumanists there is little serious contemplation of the implications of these errors for their project. (As usual, commenters, please provide me with any counterexamples.)
Perhaps Yudkowsky’s “virtues of rationality” are not themselves to be taken as guides to action. But transhumanism aspires to action — indeed, to revolution. To recognize the consequences of hubris and overreach is not to reject reason in favor of simpleminded tradition or arbitrary givenness, but rather to recognize that there might be purpose and perhaps even unspoken wisdom inherent in existing stable arrangements — and so to acknowledge the danger and instability inherent in the particular hyper-rationalist project to which transhumanists are committed.

Bad Humbug, Good Humbug, and Bah Humbug

Blogger Michael Anissimov does not believe in Santa Claus, but he does believe in the possibility, indeed the moral necessity, of overcoming animal predation. To put it another way, he does not believe in telling fantasy stories to children if they will take those stories to be true, but he has no compunctions about telling them to adults with hopes that they will be true.

An obvious difference Mr. Anissimov might wish to point out is that adults are more likely than children to be able to distinguish fantasy from reality. He can (and does) submit his thoughts to their critical appraisal. While that difference does not justify what Mr. Anissimov regards as taking advantage of children by telling them convincing fantasies, it does suggest something about the difference between small children and adults. Small children cannot readily distinguish between fantasy and reality. In fact, there is a great deal of pleasure to be had in the failure to make that distinction. It could even be true that not making it is an important prelude to the subsequent ability to make it. Perhaps those who are fed from an early age on a steady diet of the prosaic will have more trouble distinguishing between the world as it is and as they might wish it to be. But here I speculate.

In any case, surely if one fed small children on a steady diet of stories like the one Mr. Anissimov tells about overcoming predation, they might come to believe such stories as uncritically as other children believe in Santa Claus. I can easily imagine their disappointment upon learning the truth about the immediate prospects of lions lying down with lambs. We’d have to be sure to explain to them very carefully and honestly that such a thing will only happen in a future, more or less distant, that they may or may not live to see — even if small children are not all that good at understanding long-term futures and mortality.

But in light of their sad little faces it would be a hard parent indeed who would not go on to assure them that a fellow named Aubrey de Grey is working very hard to make sure that they will live very long lives indeed so that maybe they will see an end to animal predation after all! But because “treating them as persons” (in Mr. Anissimov’s phrase) means never telling children stories about things that don’t exist without being very clear that these things don’t exist, it probably wouldn’t mean much to them if we pointed out that Mr. de Grey looks somewhat like an ectomorphic version of a certain jolly (and immortal) elf.

The Mainstreaming of Transhumanism

Congratulations to Nick Bostrom, Jamais Cascio, and Ray Kurzweil for being recognized as three of Foreign Policy magazine’s “Top 100 Global Thinkers.” Once upon a time this kind of notoriety might not have helped the reputation of an Oxford don in the Senior Common Room (do such places still exist?), but even were that still the case, it must be tremendously satisfying for Professor Bostrom qua movement builder to get such recognition. The mainstreaming of transhumanism, noted (albeit playfully) by Michael Anissimov, proceeds apace. Ray Kurzweil did not win The Economist’s Innovation Award for Computing and Telecommunications because of his transhumanist advocacy, but apparently nobody at The Economist thought that it would in any way embarrass them. He’s just another one of those global thinkers we admire so much.

Of course such news is also good for the critic. I first alluded to transhumanist anti-humanism in a book I published in 1994, so for some time now I’ve been dealing with the giggle and yuck factors that the transhumanist/extropian/Singularitarian visions of the future still provoke among the non-cognoscenti. Colleagues, friends, and family alike don’t quite get why anybody would be seriously interested in that. I’ve tried to explain why I think these kinds of arguments are only going to grow in importance, but now I have some evidence that they are in fact growing.

The Emperor Has No Clothes

Which leads me to Mr. Anissimov’s question about what it is that I’m hoping to achieve. My purpose (and here I only speak for myself) is not to predict, develop, or advocate the specific public policies that will be appropriate to our growing powers over ourselves. In American liberal democracy, the success or failure of such specific measures is highly contingent under the best of circumstances, and my firm belief that on the whole people are bad at anticipating the forces that mold the future means that I don’t think we are operating under the best of circumstances. So my intention is in some ways more modest and in some ways less. Futurisms is so congenial to me because I share its desire to create a debate that will call into question some of the things that transhumanists regard as obvious, or at least would like others to regard as obvious. I’ve made it reasonably clear that I think transhumanism raises many deep questions without itself going very deeply into them, however technical its internal discussions might sometimes get. That’s the modest part of my intention. The less modest part is a hope that exposing these flaws will contribute to creating a climate of opinion where the transhumanist future is not regarded as self-evidently desirable even if science and technology develop in such a way as to make it ever more plausible. So if and when it comes time to make policies, I want there to be skeptical and critical ideas available to counterbalance transhumanist advocacy.

In short, I’m happy to be among those who are pointing out that the emperor has no clothes, even if, to those who don’t follow such matters closely, I might look like the boy who cried wolf.

Looking for a Serious Debate

Over on his blog Accelerating Future, Michael Anissimov has a few criticisms of our blog. Or at least, a blog sharing our blog’s name; he gets so many things wrong that it seems almost as though he’s describing some other blog. And Mr. Anissimov’s comments beneath his own post range from ill-informed and ill-reasoned to ill-mannered and practically illiterate. They are beneath response — except to note that Mr. Anissimov should know better. But putting aside those comments and the elementary errors that were likely the result of his general carelessness in argument — like misattributing to Charlie something that I wrote — some of the broader strokes of Mr. Anissimov’s ignorant and crude post deserve notice.
First, Mr. Anissimov’s post is intellectually lazy. To label an argument “religious” or “irreligious” does not amount to a refutation. Nor can you refute an argument by claiming to expose the belief structures that undergird it.
Second, Mr. Anissimov’s post is intellectually dishonest. He approvingly quotes an article that claims that “all prominent anti-transhumanists — [Francis] Fukuyama, [Leon] Kass, [and Bill] McKibben — are religious.” But anyone who has read those three thinkers’ books and essays will know that they make only publicly accessible arguments that do not rely upon or even invoke religion. And more to the point, it is an indisputable matter of public fact that none of us here at Futurisms has made the arguments that Mr. Anissimov is imputing to us. None of us has ever argued that we object to transhumanism because “through suffering [we] will enter paradise after [we] are dead.” Not even close.
Once Mr. Anissimov has (falsely) established that those of us who disagree with him do so for religious reasons, he claims that we “want the same damn thing” that he wants. Except that while he wants to achieve immortality through science, his critics “think they can get it through magic.”
To the contrary, our arguments have in fact been humanistic and what you might call earthly — hardly magical thinking or appeals to paradise. The very distinction between humanists and transhumanists should make plain whose beliefs are grounded in earthly affairs and whose instead depend on appeals to fantasy. We are skeptical of transhumanist promises of paradise because their arguments are, by and large, based on faith and fantasy instead of reason and fact; because what they hope to deliver would likely be something quite other than paradise if it became reality; and because the promise of paradise can be used to justify things that ought not be tolerated.
It is too much to ask for Mr. Anissimov to be a charitable reader of our arguments, but if he wants to be taken seriously he should make an effort to seem capable of at least comprehending them. Until he does, it is a peculiar irony that a transhumanist would invoke religion in order to avoid engaging in a substantive debate with his critics.

Are Psychologists Humans Too?

Via Mind Hacks, psychologist Norbert Schwarz gives a revealing answer when asked what nagging things he still doesn’t understand about himself:

I don’t understand … why I’m still fooled by incidental feelings. Some 25 years ago Jerry Clore and I studied how gloomy weather makes one’s whole life look bad – unless one becomes aware of the weather and attributes one’s gloomy mood to the gloomy sky, which eliminates the influence. You’d think I learned that lesson and now know how to deal with gloomy skies. I don’t, they still get me.

Schwarz claims that the tendency he describes can be counteracted, even though his own experience suggests otherwise. It’s fascinating to hear him ask, in essence, “Why is it that my awareness of facts about human psychology does not automatically exempt me from those facts?” Or, in other words, “Why must I be bound to behave humanly simply because I am human?”
This attitude — not uncommon among behavioral scientists — is extended to its logical end in the tenets of transhumanism. Consider Michael Anissimov’s notion that “It is a physical fact about our brains that the connections between stimuli and pleasure/displeasure are arbitrary and exist mostly for evolutionary reasons…. [W]e will eventually modify them if we wish, because the mind is not magical, it’s ‘just’ a machine.”
If the psychological fact that gloomy weather makes for gloomy moods is meaningless and ought to be nonbinding, why not make it so that gloomy weather makes for cheery moods? Why, after all, shouldn’t we reprogram ourselves so that gloomy weather makes us feel like we’re eating ice cream, having sex, or riding across moonbeams on a unicorn fed by marshmallows? (One wonders how then a description of weather as “gloomy” could retain any communicable meaning — how, indeed, a word like “gloomy” could remain intelligible at all — but no matter, for of course these too are but disposable artifacts.)