Overcoming Bias: Why Not?

In a recent New Atlantis essay, “In Defense of Prejudice, Sort of,” I criticized what I call the new rationalism:

Today there is an intellectual project on the rise that puts a novel spin on the old rationalist ideal. This project takes reason not as a goal but as a subject for study: It aims to examine human rationality empirically and mathematically. Bringing together the tools of economics, statistics, psychology, and cognitive science, it flies under many disciplinary banners: decision theory, moral psychology, behavioral economics, descriptive ethics. The main shared component across these fields is the study of many forms of “cognitive bias,” supposed flaws in our ability to reason. Many of the researchers engaged in this project — Daniel Kahneman, Jonathan Haidt, Joshua Greene, Dan Ariely, and Richard Thaler, to name a few — are also prominent popularizers of science and economics, with a bevy of bestselling books and a corner on the TED talk circuit.

While those scholars are some of the most prominent of the new rationalists, here on Futurisms it’s worth mentioning that many others are also spokesmen of transhumanism. These latter thinkers draw on the same cognitive science research but lean more on statistics and economics. More significantly, they drop the scientific pretense of mere description, claiming not only to study but unabashedly to perfect the practice of rationality.

Their projects have modest names like Overcoming Bias, Less Wrong, and the Center for Applied Rationality (CFAR, pronounced “see far” — get it?). CFAR is run by the Machine Intelligence Research Institute, whose board has included many of the big guns of artificial intelligence and futurism. Among the project’s most prominent members are George Mason University economist and New York Times quote darling Robin Hanson, and self-described genius Eliezer Yudkowsky. With books, blogs, websites, conferences, meetup groups in various cities, $3,900 rationality training workshops, and powerful connections in digital society, they are increasingly considered gurus of rational uplift by Silicon Valley and its intellectual hangers-on.

A colleague of mine suggested that these figures bear a certain similarity to Mr. Spock, and this is fitting on a number of levels, from their goal of bringing all human action under the thumb of logic, to their faith in the relative straightforwardness of this goal — which is taken to be achievable not by disciplines working across many generations but by individual mentation — to the preening but otherwise eerily emotionless tone of their writing. So I’ll refer to them for shorthand as the Vulcans.

The Vulcans are but the latest members of an elaborately extended tradition of anti-traditionalist thought going back at least to the French Enlightenment. This inheritance includes revolutionary ambitions, now pitched far higher than those of most of their forebears, from the rational restructuring of society in the short term to the abolition of man in the only-slightly-less-short term. And at levels both social and individual, the reformist project is inseparable from the rationalist one: for example, Yudkowsky takes the imperative to have one’s body cryogenically preserved upon death to be virtually axiomatic. He notes that only a thousand or so people have signed up for this service, and comes to the only logical conclusion: this is the maximum number of reliably rational people in the world. One can infer that it will be an elect few deemed fit to command the remaking of the world, or even to understand, when the time arrives to usher in the glorious future, why it need happen at all.

The Vulcans also represent a purified version of the idea that rationality can be usefully studied as a thing in itself, and perfected more or less from scratch. Their writing has the revealing habit of talking about reason as if they are the first to discuss the idea. Take Less Wrong, for example, which rarely acknowledges the existence of any intellectual history prior to late-nineteenth-century mathematics except to signal disgust for the brutish Past, and advertises as a sort of manifesto its “Twelve Virtues of Rationality.”

Among those virtues, “relinquishment” takes spot number two (“That which can be destroyed by the truth should be”), “lightness” spot three (“Be faithless to your cause and betray it to a stronger enemy”), “argument” and “empiricism” are modestly granted spots five and six, and “scholarship” pulls up the rear at number eleven. What about the twelfth virtue? There isn’t one, for the other virtue transcends mere numbering, and “is nameless,” except that its name is “the Way.” Presented as the Path to Pure Reason, the Way is drawn, like much Vulcan writing, from Eastern mysticism, without comment or apology.

Burke vs. Spock

It’s wise not to overstate the influence of Vulcanism, which may well wind up in the dustbin of pseudoscience history, along with fads like the rather more defensible psychoanalysis. The movement is significant mainly for what it reveals. For at its core lie some ingredients of Enlightenment thought with enduring appeal, usefully evaporated of diluting elements, boiled down to a syrupy attitudinal essence covered with a thin argumentative crust. It contains a version of the parable of the Cave, revised to hold the promise of final, dramatic escape; an uneasy marriage of skepticism and self-confidence whose offspring is the aspiration to revolution.

In the book The Place of Prejudice, which I reviewed in the essay linked above, Adam Adatto Sandel notes rationalism’s reactionary counterpart, typically voiced through Edmund Burke, which accepts the conflict between reason and tradition but embraces the other side. Like Sandel, I see this stance as wrongheaded, a license to draw a line around some swath of the human world as forever beyond understanding, and draw it arbitrarily — or worse, around just those things one sees as most in need of intellectual defense. But as an epistemological and practical matter, the conflict cannot be avoided: it is a duel over the reasons for our imperfect understanding, and over the best guides for action in light of it.

Looking at the schemes of the Vulcans, it’s hard not to hear Burke’s point about the politically cautious advantages of (philosophical) prejudice in contrast with the dangerous instability of Reason. The link between the aspirations of the French Enlightenment and the outrages of the French Revolution was not incidental, nor are the links of either to today’s hyper-rationalists.

A few years ago, I attended a conference at which James Hughes eagerly cited the Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit, which seems to prefigure transhumanism and depicts a near future in which reason has fully liberated us from the brutality of tradition. Hughes mentioned that this work was written when Condorcet was in hiding, but skipped past the irony: as Charles Taylor writes of the Sketch, with a bit of understatement:

it adds to our awe before his unshaken revolutionary faith when we reflect that these crimes were no longer those of an ancien régime, but of the forces who themselves claimed to be building the radiant future.

Condorcet died in prison a few months later.

But it persists as stubbornly as any prejudice, this presumption of the simple cleansing power of reason, this eagerness to unmoor. Whether action might jump ahead of theory, or rationalism decay into rationalization, providing intellectual cover for baser forces — these are problems to which rationalists are exquisitely attuned when it comes to inherited ideas, but show almost no worry when it comes to their own, inherited though their ideas are too. “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own,” counsels one of the Virtues of Rationality, the image well more apt than it’s meant to be.

Robin Hanson on Why We Should “Forget 9/11”

A few days ago, on the tenth anniversary of the September 11th terrorist attack, George Mason University economics professor Robin Hanson, who is influential among transhumanists, wrote a blog post arguing that we should “Forget 9/11.” Why? Well, partly because of cryonics:

In the decade since 9/11 over half a billion people have died worldwide. A great many choices could have delayed such deaths, including personal choices to smoke less or exercise more, and collective choices like allowing more immigration. And cryonics might have saved most of them. Yet, to show solidarity with these three thousand victims, we have pissed away three trillion dollars ($1 billion per victim), and trashed long-standing legal principles. And now we’ll waste a day remembering them, instead of thinking seriously about how to save billions of others. I would rather we just forgot 9/11. Do I sound insensitive? If so, good — 9/11 deaths were less than one part in a hundred thousand of deaths since then, and don’t deserve to be sensed much more than that fraction. If your feelings say otherwise, that just shows how full fricking far your mind has gone.

Hanson’s post may have been “flamebait” — but we should assume that he sincerely means what he has written, and read it as charitably as possible. His concern about matters of public health is admirable (although one wonders how much more public attention could be paid to the importance of exercising and not smoking, and whether paying attention to 9/11 was really a significant blow to those efforts). And many would agree that our government could have better allocated its money to save, lengthen, and improve lives (although one wonders when this is ever not the case, and what is the foolproof way to avoid misallocation).

Still, one has to marvel at Hanson’s insistence that there is no meaningful difference between the ways people die. He implies that all deaths are equally tragic — so there is no difference, apparently, between a peaceful death and a violent one, or between a death in old age and one greatly premature. In a weird version of “blaming the victim,” Hanson implies that many of the people who have died since 9/11 are to blame for their own deaths, because they could have made choices like exercising, not smoking, and undergoing cryonic preservation. But of course, people who are murdered never get the chance to make these choices, or to have them matter at all.

This is part of the larger point Hanson misses: One certainly can doubt the severity of the threat posed by terrorism, and the wisdom of the U.S. response to it. But the September 11th attack was animated by ideas, and Hanson willfully ignores the implications of those ideas: The lives he would have us forget were lost in an attack against the very liberal order that allows Hanson to share his ideas so freely. It’s hard to imagine transhumanist discourse flourishing under the theocratic tyranny of sharia law. And if the planners of that attack had their way, that liberal order would be extinguished, as would the lives of many who now live under it — which would certainly alter even the calculus admitted by Hanson’s myopic utilitarianism.

Thus the true backwardness of Hanson’s argument. While he may think he is making a trenchantly pro-humanist case about how insensitive and outrageous it is that we focus our emotions on some deaths much more than others, one wonders whether dulling our sensitivity to the deaths of the few can really be the best way to make us care about the deaths of the many. If we cannot feel outrage at what is shocking, can we still be moved by what is commonplace? If we do not mourn the loss of those who are close to us, how can we ever mourn the loss of those who are far?

History, 9/11 Relics, and “Technological Superstition”

Isn’t it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the architect put them together.

—Niels Bohr, to Werner Heisenberg, at Kronborg Castle

Kevin Kelly recently declared that most of the value we place on historical artifacts is a matter of mere “technological superstition.” Beginning with artifacts from the September 11 attack sites, and continuing to Ernest Hemingway’s typewriter, home-run baseballs, and the pen used to sign the Declaration of Independence, Kelly claims that we preserve, collect, and pay great sums for these objects because we believe they are akin to religious relics that confer supernatural or magical powers.

Now, I could see Kelly’s point if people were preserving 9/11 rubble because they thought that tossing it over one’s shoulder would ward off evil spirits, or were buying Hemingway’s typewriter because they thought rubbing one’s temples upon it would help one get a story into McSweeney’s. But as far as I know, no one believes, or is saying, any such thing. In fact, Kelly’s own argument suggests something rather different.

The main elements of Kelly’s argument seem to be: (1) The supposed “specialness” of an artifact does not reside in the artifact itself and cannot be measured by scientific instrumentation; it is thus superstitious. (2) An artifact’s supposed “specialness originates in the same way as an ancient relic — because someone says so.” This is why people who value artifacts are so interested in provenance — documentation or evidence to establish that the artifact actually has the historical connection it is supposed to. (3) There are only two legitimate, non-superstitious reasons to value particular historical objects: age and rarity. (Kelly makes parts of this last point in the comments section beneath his original post.)

Hemingway’s typewriter and binoculars, at his home in Ketchum, Idaho (US Plan B)

A variety of immediate problems arise. The idea that an artifact’s uniqueness cannot be measured empirically is simply not true in the examples Kelly has provided. His prime example is Hemingway’s typewriter, which is supposedly physically identical to every other typewriter of the same model. Except it isn’t. Hemingway owned it, so, for example, it presumably has bits of his skin cells and hair lodged in it. It is chemically unique: a forensic scientist needing to obtain Hemingway’s DNA might examine this typewriter, but would not examine any other instance of that model.

Kelly’s point (2) is trying desperately to eat its own tail — more on that in a moment. And on point (3), age is not a property that resides in an object (even if evidence of it sometimes does), and rarity most certainly does not reside in an object. If a home-run baseball becomes sufficiently old, or other baseballs of the same model are destroyed so that it becomes rare, why can we now value it? Nothing residing in the ball itself has changed.

Putting these problems aside for now, it seems that Kelly wants us to value objects only inasmuch as they yield information, in particular scientific information. Scientific theories are interested in universals and types, not particulars and instances. A lab rat is useful because we can manipulate it and perform tests upon it to verify or falsify theories. But the particular rat has no scientific value beyond its membership in a class. This is because science is especially interested in studying repeatable events — events whose existence is, paradoxically, not bound to a particular time or place. It would be superstitious to scientifically value any particular rat, because the future will always yield more rats.

The problem is that the reason people value historical artifacts is quite different from the reason they value objects that are useful for forming and validating scientific theories. In both cases, the central task (if not the ultimate goal) involves learning empirical facts about the world. But where scientific facts are repeatable, available for verification by anyone anywhere, a historical event happens only once, and then is gone. (The two qualities that Kelly concedes might make an artifact legitimately valuable — age and rarity — are in fact only valuable in a historical sense; their value seems scientific simply because it can be quantified.)

This is the rub of history: we can’t go back and see it again for ourselves, because it already happened. So we tell stories, and we remember. But we worry that we will forget; and we worry that the next generation will not believe us — or that they will believe, but not feel, because it didn’t exist for them as it did for us. Perhaps we worry that, after enough time, even things that happened to us, and people we knew, will begin to seem less real — because even for us they don’t exist now as they once did.

World Trade Center rubble (via Daily Mail)

And so we demand tangible, physical evidence that history actually happened. Ernest Hemingway is just a name on a book; the closest we can come to experiencing and verifying the real existence of the historical person is standing in his study, touching his typewriter. It becomes easy for those of us who were not living in New York or D.C. or Shanksville, and especially for the children too young to remember, to disbelieve the events of 9/11 on some level — to think it really was just a movie that played out on TV.

Left, the wedding ring worn by Bryan Jack, a passenger on the plane that crashed into the Pentagon. Right, his wife’s ring. (From a New York Times story on 9/11 relics.)

It is easier to believe and feel the weight of it when one sees the hole in the ground, or holds a piece of twisted metal.

Kelly notes in a comment that we may value a watch that belonged to our father or a necklace that belonged to our mother because it has some “intangible, spiritual, ineffable quality that would be absent in another unit.” But there is nothing ineffable about it: the watch belonged to our father, the necklace to our mother, while the others did not. These are hard, empirical facts — nothing superstitious or supernatural about them. And the objection that a historical fact does not reside in an object is backwards: the whole point is that it was the object that resided in history.

But the curious thing about artifacts is not just that they reside in events, but that they also reside outside of events, becoming altered by them but persisting beyond them. Artifacts are the precipitations of history. They form a bridge between the past and the present in a way that our own transience and finitude cannot. This is why we are interested in artifacts, and especially in their provenance: not because we value authority as proof of history, but just the opposite, so that we can step beyond taking other people’s word, and get as close as possible to personal knowledge of history — of events that happened and people that lived, but are forever gone.

The enduring is something which must be accounted for. One cannot simply shrug it off.

—Walker Percy

At Ground Zero in New York now stands the National September 11 Memorial, built around the footprints of the Twin Towers. If we are to take Kelly’s argument seriously, then the design, even the existence, of this memorial is a travesty, a voodoo incantation to nothing. Why does it preserve the footprints of the towers — the space around objects that do not exist, in which nothing now resides because they reside in nothing? Why, indeed, is the memorial located at Ground Zero — which is not especially old, and surely cannot, especially now that the memorial is built over it, yield much new empirical information? Why is it built where the events actually happened and not in some other part of Manhattan — or, for that matter, in Trenton or Boise or São Paulo? Why do we remember at all?

Beware what is afoot when someone comes crying that he has shined the brightest of lights on human affairs, and found that he cannot see in it something everyone else does. There is a good chance he has simply blinded himself.

The footprint of one of the World Trade Center buildings (Mary Altaffer/AP, via The New York Times)

There Is No ‘Undo’ Button for the Singularity

As a matter of clearing up the record, I’d like to note a recent post by Michael Anissimov in which he points out that his blog’s server is still infested with malware. The post concludes:

I don’t know jack about viruses or how they come about. I suppose The New Atlantis will next be using that as evidence that a Singularity will never happen. Oh wait — they already did.

[UPDATE: Mr. Anissimov edited the post several times without noting it, including removing this snarky comment, and apparently, within the last hour or two, deleting the post entirely; see below.]

Mr. Anissimov is referring to two posts of mine, “Transhumanist Tech Failures” and “The Disinformation Campaign of Transhumanist ‘Caution’.” But even a passing glance at either of these posts will show that I never used this incident as evidence that the Singularity will never happen. Instead, it should be clear that I used it, rather opportunistically, to point out the embarrassing fact that the hacking of his site ironically reveals the deep foolhardiness of Mr. Anissimov’s aspirations. Shameless, I know.

It’s not of mere passing significance that Mr. Anissimov admits here that he “[doesn’t] know jack about viruses or how they come about”! You would think someone who is trying to make his name on being the “responsible” transhumanist, the one who stresses the need to make sure AI is “friendly” instead of “unfriendly,” would realize that, if ever there comes into existence such a thing as unfriendly AI — particularly AI intentionally designed to be malicious — computer viruses will have been its primordial ancestor, or at least its forerunner. Also, you would think he would be not just interested in but actually in possession of a deep and growing knowledge of the practical aspects of artificial intelligence and computer security, those subjects whose mastery is meant to be so vital to our future.

I know we Futurisms guys are supposedly Luddites, but (although I prefer to avoid trotting this out) I did in fact graduate from a reputable academic computer science program, where I studied AI, computer security, and software verification. Anyone who properly understands even the basics of the technical side of these subjects would laugh at the notion of creating highly complex software that is guaranteed to behave in any particular way, much less a way as sophisticated as being “friendly.” This is why we haven’t figured out how to definitively eradicate incomparably simpler problems — like, for example, ridding malware from servers running simple blogs.

The thing is, it’s perfectly fine for Mr. Anissimov or anyone else who is excited by technology not to really know how the technology works. The problem comes in their utter lack of humility — their total failure to recognize that, when one begins to tackle immensely complex “engineering problems” like the human mind, the human body, or the Earth’s biosphere, little errors and tweaks, gaps in one’s knowledge that one wasn’t even aware of, can translate into chaos and catastrophe when they are actually applied. Reversing an ill-advised alteration to the atmosphere or the human body or anything else isn’t as easy as deleting content from a blog. It’s true that Mr. Anissimov regularly points out the need to act with caution, but that makes it all the more reprehensible that he seems so totally disinclined to actually so act.

—Speaking of deleting content from a blog: there was for a while a comment on Mr. Anissimov’s post critical of his swipe at us, and supportive of our approach if not our ideas. But he deleted it (as well as another comment referring to it). He later deleted his own jab at our blog. And sometime in the last hour or two, he deleted the post entirely. All of these changes were made without any note, as if he hopes his bad ideas can just slide down the memory hole.

We can only assume that he has seen the error of his ways, and now wants to elevate the debate and stick to fair characterizations of the things we are saying. That’s welcome news, if it’s true. But, to put it mildly, silent censorship is a fraught way to conduct debate. So, for the sake of posterity, we have preserved his post here exactly as it appeared before the changes and its eventual deletion. (You can verify this version for yourself in Yahoo’s cache until it updates.)

—A final point of clarification: We here on Futurisms are actually divided on the question of whether the Singularity will happen. I think it’s fair to say that Adam finds many of the broad predictions of transhumanism basically implausible, while Charlie finds many of them, and I find a lot of them, at least theoretically possible in some form or another.

But one thing we all agree on is that the Singularity is not inevitable — that, in the words of the late computer science professor and artificial intelligence pioneer Joseph Weizenbaum, “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.”

Rather, the future is always a matter of human choices; and the point of this blog is that we think the choice to bring about the Singularity would be a pretty bad one. Why? We’ve discussed that at some length, and we will go on doing so. But a central reason has to be practical: if we can’t keep malware off of a blog, how can we possibly expect to maintain the control we want when our minds, and every aspect of our society, are so subject to the illusion of technical mastery?

With that in mind, we have much, much more planned to say in the days, weeks, and months ahead, and we look forward to getting back to a schedule of more frequent posting now that we’re clearing a few major deadlines off our plates.

Revolution! — Within Reason

What a difference a day makes! On Tuesday, Michael Anissimov posted a plea to his readers to aid the Existential Risk Reduction Career Network — either by “[joining] an elite group of far-sighted individuals by contributing at least 5% of your income” or, “for those who wish to make their lives actually mean something,” by finding a job through the network. Who’d have thought you could make your life mean something by becoming an existentialist?

At any rate, he took something of a beating in the comments (“Harold Camping called, he wants his crazy back,” said one), but I think people might as well put their money where their mouths are. That’s how interest-group politics works in American liberal democracy; it’s part of the give and take of public debate and the way in which decisions get made. Why existential risk reduction would not include a healthy dose of criticism of transhumanism is another matter, but I was happy to see Mr. Anissimov being sensible about one of the routes by which the transhumanist cause is going to have to get ahead in the public arena.

Just shows how wrong a guy can be. On Wednesday, Mr. Anissimov published a brief critique of a rather thoughtful essay by Charles Stross, one of the great writers of Singularity-themed science fiction. Mr. Stross expresses some skepticism about the possibility of the Singularity, but Mr. Anissimov would have none of it, particularly when Mr. Stross dares to suggest that there might be reasons to heavily regulate AI research. Mr. Anissimov thunders:

We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

(Now I understand why Bond movie villains end up somewhere in mid-ocean.) He continues:

WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it. [Emphases in original.]

Take that, Charles Stross: just you try to stop us!! Mr. Anissimov makes the Singularity look a lot like Marx’s communism. We don’t know quite what it’s going to look like, but we know we have to get there. And we will do anything “within reason” to get there. Of course, what defines the parameters of “within reason” is the alleged necessity of reaching the goal; as the Communists found out, under this assumption “within reason” quickly comes to signify “by any means necessary.” Welcome to the logic of crusading totalitarianism.

The Transhuman and the Postmodern (A Further Response to James Hughes)

My previous post on transhumanism and morality elicited a response from James Hughes, whose recent series of essays was my prompt. I thank Prof. Hughes for his response, although it seems to me to confirm more than not the main point of my original post.

I’m confident that Prof. Hughes understands that what we are calling for the sake of shorthand “Enlightenment values” did not present themselves as “historically situated” but as simply true. Speaking schematically and as briefly as possible, it took Hegel (no unambiguous fan of the Enlightenment) to historicize them, but he did so in a way that preserved the possibility of truth. It took Nietzsche’s radical historicism in effect to turn Hegel against himself, and in so doing to replace truth with willful, creative overcoming. That opens the door to postmodernism.

It looks like it is almost axiomatic to Prof. Hughes that all “truths” are historically situated and culturally relative, so in that postmodern manner he is rejecting “Enlightenment values” on their own terms. Nietzsche, shall we say, has eaten that cake. But why then “privilege” “Enlightenment values” at all? Prof. Hughes wants to keep the cake around to the extent it is useful to pursue a grand transformational project (a necessary one, according to at least some of his transhumanist brothers and sisters). But why (assuming there is a choice) pursue transhumanism at all as a grand project, or why prefer one version over another? To this question Prof. Hughes’s axiom allows no rational answer (“Reason,” he writes, “is a good tool but … our values and moral codes are not grounded in Reason”) although the silence is covered up by libertarian professions, the superficiality of which Prof. Hughes understands full well.

What Agnes Heller calls “reflective postmodernism” describes a response to the dilemma Prof. Hughes is facing that to my mind is not without problems, but at least seems intellectually respectable. Armed with Nietzsche’s paradoxical truth that there is no truth, the reflective postmodernist is alive to irony, open to being wrong and playful in outlook. But above all, the reflective postmodernist is an observer of the world, having abandoned entirely the modern propensity to pursue the kind of grand, “necessary,” transformational projects that made the twentieth century so terrible. Absent such abnegation, I don’t see how the postmodern-style adherence to “Enlightenment values” Prof. Hughes recommends for transhumanism can be anything more than anti-Enlightenment will to power.

Transhuman Ambitions and the Lesson of Global Warming

Anyone who believes in the science of man-made global warming must admit the important lesson it reveals: humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them. Let’s call this (just for kicks) the Malcolm Principle. Our knowledge is little but our power is great, and so we must wield it with caution. Much of the continued denial of a human cause for global warming — beyond the skepticism merited by science — is due to a refusal to accept the truth of this principle and the responsibility it entails.

Lake Hamoun, 1976–2001 (courtesy UNEP)

And yet a similar rejection of the Malcolm Principle is evident even among some of those who accept man’s role in causing global warming. This can be seen in the great overconfidence of climate scientists in their ability to understand and predict the climate. But it is far more evident in the emerging support for “geoengineering” — the notion that not only can we accurately predict the climate, but we can engineer it with sufficient control and precision to reverse warming.

It is unsurprising to find transhumanist support for geoengineering. Some advocates even support geoengineering to increase global warming — for instance, Tim Tyler advocates intentionally warming the planet to produce various allegedly beneficial effects. Here the hubris of rejecting the Malcolm Principle is taken to its logical conclusion: Once we start fiddling with the climate intentionally, why not subject it to the whims of whatever we now think might best suit our purposes? Call it transenvironmentalism.
In fact, name any of the most complex systems you can think of that were not created from the start as engineering projects, and there is likely to be a similar transhumanist argument for making it one. For example:
  • The climate, as noted, and thus implicitly also the environment, ecosystem, etc.
  • The animal kingdom, see e.g. our recent lengthy discussion on ending predation.
  • The human nutritional system, see e.g. Kurzweil.
  • The human body, a definitional tenet for transhumanists.
  • The human mind, similarly.
Transhumanist blogger Michael Anissimov (who earlier argued in favor of reengineering the animal kingdom) initially voiced support for intentional global warming, but later deleted the post. He defended his initial support with reference to Singularitarian Eliezer Yudkowsky’s “virtues of rationality,” particularly that of “lightness,” which Yudkowsky defines as: “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.” Yudkowsky’s list also acknowledges potential limits of rationality implicit in its virtues of “simplicity” and “humility”: “A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere,” and the humble are “Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.” Yet in addition to the “leaf in the wind” virtue, the list also contains “relinquishment”: “Do not flinch from experiences that might destroy your beliefs.”
Putting aside the Gödelian contradiction inherent even in “relinquishment” alone (if one should not hesitate to relinquish one’s beliefs, then one should also not hesitate to relinquish one’s belief in relinquishment), it doesn’t seem that one can coherently exercise all of these virtues at once. We live our lives interacting with systems too complex for us to ever fully comprehend, systems that have come into near-equilibrium as the result of thousands or billions of years of evolution. To take “lightness” and “relinquishment” as guides for action is not simply to be rationally open-minded; rather, it is to choose to reflexively reject the wisdom and stability inherent in that evolution, preferring instead the instability of Yudkowsky’s “leaf in the wind” and the brash belief that what we look at most eagerly now is all there is to see.
Imagine if, in accordance with “lightness” and “relinquishment,” we had undertaken a transhumanist project in the 19th century to reshape human heads based on the fad of phrenology, or a transenvironmentalist project in the 1970s to release massive amounts of carbon dioxide on the hypothesis of global cooling. Such proposals for systemic engineering would have been foolish not merely because of their basis in particular mistaken ideas, but because they would have proceeded on the pretense of comprehensively understanding systems they in fact could barely fathom. The gaps in our understanding mean that mistaken ideas are inevitable. But the inherent opacity of complex systems still eludes those who make similar proposals today: Anissimov, even in acknowledging the global-warming project’s irresponsibility, still cites but a single knowable mechanism of failure (“catastrophic global warming through methane clathrate release”), as if the essential impediment to the plan will be cleared as soon as some antidote to methane clathrate release is devised.
Other transhumanist evaluations of risk similarly focus on what transhumanism is best able to see — namely threats to existence and security, particularly those associated with its own potential creations — which is fine except that this doesn’t make everything else go away. There are numerous “catastrophic errors” wrought already by our failures to act with simplicity and humility — such as our failure to anticipate that technological change might have systemic consequences, as in the climate, environment, and ecosystem; and our tremendous and now clearly exaggerated confidence in rationalist powers exercised directly at the systemic level, as evident in the current financial crisis (see Paul Cella), in food and nutrition (see Michael Pollan and John Schwenkler), and in politics and culture (see Alasdair MacIntyre among many others), just for starters. But among transhumanists there is little serious contemplation of the implications of these errors for their project. (As usual, commenters, please provide me with any counterexamples.)
Perhaps Yudkowsky’s “virtues of rationality” are not themselves to be taken as guides to action. But transhumanism aspires to action — indeed, to revolution. To recognize the consequences of hubris and overreach is not to reject reason in favor of simpleminded tradition or arbitrary givenness, but rather to recognize that there might be purpose and perhaps even unspoken wisdom inherent in existing stable arrangements — and so to acknowledge the danger and instability inherent in the particular hyper-rationalist project to which transhumanists are committed.

Rationalism, risk, and the purpose of politics

[Continuing coverage of the 2009 Singularity Summit in New York City.]
[Eliezer Yudkowsky at the 2009 Singularity Summit in New York City]
Eliezer Yudkowsky, a founder of the Singularity Institute (organizer of this conference), is up next with his talk, “Cognitive Biases and Giant Risks.” (Abstract and bio.)
He starts off by talking about how stupid people are. Or, more specifically, how irrational they are. Yudkowsky runs through lots of common logical fallacies. He highlights the “Conjunction Fallacy,” in which people find a story more plausible when it includes more details, when in fact a story becomes less probable as details are added. I find this to be a ridiculous example. Plausible does not mean probable; people are just more willing to believe that something happened when they are told there are reasons it happened, because they understand that effects have causes. That’s very rational. (The Wikipedia entry, linked above, offers an explanation different from Yudkowsky’s that makes a lot more sense.)
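The probability rule behind the conjunction fallacy is simply that a conjunction can be no more probable than either of its conjuncts, since P(A and B) = P(A) · P(B | A) ≤ P(A). A minimal sketch, with made-up illustrative numbers (not figures from the talk), using the structure of the classic “Linda problem”:

```python
# The arithmetic behind the conjunction fallacy: adding a detail (a
# conjunct) can never raise an outcome's probability, because
# P(A and B) = P(A) * P(B | A) <= P(A).
# The numbers below are hypothetical, chosen purely for illustration.

p_teller = 0.05                 # hypothetical P(Linda is a bank teller)
p_feminist_given_teller = 0.80  # hypothetical P(feminist | bank teller)

p_teller_and_feminist = p_teller * p_feminist_given_teller

# However likely the extra detail is, the more detailed story cannot
# be more probable than the bare one.
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)
```

This holds regardless of the numbers chosen: since P(B | A) is at most 1, the product can never exceed P(A).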
Yudkowsky is running through more and more of these examples. (Putting aside the content of his talk for a moment, he comes across as unnecessarily condescending. Something I’ve seen a bit of here — the “yeah, take that!” attitude — but he’s got it much more than anyone else.)
He’s bringing it back now to risk analysis. People are bad at judging which risks really matter, particularly risks that are long-term or not immediately frightening. Take stomach cancer versus homicide: people think the latter is a much bigger killer than it is.
This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).
[One of Yudkowsky’s slides.]
Yudkowsky concludes by asking why we as a nation spend millions on football while spending so little on existential threats of all sorts. We are, he concludes, crazy.
That seems at first to be an important point: we don’t plan on a large scale nearly as well or as rationally as we might. But just off the top of my head, Yudkowsky’s approach raises three problems. First, we do not all agree on what the existential threats are; that is what politics and persuasion are for. There is no set of problems that everyone thinks we should spend money on, and scientists and technocrats cannot answer these questions for us, since the questions inherently involve values beyond the reach of mere rationality. Second, Yudkowsky’s depiction of humans, and of human society, as irrational and stupid is far too simplistic. And third, what’s so wrong with spending money on football? If we spent all our money on forestalling existential threats, we would lose sight of life itself, and of what we live for.
Thus ends his talk. The moderator notes that video of all the talks will be available online after the conference; we’ll post links when they’re up.