Overcoming Bias: Why Not?

In a recent New Atlantis essay, “In Defense of Prejudice, Sort of,” I criticized what I call the new rationalism:

Today there is an intellectual project on the rise that puts a novel spin on the old rationalist ideal. This project takes reason not as a goal but as a subject for study: It aims to examine human rationality empirically and mathematically. Bringing together the tools of economics, statistics, psychology, and cognitive science, it flies under many disciplinary banners: decision theory, moral psychology, behavioral economics, descriptive ethics. The main shared component across these fields is the study of many forms of “cognitive bias,” supposed flaws in our ability to reason. Many of the researchers engaged in this project — Daniel Kahneman, Jonathan Haidt, Joshua Greene, Dan Ariely, and Richard Thaler, to name a few — are also prominent popularizers of science and economics, with a bevy of bestselling books and a corner on the TED talk circuit.

While those scholars are some of the most prominent of the new rationalists, here on Futurisms it’s worth mentioning that many others are also spokesmen for transhumanism. These latter thinkers draw on the same cognitive science research but lean more on statistics and economics. More significantly, they drop the scientific pretense of mere description, claiming not only to study but unabashedly to perfect the practice of rationality.

Their projects have modest names like Overcoming Bias, Less Wrong, and the Center for Applied Rationality (CFAR, pronounced “see far” — get it?). CFAR grew out of the Machine Intelligence Research Institute, whose board has included many of the big guns of artificial intelligence and futurism. Among the project’s most prominent members are George Mason University economist and New York Times quote darling Robin Hanson, and self-described genius Eliezer Yudkowsky. With books, blogs, websites, conferences, meetup groups in various cities, $3,900 rationality training workshops, and powerful connections in digital society, they are increasingly considered gurus of rational uplift by Silicon Valley and its intellectual hangers-on.

A colleague of mine suggested that these figures bear a certain similarity to Mr. Spock, and this is fitting on a number of levels, from their goal of bringing all human action under the thumb of logic, to their faith in the relative straightforwardness of this goal — which is taken to be achievable not by disciplines working across many generations but by individual mentation — to the preening but otherwise eerily emotionless tone of their writing. So I’ll refer to them for shorthand as the Vulcans.

The Vulcans are but the latest members of an elaborately extended tradition of anti-traditionalist thought going back at least to the French Enlightenment. This inheritance includes revolutionary ambitions, now set far higher than those of most of their forebears, from the rational restructuring of society in the short term to the abolition of man in the only-slightly-less-short term. And at levels both social and individual, the reformist project is inseparable from the rationalist one: for example, Yudkowsky takes the imperative to have one’s body cryogenically preserved upon death to be virtually axiomatic. He notes that only a thousand or so people have signed up for this service, and comes to the only logical conclusion: this is the maximum number of reliably rational people in the world. One can infer that it will be an elect few deemed fit to command the remaking of the world, or even to understand, when the time arrives to usher in the glorious future, why it need happen at all.

The Vulcans also represent a purified version of the idea that rationality can be usefully studied as a thing in itself, and perfected more or less from scratch. Their writing has the revealing habit of talking about reason as if they are the first to discuss the idea. Take Less Wrong, for example, which rarely acknowledges the existence of any intellectual history prior to late-nineteenth-century mathematics except to signal disgust for the brutish Past, and advertises as a sort of manifesto its “Twelve Virtues of Rationality.”

Among those virtues, “relinquishment” takes spot number two (“That which can be destroyed by the truth should be”), “lightness” spot three (“Be faithless to your cause and betray it to a stronger enemy”), “argument” and “empiricism” are modestly granted spots five and six, and “scholarship” pulls up the rear at number eleven. What about the twelfth virtue? There isn’t one, for the other virtue transcends mere numbering, and “is nameless,” except that its name is “the Way.” Presented as the Path to Pure Reason, the Way is drawn, like much Vulcan writing, from Eastern mysticism, without comment or apology.

Burke vs. Spock

It’s wise not to overstate the influence of Vulcanism, which may well wind up in the dustbin of pseudoscience history, along with fads like the rather more defensible psychoanalysis. The movement is significant mainly for what it reveals. For at its core lie some ingredients of Enlightenment thought with enduring appeal, usefully evaporated of diluting elements, boiled down to a syrupy attitudinal essence covered with a thin argumentative crust. It contains a version of the parable of the Cave, revised to hold the promise of final, dramatic escape; an uneasy marriage of skepticism and self-confidence whose offspring is the aspiration to revolution.

In the book The Place of Prejudice, which I reviewed in the essay linked above, Adam Adatto Sandel notes rationalism’s reactionary counterpart, typically voiced through Edmund Burke, which accepts the conflict between reason and tradition but embraces the other side. Like Sandel, I see this stance as wrongheaded, a license to draw a line around some swath of the human world as forever beyond understanding, and to draw it arbitrarily — or worse, around just those things one sees as most in need of intellectual defense. But the conflict cannot be avoided: it remains, as an epistemological and practical matter, a duel over the reasons for our imperfect understanding and over the best guides for action in light of it.

Looking at the schemes of the Vulcans, it’s hard not to hear Burke’s point about the political advantages of cautious (philosophical) prejudice in contrast with the dangerous instability of Reason. The link between the aspirations of the French Enlightenment and the outrages of the French Revolution was not incidental, nor are the links of either to today’s hyper-rationalists.

A few years ago, I attended a conference at which James Hughes eagerly cited the Marquis de Condorcet’s Sketch for a Historical Picture of the Progress of the Human Spirit, which seems to prefigure transhumanism and depicts an imminent future in which reason has fully liberated us from the brutality of tradition. Hughes mentioned that this work was written while Condorcet was in hiding during the Terror, but skipped past the irony; as Charles Taylor writes of the Sketch, with a bit of understatement:

it adds to our awe before his unshaken revolutionary faith when we reflect that these crimes were no longer those of an ancien régime, but of the forces who themselves claimed to be building the radiant future.

Condorcet died in prison a few months later.

But it persists as stubbornly as any prejudice, this presumption of the simple cleansing power of reason, this eagerness to unmoor. Whether action might jump ahead of theory, or rationalism decay into rationalization, providing intellectual cover for baser forces — these are problems to which rationalists are exquisitely attuned when it comes to inherited ideas, but about which they show almost no worry when it comes to their own, inherited though their ideas are too. “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own,” counsels one of the Virtues of Rationality, the image rather more apt than it is meant to be.

The Problem with “Friendly” Artificial Intelligence

Readers of this blog may be familiar with the concept of “Friendly AI” — the project of making sure that artificial intelligences will do what we say without harming us (or, at the least, that they will not rise up and kill us all). In a recent issue of The New Atlantis, the authors of this blog have explored this idea at some length.

First, Charles T. Rubin, in his essay “Machine Morality and Human Responsibility,” uses Karel Čapek’s 1921 play R.U.R. — which introduced the word “robot” — to explore the different things people mean when they describe “Friendly AI,” and the conflicting motivations people have for wanting to create it. And he shows why the play actually evinces a much deeper understanding of the meaning and stakes of engineering morality than can be found in the work of today’s Friendly AI researchers:

By design, the moral machine is a safe slave, doing what we want to have done and would rather not do for ourselves. Mastery over slaves is notoriously bad for the moral character of the masters, but all the worse, one might think, when their mastery becomes increasingly nominal…. The robot rebellion in the play just makes obvious what would have been true about the hierarchy between men and robots even if the design for robots had worked out exactly as their creators had hoped. The possibility that we are developing our “new robot overlords” is a joke with an edge to it precisely to the extent that there is unease about the question of what will be left for humans to do as we make it possible for ourselves to do less and less.

Professor Rubin’s essay also probes and challenges the work of contemporary machine-morality writers Wendell Wallach and Colin Allen, as well as Eliezer Yudkowsky.

In “The Problem with ‘Friendly’ Artificial Intelligence,” a response to Professor Rubin’s essay, Adam Keiper and I further explore the motivations behind creating Friendly AI. We also delve into Mr. Yudkowsky’s specific proposal for how we are supposed to create Friendly AI, and we argue that a being that is sentient and autonomous but guaranteed to act “friendly” is a technical impossibility:

To state the problem in terms that Friendly AI researchers might concede, a utilitarian calculus is all well and good, but only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes. Yet it is precisely the debate over just what those valuations should be that is the stuff of moral inquiry. And this is even more the case when all of the possible outcomes in a situation are bad, or when several are good but cannot all be had at once. Simply picking certain outcomes — like pain, death, bodily alteration, and violation of personal environment — and asserting them as absolute moral wrongs does nothing to resolve the difficulty of ethical dilemmas in which they are pitted against each other (as, fully understood, they usually are). Friendly AI theorists seem to believe that they have found a way to bypass all of the difficult questions of philosophy and ethics, but in fact they have just closed their eyes to them.
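
The point about valuations can be made concrete. An expected-utility calculation mechanically returns whichever option scores best, but the score depends entirely on the value weights fed into it, and it is precisely those weights that moral inquiry disputes. Here is a minimal sketch, with every outcome, probability, and weight invented for illustration:

```python
# Toy expected-utility comparison: the "rational" choice flips with the
# (disputed) value weights. All numbers here are hypothetical.
outcomes = {
    "operate":    {"pain": 0.9, "death": 0.05},
    "do_nothing": {"pain": 0.2, "death": 0.30},
}

def expected_disutility(option: str, weights: dict) -> float:
    """Sum of (weight assigned to each harm) * (probability of that harm)."""
    return sum(weights[harm] * p for harm, p in outcomes[option].items())

committee_1 = {"pain": 1.0, "death": 10.0}  # weighs death far more heavily
committee_2 = {"pain": 5.0, "death": 2.0}   # weighs pain far more heavily

for weights in (committee_1, committee_2):
    best = min(outcomes, key=lambda o: expected_disutility(o, weights))
    print(best)
# Prints "operate" under committee_1 but "do_nothing" under committee_2:
# the calculus settles nothing until the weights themselves are settled.
```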

These are just short extracts from long essays with multi-pronged arguments — we might run longer excerpts here on Futurisms at some point, and as always, we welcome your feedback.

The Disinformation Campaign of Transhumanist “Caution”

In my last post on ironic transhumanist tech failures, there was one great example I forgot to mention. If you subscribe to the RSS feed for the IEET blog, you may have noticed that most of their posts go up on the feed multiple times: my best guess is that, due to careless coding in their system (or a bad design idea that was never corrected), a post goes up as new on the feed every time it’s even modified. For example, here’s what the feed’s list of posts from early March looks like:
Ouch — kind of embarrassing. Every project has technical difficulties, of course, but — well, here’s another example:
Question: can we develop and test machine minds and uploads ethically? One way to get at that question is to ask what it says about technical fallibility that such a prominent transhumanist advocacy organization has not yet figured out how to eliminate inadvertent duplicates from its RSS feed, and how the same kind of error might play out when, say, uploading a mind, where the technical challenges are rather more substantial and the consequences of accidentally creating copies rather more tricky.
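For the curious, here is a guess at the kind of bug involved; it is purely hypothetical, since I have no access to the IEET’s code. RSS readers typically use an item’s GUID to decide whether they have seen it before, so a feed generator that derives the GUID from a post’s last-modified time will republish every edited post as new:

```python
# Hypothetical sketch of the duplicate-feed bug: deriving an RSS item's
# GUID from its last-modified time makes every edit look like a new post.
from dataclasses import dataclass

@dataclass
class Post:
    slug: str
    modified: str  # timestamp of the most recent edit

def buggy_guid(post: Post) -> str:
    # GUID changes whenever the post is edited, so readers see a "new" item.
    return f"{post.slug}-{post.modified}"

def stable_guid(post: Post) -> str:
    # GUID depends only on the post's identity, so edits update in place.
    return post.slug

p = Post("mindclone-ethics", "2011-03-02T10:00")
print(buggy_guid(p))              # mindclone-ethics-2011-03-02T10:00
p.modified = "2011-03-03T09:15"   # the author fixes a typo
print(buggy_guid(p))              # new GUID: the feed shows a duplicate
print(stable_guid(p))             # unchanged: no duplicate
```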
Don’t get me wrong — we all know that the IEET is all about being Very Serious and Handling the Future Responsibly. I mean, look, they’re taking the proper precaution of thinking through the ethics of mind uploading long before that’s even possible! Let’s have a look at that post, by Martine Rothblatt:

Sometimes people complain that they “did not ask to be born.” Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical. The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee.

Probably the most famous example of the “complaint” Ms. Rothblatt alludes to comes from Kurt Vonnegut’s final novel, Timequake, in which he depicts Hitler uttering the words, “I never asked to be born in the first place,” before shooting himself in the head. It doesn’t seem that either fictional-Hitler’s or real-Vonnegut’s complaint was answered satisfactorily by their parents’ “implicit biological consent” to their existence. And somehow it’s hard to imagine that either man would have been satisfied if an ethics committee had rendered the judgment instead.
Could Vonnegut (through Hitler) be showing us something too dark to see by looking directly in its face? Might these be questions for which we are rightly unable to offer easy answers? Is it possible that those crutches of liberal bioethics, autonomy and consent, are woefully inadequate to bear the weight of such fundamental questions? (Might it be absurd, for example, to think that one can write a loophole to the “temporal illogicality” of consenting to one’s own existence by forming a committee?) Apparently not: Rothblatt concludes that “I think practically speaking the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution” and “Ultimately … the seeming catch-22 of how does a consciousness consent to its own creation can be solved.” Problem solved!
* * *
In a similar vein, responding to the shameless opportunism of my last post, in which I pointed out the pesky ways that technical reality undermines technological fantasy, Michael Anissimov commented:

In my writings, I always stress that technology fails, and that there are great risks ahead as a result of that. Only transhumanism calls attention to the riskiest technologies whose failure could even mean our extinction.

True enough. Of course, only transhumanism so gleefully advocates the technologies that could mean our extinction in the first place… but it’s cool: after his site got infested by malware for a few days, Anissimov got Very Serious, decided to Think About the Future Responsibly, and, in a post called “Security is Paramount,” figured things out:

For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex…. This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation…. Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn’t go too well for the weak — just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.

Hey, did you know nature is bad and people can be pretty bad too? Getting your blog knocked offline for a few days can inspire some pretty cosmic navel-gazing. (As for the last part, though, it shouldn’t be a worry: Hitler+ and Stalin+ will have had ethics committees consent to their existences, thereby solving all their existential issues.)
* * *
The funny thing about apocalyptic warnings like Anissimov’s is that they don’t do a whit to slow down transhumanists’ enthusiasm for new technologies. Notably, despite his Serious warnings, Anissimov doesn’t even consider the possibility that the whole project might be ill-conceived. In fact, despite implicitly setting himself outside and above them, Anissimov is really one of the transhumanists he describes in the same post, who “see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly.” This is because, for all his lofty rhetoric of caution, he is still fundamentally credulous when it comes to the promise of transformative new technologies.
Take geoengineering: in Anissimov’s first post on the subject, he cheered the idea of intentionally warming the globe for certain ostensible benefits. Shortly thereafter, he deleted the post “because of substantial uncertainty on the transaction costs and the possibility of catastrophic global warming through methane clathrate release.” It took someone pointing to a specific, known vector of possible disaster for him to reconsider; otherwise, only a few minutes’ thought given to what would be the most massive engineering project in human history was sufficient to declare it just dandy.
Of course, in real life, unlike in blogging, you can’t just delete your mistakes — say, releasing huge amounts of chemicals into the atmosphere that turn out to be harmful (as we’re learning today when it comes to carbon emissions). Nor did it occur to Anissimov that the one area on which he will readily admit concern about the potential downsides of future technologies — security — might also be an issue when it comes to granting the power to intentionally alter the earth’s climate to whoever has the means (whether they’re “friendly” or not).
* * *
One could go on at great length about the unanticipated consequences of transhumanism-friendly technologies, or the unseriousness of most pro-transhumanist ethical inquiries into those technologies. These points are obvious enough.
What is more difficult to see is that Michael Anissimov, Martine Rothblatt, and all of the other writers who proclaim themselves the “serious,” “responsible,” and “precautious” wing of the transhumanist party — including Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil, among others — in fact function as a sort of disinformation campaign on behalf of transhumanists. They toss out facile work that calls itself serious and responsible, capable of grasping and dealing with the challenges ahead, when it could hardly be any less so — but all that matters is that someone says they’re doing it.
Point out to transhumanists that they are as a rule uninterested in deeply and seriously engaging with the ramifications of the technologies they propose, or suggest that the whole project is more unfathomably reckless than any ever conceived, and they can say, “but look, we are thinking about it, we’re paying our dues to caution — don’t worry, we’ve got people on it!” And with their consciences salved, they can go comfortably back to salivating over the future.

Transhuman Ambitions and the Lesson of Global Warming

Anyone who believes in the science of man-made global warming must admit the important lesson it reveals: humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them. Let’s call this (just for kicks) the Malcolm Principle. Our knowledge is little but our power is great, and so we must wield it with caution. Much of the continued denial of a human cause for global warming — beyond the skepticism merited by science — is due to a refusal to accept the truth of this principle and the responsibility it entails.
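
The asymmetry is easy to demonstrate with a toy example (not, of course, a climate model). The logistic map is a one-line dynamical system: altering its state is trivial, but in its chaotic regime the alteration makes long-range prediction hopeless. A minimal sketch, with parameters chosen only for illustration:

```python
# The logistic map x' = r*x*(1-x), a one-line chaotic system: altering the
# state by one part in a billion is easy; predicting the consequences is not.
def iterate(x0: float, r: float = 3.9, steps: int = 50) -> float:
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

original  = iterate(0.400000000)
perturbed = iterate(0.400000001)  # a tiny, deliberate "alteration"
print(original, perturbed)        # after 50 steps the two trajectories
print(abs(original - perturbed))  # bear no resemblance to each other
```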


[Image: Lake Hamoun, 1976-2001, courtesy UNEP]

And yet a similar rejection of the Malcolm Principle is evident even among some of those who accept man’s role in causing global warming. This can be seen in the great overconfidence of climate scientists in their ability to understand and predict the climate. But it is far more evident in the emerging support for “geoengineering” — the notion that not only can we accurately predict the climate, but we can engineer it with sufficient control and precision to reverse warming.

It is unsurprising to find transhumanist support for geoengineering. Some advocates even support geoengineering to increase global warming — for instance, Tim Tyler advocates intentionally warming the planet to produce various allegedly beneficial effects. Here the hubris of rejecting the Malcolm Principle is taken to its logical conclusion: Once we start fiddling with the climate intentionally, why not subject it to the whims of whatever we now think might best suit our purposes? Call it transenvironmentalism.
In fact, pick any of the most complex systems you can think of that were not created from the start as engineering projects, and there is likely to be a similar transhumanist argument for turning them into ones. For example:
  • The climate, as noted, and thus implicitly also the environment, ecosystem, etc.
  • The animal kingdom, see e.g. our recent lengthy discussion on ending predation.
  • The human nutritional system, see e.g. Kurzweil.
  • The human body, a definitional tenet for transhumanists.
  • The human mind, similarly.
Transhumanist blogger Michael Anissimov (who earlier argued in favor of reengineering the animal kingdom) initially voiced support for intentional global warming, but later deleted the post. He defended his initial support with reference to Singularitarian Eliezer Yudkowsky’s “virtues of rationality,” particularly that of “lightness,” which Yudkowsky defines as: “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.” Yudkowsky’s list also acknowledges potential limits of rationality implicit in its virtues of “simplicity” and “humility”: “A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere,” and the humble are “Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.” Yet in addition to the “leaf in the wind” virtue, the list also contains “relinquishment”: “Do not flinch from experiences that might destroy your beliefs.”
Putting aside the Gödelian contradiction inherent even in “relinquishment” alone (if one should not hesitate to relinquish one’s beliefs, then one should also not hesitate to relinquish one’s belief in relinquishment), it doesn’t seem that one can coherently exercise all of these virtues at once. We live our lives interacting with systems too complex for us to ever fully comprehend, systems that have come into near-equilibrium as the result of thousands or billions of years of evolution. To take “lightness” and “relinquishment” as guides for action is not simply to be rationally open-minded; rather, it is to choose to reflexively reject the wisdom and stability inherent in that evolution, preferring instead the instability of Yudkowsky’s “leaf in the wind” and the brash belief that what we look at most eagerly now is all there is to see.
Imagine if, in accordance with “lightness” and “relinquishment,” we had undertaken a transhumanist project in the 19th century to reshape human heads based on the fad of phrenology, or a transenvironmentalist project in the 1970s to release massive amounts of carbon dioxide on the hypothesis of global cooling. Such proposals for systemic engineering would have been foolish not merely because of their basis in particular mistaken ideas, but because they would have proceeded on the pretense of comprehensively understanding systems they in fact could barely fathom. The gaps in our understanding mean that mistaken ideas are inevitable. But the inherent opacity of complex systems still eludes those who make similar proposals today: Anissimov, even in acknowledging the global-warming project’s irresponsibility, still cites but a single knowable mechanism of failure (“catastrophic global warming through methane clathrate release”), as if the essential impediment to the plan will be cleared as soon as some antidote to methane clathrate release is devised.
Other transhumanist evaluations of risk similarly focus on what transhumanism is best able to see — namely threats to existence and security, particularly those associated with its own potential creations — which is fine except that this doesn’t make everything else go away. There are numerous “catastrophic errors” wrought already by our failures to act with simplicity and humility — such as our failure to anticipate that technological change might have systemic consequences, as in the climate, environment, and ecosystem; and our tremendous and now clearly exaggerated confidence in rationalist powers exercised directly at the systemic level, as evident in the current financial crisis (see Paul Cella), in food and nutrition (see Michael Pollan and John Schwenkler), and in politics and culture (see Alasdair MacIntyre among many others), just for starters. But among transhumanists there is little serious contemplation of the implications of these errors for their project. (As usual, commenters, please provide me with any counterexamples.)
Perhaps Yudkowsky’s “virtues of rationality” are not themselves to be taken as guides to action. But transhumanism aspires to action — indeed, to revolution. To recognize the consequences of hubris and overreach is not to reject reason in favor of simpleminded tradition or arbitrary givenness, but rather to recognize that there might be purpose and perhaps even unspoken wisdom inherent in existing stable arrangements — and so to acknowledge the danger and instability inherent in the particular hyper-rationalist project to which transhumanists are committed.

The Crisis of Everyday Life

Over at The Speculist, Phil Bowermaster has fired a volley across our bow. His post contains a few misrepresentations of The New Atlantis and our contributors. However, we think our body of work speaks for itself, and so rather than focusing on Mr. Bowermaster’s sarcastic remarks, I’d like to comment on the larger substantive point in his post. In covering a talk at the Singularity Summit last weekend, I wrote the following:

[David] Rose says the FDA is regulating health, but he says “everyone in this room is going to hell in a handbasket, not because of one or two genetic diseases,” but because we’re getting uniformly worse through aging. And that, he says, is what they’re trying to stop. Scattered but vociferous applause and cheering. It’s that same phenomenon again — this weird rally attitude of yeah, you tell ’em! Who is it that they think they’re sticking it to? Or what?

Bowermaster responds, “Gosh, I can’t imagine,” and contends that my question arises from the fact that “the New Atlantis gang … ha[s] a difficult time even imagining that the positions they routinely take on issues — being manifestly and self-evidently correct — could be seriously opposed by anyone, much less in a vocal and enthusiastic way.” He adds that my question appeared to be one of “genuine puzzlement.”
In the haste of blogging in real time, I may have failed to make clear that my question wasn’t expressing “genuine puzzlement,” but was rhetorical. But now, with the leisure to spell out my concerns more fully, I’d like to expand on the point I was trying to make — and thereby to address Mr. Bowermaster’s post.
The combative rhetoric of transhumanists
I posed my question — Who is it that they think they’re sticking it to? — not just in response to the specific scene I had just described, but because of the pervasive rally-like attitude at the conference. That sense of sticking it to an unnamed opponent was part of the way many presenters spoke. Their statements — however technical, mundane, or uncontroversial — were often phrased as jabs instead of simple declarations. They spoke as in defiance — but of adversaries who were not named, not present, and may not have even existed. (The worst example of this was in the stage appearances by Eliezer Yudkowsky, as I noted here and here. Official videos of the conference are not yet available, but the point will quickly become evident in any video of his talks you can find online.)
This combative tendency demands examination because it is so typical of transhumanist rhetoric in general. To take just one egregious example, consider this excerpt from a piece in H+ Magazine entitled “The Meaning of Life Lies in Its Suckiness.” This piece is more sarcastic and vulgar than most transhumanist writings, but its combativeness and resentment are fairly representative:

[Bill] McKibben will put on his tombstone: “I’m dead. Nyah-nyah-nyah. Have a nice eternal enhanced life, transhumanist suckers.” Ray Kurzeill [sic] will be sitting there with his nanotechnologically enhanced penis and wikipedia brain feeling like a chump. Whose life has meaning now, bitches? That’s right, the dead guy.

The combativeness of transhumanist rhetoric might be more justifiable if it emerged chiefly in arguments with critics dubious of the transhumanist project to remake humanity (or to “save the world,” or whatever the preferred rendering). But their combativeness extends far beyond direct responses to their critics. It is rather a fundamental aspect of their stance toward the world.
Take, for instance, the discussion I was blogging about in the first place. A member of the audience asked whether the FDA should revisit its definition of health; the speaker’s rally-like attitude (and the audience’s corresponding response) could not have been directed at anybody in particular, for the FDA has nothing to do with what either the questioner or the speaker was talking about. Both the question and answer were detached from reality, but the speaker acted as if the FDA were really shafting the American people, and he nursed the audience’s sense of grievance at their perceived loss.
The fault, dear Brutus…
Against whom, then, is their grievance directed? Or — as I suggested in my initial post — against what is it directed? The ultimate target of the unhappy conferencegoers’ ire was not the FDA. Nor does the H+ Magazine author I quoted above have much of a case against Bill McKibben. Rather, the grievance of the transhumanists is against human nature and all of its limitations. As my co-blogger Charles T. Rubin wrote of prominent transhumanists Hans Moravec and Ray Kurzweil, they “share a deep resentment of the human body: both the ills of fragile and failing flesh, and the limitations inherent to bodily life, including the inability to fulfill our own bodily desires.”
Despite tremendous advances in our health, longevity, and prosperity, man’s given nature keeps us in bondage — and the sense of urgency in the effort to slip loose those bonds paradoxically grows as we gain ever greater means of doing so.
Transhumanism’s combative stance derives from this sense of constant urgency — what Yuval Levin has dubbed “the crisis of everyday life.” The main target of the combativeness, then, is man’s limited nature; the transhumanists are warring against what they themselves are. Any anger directed at critics like Bill McKibben or the FDA is rather incidental.
The transhumanists’ stance might become clearer — or at least more honest — if they acknowledged that their resentment is directed more at their own human nature than at any particular humans. But to do so might imperil their position. For they might realize — if the history they exemplify is any guide — that as their power grows, their resentment at the remaining limits will only deepen, and will increase their hunger for ever more power to chase those limits away.
If their power did allow them to vanquish the last of their limitations — if “man’s estate,” to borrow Francis Bacon’s phrase, were fully relieved — to what purposes would these posthumans then turn their power? What purpose would they find in their existence when the central reason they have now for living was at last fulfilled? Through what struggle would they flourish when their struggle against struggle itself was complete?

On persuasion and saving the world

The penultimate item on the agenda of the 2009 Singularity Summit is a panel discussion, on no particular topic, involving Aubrey de Grey, Eliezer Yudkowsky, and Peter Thiel. The moderator is Michael Vassar of the Singularity Institute. And it is in that order, from left to right, that the four men appear in this picture:
From left: Aubrey de Grey, Eliezer Yudkowsky, Peter Thiel, and Michael Vassar.
Vassar starts with a question about when each of the panelists realized they wanted to change the world. Thiel says he knew when he was young he wanted to be an entrepreneur, and once he found out about the Singularity, it was just natural to get on board with it and “save the world.”
Yudkowsky says, “Once I realized there was a problem, it never occurred to me not to save the world,” with a shrug and arms in the air. (Very scattered laughter and applause. The audience seems uncomfortable with him. I am, anyway. As I noted earlier, everything the guy says seems to drip with condescension, even in this room filled with people overwhelmingly on his side. He keeps having to invent straw men to put down as he talks.)
De Grey says he knows exactly when he realized he wanted to make a difference. It was when he was young and wanted to be a great pianist, but then realized that he’d spend all this time practicing — and then what? He’d just be another pianist, and there are tons of those. So he decided he wanted to change the world. Then later he discovered no one was looking at stopping aging and he was horrified, so he decided to do that.
The moderator asks what each man would be working on if not the Singularity. De Grey says other existential risks besides aging. Yudkowsky says studying human rationality. (If only he would. A Twitterer seems to share my sentiments.) But he says it’s not about doing what you’re good at or want to do, but what you need to do. Thiel would be studying competition. Competition can be extremely good, he says, but can go way too far, and crush people. He says it was better for him as a youth that computers got better than humans at chess, because he realized he shouldn’t be stressing himself so much over being a super-achieving chess player.
They get into talking about achievement a bit more later, and Thiel says he thinks it’s really important for people to have ways to persevere that aren’t necessarily about public success.
De Grey highlights the importance of “embarrassing people” to make them realize how wrong they are. We’re all aware of some of the things people say in defense of aging, he says. Thiel says his own personal bias is that that’s not a good approach, because there are so many different ways of looking at things, people have so many different cultural and value systems, and there may be deep-seated reasons they believe what they do. He says he likes to try hard to explain his points to people.
The rest of the discussion is not especially noteworthy. A bit of celebrity worship and ego stroking. Peter Thiel easily takes the cake for charm on this stage.

Rationalism, risk, and the purpose of politics

[Continuing coverage of the 2009 Singularity Summit in New York City.]
[Photo: Eliezer Yudkowsky at the 2009 Singularity Summit in New York City.]
Eliezer Yudkowsky, a founder of the Singularity Institute (organizer of this conference), is up next with his talk, “Cognitive Biases and Giant Risks.” (Abstract and bio.)
He starts off by talking about how stupid people are. Or, more specifically, how irrational they are. Yudkowsky runs through lots of common logical fallacies. He highlights the “Conjunction Fallacy,” in which people find a story more plausible when it includes more details, when in fact a story becomes less probable as it gains more details. I find this to be a ridiculous example. Plausible does not mean probable; people are just more willing to believe something happened when they are told that there are reasons it happened, because they understand that effects have causes. That’s very rational. (The Wikipedia explanation, linked above, offers a different account than Yudkowsky’s that makes a lot more sense.)
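For what it’s worth, the formal point Yudkowsky draws on is just the conjunction rule: for any events A and B, P(A and B) <= P(A), so adding a detail can never raise a story’s probability. A quick simulation makes this concrete; the events and numbers below are invented stand-ins for the classic “Linda problem”:

```python
# The conjunction rule behind the "conjunction fallacy": for any events
# A and B, P(A and B) <= P(A). Events and probabilities here are invented
# stand-ins for the classic "Linda problem."
import random

random.seed(0)
n = 100_000
count_a, count_ab = 0, 0
for _ in range(n):
    a = random.random() < 0.05  # A: "is a bank teller" (assumed P = 0.05)
    b = random.random() < 0.30  # B: "is a feminist"    (assumed P = 0.30)
    count_a += a
    count_ab += a and b

print(count_a / n)   # ~0.05  : estimate of P(A)
print(count_ab / n)  # ~0.015 : estimate of P(A and B), never above P(A)
```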
Yudkowsky is running through more and more of these examples. (Putting aside the content of his talk for a moment, he comes across as unnecessarily condescending. Something I’ve seen a bit of here — the “yeah, take that!” attitude — but he’s got it much more than anyone else.)
He’s bringing it back now to risk analysis. People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.
This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).
[One of Yudkowsky’s slides.]
Yudkowsky concludes by asking, why are we as a nation spending millions on football when we’re spending so little on all different sorts of existential threats? We are, he concludes, crazy.
That seems at first to be an important point: We don’t plan on a large scale nearly as well or as rationally as we might. But just off the top of my head, Yudkowsky’s approach raises three problems. First, we do not all agree on what the existential threats are; that is what politics and persuasion are for. There is no set of problems that everyone thinks we should spend money on, and scientists and technocrats cannot answer these questions for us, since they inherently involve values that are beyond the reach of mere rationality. Second, Yudkowsky’s depiction of humans, and of human society, as irrational and stupid is far too simplistic. And third, what’s so wrong with spending money on football? If we spent all our money on forestalling existential threats, we would lose sight of life itself, and of what we live for.
Thus ends his talk. The moderator notes that video of all the talks will be available online after the conference; we’ll post links when they’re up.