There Is No ‘Undo’ Button for the Singularity

To clear up the record, I’d like to note a recent post by Michael Anissimov in which he points out that his blog’s server is still infested with malware. The post concludes:

I don’t know jack about viruses or how they come about. I suppose The New Atlantis will next be using that as evidence that a Singularity will never happen. Oh wait — they already did.

[UPDATE: Mr. Anissimov edited the post several times without noting it, including removing this snarky comment and, apparently within the last hour or two, deleting the post entirely; see below.]

Mr. Anissimov is referring to two posts of mine, “Transhumanist Tech Failures” and “The Disinformation Campaign of Transhumanist ‘Caution’.” But even a passing glance at either of these posts will show that I never used this incident as evidence that the Singularity will never happen. Instead, it should be clear that I used it, rather opportunistically, to point out the embarrassing fact that the hacking of his site ironically reveals the deep foolhardiness of Mr. Anissimov’s aspirations. Shameless, I know.

It’s not of mere passing significance that Mr. Anissimov admits here that he “[doesn’t] know jack about viruses or how they come about”! You would think someone who is trying to make his name on being the “responsible” transhumanist, the one who stresses the need to make sure AI is “friendly” instead of “unfriendly,” would realize that, if ever there comes into existence such a thing as unfriendly AI — particularly AI intentionally designed to be malicious — computer viruses will have been its primordial ancestor, or at least its forerunner. You would also think he would be not just interested in but actually in possession of a deep and growing knowledge of the practical aspects of artificial intelligence and computer security, those subjects whose mastery is meant to be so vital to our future.

I know we Futurisms guys are supposedly Luddites, but (although I prefer to avoid trotting this out) I did in fact graduate from a reputable academic computer science program, and in it studied AI, computer security, and software verification. Anyone who properly understands even the basics of the technical side of these subjects would laugh at the notion of creating highly complex software that is guaranteed to behave in any particular way, particularly a way as sophisticated as being “friendly.” (For the technically inclined, a sketch of the basic reason why appears at the end of this post.) This is why we haven’t figured out how to definitively eradicate incomparably simpler problems, like ridding servers running simple blogs of malware.

The thing is, it’s perfectly fine for Mr. Anissimov or anyone else who is excited by technology not to really know how the technology works. The problem comes in their utter lack of humility — their total failure to recognize that, when one begins to tackle immensely complex “engineering problems” like the human mind, the human body, or the Earth’s biosphere, little errors and tweaks, gaps in your knowledge that you weren’t even aware of, can translate into chaos and catastrophe when your designs are actually applied. Reversing an ill-advised alteration to the atmosphere or the human body or anything else isn’t as easy as deleting content from a blog. It’s true that Mr. Anissimov regularly points out the need to act with caution, but that makes it all the more reprehensible that he seems so totally disinclined to do so.

—

Speaking of deleting content from a blog: there was for a while a comment on Mr. Anissimov’s post critical of his swipe at us, and supportive of our approach if not our ideas. But he deleted it (as well as another comment referring to it). He later deleted his own jab at our blog. And sometime in the last hour or two, he deleted the post entirely.
All of these changes were made without any note, as if he hopes his bad ideas can just slide down the memory hole.

We can only assume that he has seen the error of his ways, and now wants to elevate the debate and stick to fair characterizations of the things we are saying. That’s welcome news, if it’s true. But, to put it mildly, silent censorship is a fraught way to conduct debate. So, for the sake of posterity, we have preserved his post here exactly as it appeared before the changes and its eventual deletion. (You can verify this version for yourself in Yahoo’s cache until it updates.)

—

A final point of clarification: We here on Futurisms are actually divided on the question of whether the Singularity will happen. I think it’s fair to say that Adam finds many of the broad predictions of transhumanism basically implausible, while Charlie finds many of them, and I find a lot of them, at least theoretically possible in some form or another.

But one thing we all agree on is that the Singularity is not inevitable — that, in the words of the late computer science professor and artificial intelligence pioneer Joseph Weizenbaum, “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.”

Rather, the future is always a matter of human choices; and the point of this blog is that we think humans choosing to bring about the Singularity would be a pretty bad choice. Why? We’ve discussed that at some length, and we will go on doing so. But a central reason has to be practical: if we can’t keep malware off of a blog, how can we possibly expect to maintain the control we want when our minds, and every aspect of our society, are so subject to the illusion of technical mastery?

With that in mind, we have much, much more planned to say in the days, weeks, and months ahead, and we look forward to getting back to a schedule of more frequent posting now that we’re clearing a few major deadlines off our plates.
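A postscript for the technically inclined, with the sketch promised above. The underlying obstacle to “verified friendliness” is the classic result (Rice’s theorem) that no program can decide any nontrivial property of what an arbitrary program will do. Here is a minimal illustration in Python; every name in it is hypothetical, invented for the example:

# Suppose, for the sake of argument, that someone handed us a verifier
# that decides whether an arbitrary program is "friendly":

def is_friendly(program_source: str) -> bool:
    """Return True iff the given program can never behave unfriendly."""
    ...  # assume some marvelous implementation

# Then we could decide the halting problem, which is provably
# impossible. Wrap any machine so that "unfriendly" behavior occurs
# exactly when the machine halts, and ask the verifier about the wrapper:

def halts(machine_source: str, machine_input: str) -> bool:
    wrapper = (
        f"simulate({machine_source!r}, {machine_input!r})\n"  # runs forever if the machine never halts
        "act_unfriendly()\n"  # reached only if the simulation halts
    )
    return not is_friendly(wrapper)

# Since halts() cannot exist, is_friendly() cannot exist either, at
# least not for arbitrary programs.

None of this rules out useful safety engineering on narrowly restricted systems; it does mean that “guaranteed friendly” is not a coherent specification for complex, self-modifying software of the kind transhumanists envision.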

Revolution! — Within Reason

What a difference a day makes! On Tuesday, Michael Anissimov posted a plea to his readers to aid the Existential Risk Reduction Career Network — either by “[joining] an elite group of far-sighted individuals by contributing at least 5% of your income” or, “for those who wish to make their lives actually mean something,” by finding a job through the network. Who’d have thought you could make your life mean something by becoming an existentialist?

At any rate, he took something of a beating in the comments (“Harold Camping called, he wants his crazy back,” said one), but I think people might as well put their money where their mouths are. That’s how interest-group politics works in American liberal democracy; it’s part of the give and take of public debate and the way in which decisions get made. Why existential risk reduction would not include a healthy dose of criticism of transhumanism is another matter, but I was happy to see Mr. Anissimov being sensible about one of the routes by which the transhumanist cause will have to get ahead in the public arena.

Just shows how wrong a guy can be. On Wednesday, Mr. Anissimov published a brief critique of a rather thoughtful essay by Charles Stross, one of the great writers of Singularity-themed science fiction. Mr. Stross expresses some skepticism about the possibility of the Singularity, but Mr. Anissimov would have none of it, particularly when Mr. Stross dares to suggest that there might be reasons to heavily regulate AI research. Mr. Anissimov thunders:

We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

(Now I understand why Bond movie villains end up somewhere in mid-ocean.) He continues:

WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it. [Emphases in original.]

Take that, Charles Stross: just you try to stop us!! Mr. Anissimov makes the Singularity look a lot like Marx’s communism. We don’t know quite what it’s going to look like, but we know we have to get there. And we will do anything “within reason” to get there. Of course, what defines the parameters of “within reason” is the alleged necessity of reaching the goal; as the Communists found out, under this assumption “within reason” quickly comes to signify “by any means necessary.” Welcome to the logic of crusading totalitarianism.

The Disinformation Campaign of Transhumanist “Caution”

In my last post on ironic transhumanist tech failures, there was one great example I forgot to mention. If you subscribe to the RSS feed for the IEET blog, you may have noticed that most of their posts go up on the feed multiple times: my best guess is that, due to careless coding in their system (or a bad design idea that was never corrected), a post goes up as new on the feed every time it’s even modified. For example, here’s what the feed’s list of posts from early March looks like:
Ouch — kind of embarrassing. Every project has technical difficulties, of course, but — well, here’s another example:
Question: can we develop and test machine minds and uploads ethically? Well, one way to get at that question is to ask what it might say about technical fallibility when such a prominent transhumanist advocacy organization has not yet figured out how to eliminate inadvertent duplicates on its RSS feed, and then to ask how the same sort of error might play out when, say, uploading a mind, where the technical challenges are a bit more substantial and the consequences of accidentally creating copies a bit more tricky.
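For the curious, here is a guess at the kind of bug that produces such duplicates: a feed generator that derives each item’s GUID (the unique ID feed readers use to tell old items from new) from the post’s last-modified time, so that every edit mints what looks like a brand-new item. The IEET’s actual code isn’t public, so this Python sketch is purely hypothetical:

from datetime import datetime
from xml.sax.saxutils import escape

def rss_item(post: dict) -> str:
    # Buggy: the GUID changes whenever the post is edited, so feed
    # readers treat every modification as a brand-new item.
    guid = f"{post['url']}?v={post['modified'].isoformat()}"
    # The fix is one line: derive the GUID from something stable, e.g.
    # guid = post['url']
    return (f"<item><title>{escape(post['title'])}</title>"
            f"<link>{post['url']}</link>"
            f'<guid isPermaLink="false">{guid}</guid></item>')

# The same post, edited five minutes later, yields two distinct GUIDs:
post = {"url": "http://example.org/p/42", "title": "A Post",
        "modified": datetime(2010, 3, 1, 9, 0)}
print(rss_item(post))
post["modified"] = datetime(2010, 3, 1, 9, 5)  # a trivial edit
print(rss_item(post))  # different GUID, hence a "duplicate" in the feed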
Don’t get me wrong — we all know that the IEET is all about being Very Serious and Handling the Future Responsibly. I mean, look, they’re taking the proper precaution of thinking through the ethics of mind uploading long before that’s even possible! Let’s have a look at that post, by Martine Rothblatt:

Sometimes people complain that they “did not ask to be born.” Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical. The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee.

Probably the most famous example of the “complaint” Ms. Rothblatt alludes to comes from Kurt Vonnegut’s final novel, Timequake, in which he depicts Hitler uttering the words, “I never asked to be born in the first place,” before shooting himself in the head. It doesn’t seem that either fictional-Hitler’s or real-Vonnegut’s complaint was answered satisfactorily by their parents’ “implicit biological consent” to their existence. And somehow it’s hard to imagine that either man would have been satisfied if an ethics committee had rendered the judgment instead.
Could Vonnegut (through Hitler) be showing us something too dark to see by looking directly in its face? Might these be questions for which we are rightly unable to offer easy answers? Is it possible that those crutches of liberal bioethics, autonomy and consent, are woefully inadequate to bear the weight of such fundamental questions? (Might it be absurd, for example, to think that one can write a loophole to the “temporal illogicality” of consenting to one’s own existence by forming a committee?) Apparently not: Rothblatt concludes that “I think practically speaking the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution” and “Ultimately … the seeming catch-22 of how does a consciousness consent to its own creation can be solved.” Problem solved!
—
In a similar vein, in response to my shameless opportunism in my last post in pointing out the pesky ways that technical reality undermines technological fantasy, Michael Anissimov commented:

In my writings, I always stress that technology fails, and that there are great risks ahead as a result of that. Only transhumanism calls attention to the riskiest technologies whose failure could even mean our extinction.

True enough. Of course, only transhumanism so gleefully advocates the technologies that could mean our extinction in the first place… but it’s cool: after his site got infested by malware for a few days, Anissimov got Very Serious, decided to Think About the Future Responsibly, and, in a post called “Security is Paramount,” figured things out:

For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex…. This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation…. Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn’t go too well for the weak — just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.

Hey, did you know nature is bad and people can be pretty bad too? Getting your blog knocked offline for a few days can inspire some pretty cosmic navel-gazing. (As for the last part, though, there should be no worry: Hitler+ and Stalin+ will have had ethics committees consent to their existences, thereby resolving all their existential issues.)
—
The funny thing about apocalyptic warnings like Anissimov’s is that they don’t seem to do a whit to slow down transhumanists’ enthusiasm for new technologies. Notably, despite his Serious warnings, Anissimov doesn’t even consider the possibility that the whole project might be ill-conceived. In fact, despite implicitly setting himself outside and above them, Anissimov is really one of the transhumanists he describes in the same post, who “see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly.” This is because, for all his lofty rhetoric of caution, he is still fundamentally credulous when it comes to the promise of transformative new technologies.
Take geoengineering: in Anissimov’s first post on the subject, he cheered the idea of intentionally warming the globe for certain ostensible benefits. Shortly thereafter, he deleted the post “because of substantial uncertainty on the transaction costs and the possibility of catastrophic global warming through methane clathrate release.” It took someone pointing to a specific, known vector of possible disaster for him to reconsider; otherwise, only a few minutes’ thought given to what would be the most massive engineering project in human history was sufficient to declare it just dandy.
Of course, in real life, unlike in blogging, you can’t just delete your mistakes — say, releasing huge amounts of chemicals into the atmosphere that turn out to be harmful (as we’re learning today when it comes to carbon emissions). Nor did it occur to Anissimov that the one area on which he will readily admit concern about the potential downsides of future technologies — security — might also be an issue when it comes to granting the power to intentionally alter the earth’s climate to whoever has the means (whether they’re “friendly” or not).
—
One could go on at great length about the unanticipated consequences of transhumanism-friendly technologies, or the unseriousness of most pro-transhumanist ethical inquiries into those technologies. These points are obvious enough.
What is more difficult to see is that Michael Anissimov, Martine Rothblatt, and all of the other writers who proclaim themselves the “serious,” “responsible,” and “precautious” wing of the transhumanist party — including Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil, among others — in fact function as a sort of disinformation campaign on behalf of transhumanists. They toss out facile work that calls itself serious and responsible, capable of grasping and dealing with the challenges ahead, when it could hardly be any less so — but all that matters is that someone says they’re doing it.
Point out to a transhumanist that they are as a rule uninterested in deeply and seriously engaging with the ramifications of the technologies they propose, or suggest that the whole project is more unfathomably reckless than any ever conceived, and they can say, “but look, we are thinking about it, we’re paying our dues to caution — don’t worry, we’ve got people on it!” And with their consciences salved, they can go comfortably back to salivating over the future.

Transhumanist Tech Failures

Every organization experiences technical difficulties now and then; that’s just a fact of technology. But there is always a delicious irony when it happens to transhumanists, those starry-eyed prognosticators of unfathomable technical power and absolute technical mastery.
This week’s serving of ironic technical failure comes from one of our most reliable sources of easy material, Michael Anissimov. On Monday this ignominious post appeared on the RSS feed for his blog, Accelerating Future:
You can click the screenshot to enlarge it, but in case you can’t read it, it says “I understand that my server is infected with malware and has been flagged by Google, I’m currently in the process of backing everything up and reinstalling. It’s not that simple of a task so please be patient.” Rough times. I’d link to his site, but evidently the malware remains and it’s not safe to visit; if you visit it in Google Chrome, you’ll see this:
What would the technical term for that be — an infestation of unfriendly un-AI?
This is not an isolated incident, of course. Until recently, the RSS feed for H+ Magazine had some pretty impressive screw-ups on a regular basis. I happen to have taken some screenshots of my favorites (these are all real, and there are many more like these):
And probably the best (this one, I believe, is from the site itself):
(Sic on Phil B[r]owermaster.) One might wonder about the wisdom of entrusting Humanity+ to people who can’t seem to figure out HTML.
One last example. In a wrap-up post on the first day of the H+ Summit last summer, I noted:

The talks on the first day were plagued by various technical problems, particularly on Apple computers, that delayed the presentations. The organizers joke this off by noting that at least it’s not as bad as Steve Jobs’s recent embarrassment with Apple products not working at an Apple conference. Yeah, except Steve Jobs is only suggesting that we purchase his computers, not that we literally live in them.

I wanted to dig up some video clips to actually show you what I was talking about, but when I went to the streaming video feed for the conference and clicked “More Videos” to see if there was some sort of archive, this — no joke — is what I found:
[UPDATE: See the follow-up post here.]

Kitty minus kitty

In my last post, I noted the problems with Michael Anissimov’s attempt to defend “morphological freedom” as following from the civil rights movement. I described the way racism has been historically combated by appealing to what we have in common. This is an inherent problem with comparing “species-ism” to racism, because racism is combated precisely by appealing to our common humanity — that is, to our common species.
But it’s worth noting that a similar point holds when we look at an existing, non-hypothetical debate about interspecies rights and difference: the animal-rights debate. If we apply Mr. Anissimov’s “morphological freedom” argument to that debate, we again find it pretty lacking: Advocates of animal rights don’t argue that we should treat, say, a pig with respect or kindness because it “has a right to be a pig,” but rather because we should empathize with the way that, like us, a pig is intelligent (after a fashion) and has emotions and the capacity for suffering.
In fact, Mr. Anissimov, like many transhumanists, considers himself to be continuing the movement for animal rights in addition to civil rights. It’s all part of the ostensible transhumanist benevolence outreach, the grand quest to end suffering. But their formulation of this is to “reprogram” animals so as to end predation. Cats could go on being cat-like in some way, but we have an obligation to remake them so that they no longer hunt and kill. But have a look at this:
Where is the line here between the feline instincts to hunt and play? Is the hunting aspect of a cat something wholly separable from its nature, something that can be cleanly excised? Isn’t a cat minus its hunting instinct a cat minus a cat?
The suggestion of a project to end predation illustrates the transhumanist inclination to see living beings as simply a collection of components that have no logical dependencies on each other — as independent parts rather than wholes. But, more to the point, it makes the question of morphological freedom a pressing one for transhumanists themselves, who before undertaking such a project would quite seriously have to confront the question, “does a cat have a right to be a cat?”

Are humanists the new racists?

Our last post, simply a picture of a joyous Audrey Hepburn leaping in the air with the title “Does Anybody Seriously Think We Can Do Better than This?,” provoked a long comment thread. Michael Anissimov posted a comment (and then reposted it on his own blog with a short response to it):

Our evaluations of “goodness” are not objective truths, just subjective facts about the structure of our own minds. The opportunity to modify and enhance those minds will vastly increase the space of things we can understand and appreciate. This will allow us to create new forms of attractiveness and wonder that we lack the facilities to appreciate now.

Commenter Brendan Foht notes that Mr. Anissimov’s line of argument “is at the crux of the most radical aspects of transhumanism,” and neatly explains its contradiction:

When ‘goodness’ is made completely contingent on the structure of our historically/biologically conditioned minds, we make room for the possibility of new kinds of goodness, if we alter the historical or biological conditions that structure our minds…. [But] if our concepts of goodness are structured by our current situation, what reasons could we… have for choosing new kinds of goodness?

The paradox Foht points out is a fundamental one for transhumanists. They face the necessary task of destroying existing value systems, but they always seem to attempt this task by neutralizing values as such, declaring them arbitrary, contingent, publicly unsettleable, a matter of personal choice, etc. The problem is that the value systems they are attempting to set forth as higher alternatives are then necessarily also undercut. As I’ve noted before (here and here), if transhumanists succeed in removing the reasons we shouldn’t embrace some modification, they then leave us without any reasons why we should. In short, the inherent problem with arguing for relativism is that you can’t convince anyone it’s better.
Oh, yeah, he went there
Transhumanists are not truly relativists, however; they just have a warped value system, the deep incoherence of which often leads them to fall back on relativistic arguments in place of direct arguments for why their goods ought to replace normal human ones. If they were truly relativists, their writing would not betray the high-minded moral posturing that it does. Take this part of the same comment from Mr. Anissimov (as continued in his re-posting of it):
[E]ven though I’m [in] favor of morphological freedom (rather than the morphological fascism that I have to look and think a certain specific way, the way it’s been for over 200K years) [that] doesn’t mean that I discourage people from rejecting transhumanism entirely and living only among other humans…. Today, for instance, there are some people that only choose to live among their own race, for fear that race-mixing leads to irrevocable societal chaos. It is only natural to fear that species-mixing in a society could lead to problems, but I’ll bet that some combinations of species could lead to a harmonious equilibrium.
Yes, I went there…. Conservatives seem to often believe in the hypothesis that [the] more we’re alike, the better we can get along. Liberals argue that we can get along despite our diversity.
I guess that makes us “morphological fascists.” Which one of the Futurisms bloggers do you suppose is morphological Mussolini? Or is it that we’re the morphological equivalent of racists, and Mr. Anissimov is the morphological Martin Luther King?
It’s hard to know where to begin with this sort of uninformed and unserious argument, teeming with straw men. You might start by wondering who the people are that Mr. Anissimov claims fear living among other species — even though we have always lived among other species. (The problem, of course, is that transhumanists want to create new species of such higher intelligence than ours that they might relate to us in ways akin to how we relate to dogs, cows, or mosquitoes.) You might also wonder what his comment has to do with Charles Rubin’s original question of whether “we can do better” than that image of Audrey Hepburn, a picture that, Rubin says, shows us not “the peak of human history or existence” but “a peak of human experience.”
What Mr. Anissimov seems to be getting at is that to affirm the unsurpassable beauty of the Hepburn photo is to commit a sort of discrimination, or species-racism. He wants to turn Rubin’s question on its head: To say that we couldn’t do better is to say that a member of any other species would be less beautiful, which is the same as saying that a member of any other race would be less beautiful.
But the actual history of the fight against racism reveals a picture very different from the one he implies, in which people somehow came to appreciate that race is contingent and so learned not to fault each race for appreciating its own as best. On the contrary, Martin Luther King and others fought for equality by illustrating our commonality rather than our differences — by demonstrating that all races are equally human, possessed of equal human dignity, and so ought to be treated with equal respect. We can “get along despite our diversity” because we are human, in a way that, say, we should not expect humans and insects to get along despite their diversity.
Finally, I can’t resist noting Mr. Anissimov’s goofily self-congratulatory depiction of transhumanism as the sort of thing that kids will experiment with, perhaps when they go off to college: “I do, however, think that children should be able to do what they want with themselves after a certain age, and I doubt that Christian conservative parents will be able to stop their curious and neophilic children from embracing transhumanist technologies.” This brings to mind the following comic, which he would seem to have to think should be taken seriously (click to enlarge):

Is Transhumanism a Religion?

In late April, blogger Michael Anissimov claimed that we are all transhumanists now, in part because

At their base, the world’s two largest religions — Christianity and Islam — are transhumanistic. After all, they promise transcension from death and the concerns of the flesh, and being upgraded to that archetypical transhuman — the Angel. The angel will probably be our preliminary model as we seek to expand our capacities and enjoyment of the world using technological self-modification.

Just a few days ago, on the other hand, Mr. Anissimov observed that “When theists call the Singularity movement ‘religious,’ they are essentially saying, ‘Oh no, this scientifically‑informed philosophy is intruding on our traditional turf!’”

My point in juxtaposing these two passages is not only to suggest that it is not fully clear just who is intruding on whose turf, but also to suggest that the whole issue seems to be miscast. Back when I was writing about environmentalism I came across those who thought environmentalism was somehow a religion, and for that reason alone deeply problematic. That they would often speak of it as a “secular religion” already struck me as odd, not quite like talking about a “square circle” but close.

In response, I paraphrased a passage from T. S. Eliot (“Our literature is a substitute for religion, and so is our religion”) to suggest that if environmentalism is a substitute for religion, it is because our religion is already a substitute for religion. Something of the same idea applies to transhumanism in its various forms. If it looks like a religion, that is probably because so many have a pretty degraded conception of what “religion” — and here I speak of the Biblical religions — is all about. Let me suggest it is not fundamentally angels.

At root, Biblical religion is about there being a God who created the world, is active in the world, and has expectations about how people should behave in the world. At root, transhumanism is not about any of these things; so far as I can tell, for most transhumanists there is no God, and we are the only source of expectations about how we should live in the world. It is very hard for me to understand in what sense one of these belief systems can substitute for the other. True, both of them can be strongly held, and both of them can serve as a guide to life. You can even say both depend on faith, to the extent that a good deal of transhumanism depends on evidence of what is as yet unseen. So I suppose that if you define religion as “a strongly held guide to life that depends on faith,” then you can have a secular religion, and it could be transhumanism.

But that definition seems to me to miss the point — like saying that Coke can serve as a substitute for red wine because both of them are dark-colored and drinkable liquids. Whatever their similarities, transhumanism and religion simply do not play the same part in the moral economy of human life. They strive for different ends and as a result they admire different qualities. For example, transhumanism is all about pride, while Biblical religions point to humility. Eliot once again seems to have the clearer understanding of what is at stake:

Nothing in this world or the next is a substitute for anything else; and if you find you must do without something, such as religious faith or philosophic belief, then you must just do without it. I can persuade myself … that some of the things that I can hope to get are better worth having than some of the things I cannot get; or I may hope to alter myself so as to want different things; but I cannot persuade myself that it is the same desires that are satisfied, or that I have in effect the same thing under a different name.

Transhumanism has indeed decided that it can do without religious faith and philosophy, hitching its wagon to willful creativity and calling it science. In contrast, binding discipline is at the root of Biblical religion. You do not have to want that discipline or believe in it to see that transhumanism is not just offering us the same kind of thing under a different name.

Transhumanist Inevitability Watch

Transhumanists have a label — “the argument from incredulity” — for one kind of criticism of their visions and predictions: The instinctual but largely un-evidenced assertion that transhumanist claims are simply too fantastical and difficult to fathom and so must be false. While there’s plenty of reason, empirical and otherwise, to doubt transhumanist predictions, they’re certainly right to point out and criticize the prevalence of the argument from incredulity.
But there’s a transhumanist counterpart to the argument from incredulity: the argument from inevitability. This argument is prone to be just as un-evidenced, and at least as morally suspect. So I’d like to begin a new (hopefully regular) series on Futurisms: the Transhumanist Inevitability Watch.

Or are we?

Our first entry comes from transhumanist blogger Michael Anissimov:

It’s 2010, and transhumanism has already won. Billions of people around the world would love to upgrade their bodies, extend their youth, and amplify their powers of perception, thought, and action with the assistance of safe and tested technologies. The urge to be something more, to go beyond, is the norm rather than the exception…. Mainstream culture around the world has already embraced transhumanism and transhumanist ideals.

Well, then! Empirical evidence, maybe?

All we have to do is survive our embryonic stage, stay in control of our own destiny, and expand outwards in every direction at the speed of light. Ray Kurzweil makes this point in The Singularity is Near, a book that was #1 in the Science & Technology section on Amazon and [also appeared] on the NYT bestsellers list for a reason.

Ah. Well, if we’re going to use the bestseller lists as tea leaves, right now Sean Hannity’s Conservative Victory is at the top of the Times list, and Chelsea Handler’s Are You There, Vodka? It’s Me, Chelsea is #2. Does this mean conservatism and alcoholism have also already won?
Similarly, his other major piece of evidence is that it would be “hard for the world to give transhumanism a firmer endorsement” than making Avatar, a “movie about using a brain-computer interface to become what is essentially a transhuman being,” the highest-grossing film of all time. Okay, then surely the fact that the Pirates of the Caribbean and Harry Potter movies occupy five of the other top 10 spots means even firmer endorsements of pirates and wizards, no? And actually, Avatar only ranks 14th in inflation-adjusted dollars in the U.S. market, far behind the highest-grossing film, which, of course, is Gone with the Wind — unassailable evidence that sexy blue aliens aren’t nearly as “in” as corsets and the Confederacy, right?
Mr. Anissimov’s post at least contains his usual sobriety and caution about the potentially disastrous effects of transhumanism on safety and security. But he and other transhumanists would do well to heed the words of artificial intelligence pioneer Joseph Weizenbaum in his 1976 book Computer Power and Human Reason:

The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.

Keep Weizenbaum’s words in mind as we continue the Inevitability Watch. Humanity’s future is always a matter of human choice and responsibility.
UPDATE: Here’s another good example from Anissimov:

Transhumanist issues are obscenely mainstream nowadays, who even cares. We’re not even edgy anymore. The excitement is over. It’s time to start racing towards a safe intelligence explosion so we can end the Human-only Era once and for all. Let’s just get it over with.

“Transhumanists Have a Problem”

In a post that went up on his blog over the weekend, Michael Anissimov sketched out what he considers a potentially serious problem in transhumanist thinking, and he credits this blog, and particularly an important essay by Professor Rubin, with spurring his thinking.

There is much in Mr. Anissimov’s post that we disagree with. There is also a heap of, shall we say, odd reasoning. (To pick just one example, he finds it “unacceptable” that the human body cannot withstand “rifle bullets without severe tissue damage.” But of course bullets hurt us; that is what they are designed to do.) But all in all, we’re happy to help set Mr. Anissimov on the right path, and it is encouraging to see him concede that there are valid criticisms of transhumanism and that there are problems in transhumanist thinking. Here’s hoping that more of his ideological comrades follow his lead.

Anything is possible: The Singularitarian’s trump card

In response to the previous post here, asking what humanity might be like today if transhumanists had remade man in the 1950s, Michael Anissimov asks, “if we modified ourselves into this based on the ideology of the 50s, couldn’t we just then change it again if we didn’t like it?” This comment merits some attention because it exhibits one of the most common transhumanist tropes — a supposed discussion-ender.
Sure, one can claim that all such morphological decisions will eventually be completely reversible. One can claim that we will be able to change our forms just as easily as flipping a light switch. One can claim that people will be able to make choices without the slightest effect on other people, and that each generation can make choices that don’t impinge on the next.
But what reason is there to believe any of these things are possible? And even if they were possible, what are we to do in the meantime with a world in which they are not? And more to the point, why bother discussing futurism at all if we can supposedly do anything we want without any necessary consequences or limitations?
A defining feature of Singularitarianism is its basis in a fantasy world in which anything is possible (or at least, in which we have no way of knowing for sure what isn’t). This gives Singularitarians a way of wriggling out of any argument by saying that no matter what the potential problem, we’ll be able to find a way around it (or at least, we don’t know for sure that we won’t).
I’m not sure if this is an argument from eventual omniscience/omnipotence that is tantamount to an argument from present infallibility, or if it is just an argument from the impossibility of proving a universal negative. One way or another, this is something to the effect of: Hey, why not jump off this cliff? I can’t see the bottom, but it sure looks great, and if we see any problems we can course-correct in mid-air. Which doesn’t make for great conversation.