Beware Responsible Discourse

I’m not sayin’, I’m just sayin’.

Another day, another cartoon supervillain proposal from the Oxford Uehiro “practical” “ethicists”: use biotech to lengthen criminals’ lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

…[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate…. Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?…

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world … or, perhaps, to exile in a computer simulated world.

….research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, “Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!” Here’s that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence … so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it’s because extending prisoners’ lives to punish them longer might be letting them off easier than putting them to death.


Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn’t — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

It’s important to assess the ethics *before* the technology is available (which is what we’re doing).

There’s a difference between considering the ethics of an idea and endorsing it.

… people sometimes have a hard time telling the difference between considering an idea and believing in it …

I don’t endorse those punishments, but it’s good to explore the ideas (before a politician does).

What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating.

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely “considering” and “exploring” and “debating” and “assessing” new punitive proposals. In response to my tweet about this…

…a colleague who doesn’t usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch’s, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It’s the same every time, from doping the populace to be more moral, to shrinking people so they’ll emit less carbon, to “after-birth abortion,” and so on: Imagine some of the most coercive and terrible things we could do with biotech; offer all the arguments for why we should and pretty much none for why we shouldn’t; make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place; and finally claim that you’re just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you’re just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or whether they’re under their own rhetorical spell. But let’s be frank about the work these discussions are really doing, how they’re aiming to shape the parameters of discourse and so of thought and so of action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they’re really after is focusing us so intently on this path that we forget we could still take another.

Un-Mainstreaming Human Enhancement

[Image: Chris Kim @ NYT.]

America’s Grey Lady, the New York Times, has long been willing to take transhumanist topics seriously, perhaps in some hope that she too will be somehow rejuvenated. Indeed, a recent piece by David Ewing Duncan on human enhancement has something of the aura of a second childhood about it, with its relatively breathless and uncritical account of the various promising technologies of enhancement in the works. There follows the stock paragraph noting with remarkable brevity the safety, distributional, political and “what it means to be human” issues these developments might create, before Duncan really gets to the core of the matter: “Still, the enhancements are coming, and they will be hard to resist. The real issue is what we do with them once they become irresistible.”

Here at Futurisms, we were not unaware that human enhancements may be hard to resist. Speaking only for myself, however, I can add that there are all kinds of things I find hard to resist. It was hard to resist the desire to stay in bed this morning, hard to resist the desire for dark chocolate last night. It is hard to resist the temptation not to grade student papers just yet, hard to resist the urge to make a joke. I’m sure I need not go on. We all face things that are hard to resist on a daily basis. It requires motivation and discipline to resist them, and sometimes we have it and sometimes we don’t. Mostly, however, we have it, at least where it counts most, or our lives together would be far more difficult than they already are.

By saying, in effect, that enhancements are coming and that the “real issue” is what to do about them once “they become irresistible,” Duncan is really saying he sees no reason to resist what is hard to resist, no reason to think that the question of human enhancement might be linked to self-control in any sense other than willful self-creation. That is a pretty strong form of technological determinism. Under the posited circumstances, of course enhancements will become irresistible, because we will have made no effort, moral or intellectual, to resist them. But should that situation arise, how will it be possible to decide “what we do with them”? If the underlying principle is “resist not enhancements,” then the only answer to the question “what do we do with them” can be “whatever any of us wants to do with them.” Under these circumstances, even Duncan’s anodyne concerns about issues of safety, distribution, politics and “what it means to be human” will go out the window. After all, it is my body, my life, my money, my choice, my will, my desire, that will be the important things.

Duncan reports that when he asks parents whether they would give their children a memory-boosting drug if everybody else were doing it, most reply yes. But that is hardly interesting; if most people are doing anything, it will be hard for a few to say no. What is more noteworthy is where he begins his questioning:

I have asked thousands of people a hypothetical question that goes like this: “If I could offer you a pill that allowed your child to increase his or her memory by 25 percent, would you give it to them?” The show of hands in this informal poll has been overwhelming, with 80 percent or more voting no.

That is to say, most people he has asked at least say they think they would resist the temptation to give their child such a pill. If these healthy inclinations can be supported by social consensus buttressed by a variety of good reasons, perhaps enhancement will not be so hard to resist after all.

The Varieties of Transhumanist Experience

My last post, “Seven Scenarios for the Decline of Transhumanism,” prompted a number of comments. One in particular seems to capture the spirit of the criticism running through the others, and so merits a response. Commenter gwern notes:

It doesn’t need to win on every possible front against every possible enemy. The overall trend is what matters.

The question is, what “it” are we talking about? If nothing else, the comments on this post, and on this blog generally, suggest something that people both inside and outside the transhumanist movement have long been aware of: it would be more accurate to speak of “transhumanisms” rather than “transhumanism,” at least to the extent that the latter implies a degree of unity that does not in fact exist. Plainly one can consider oneself a transhumanist and readily disavow what somebody else considers transhumanism. I am not, for the moment, attempting any criticism of this sectarianism; but it does mean that it is hard to discern an “overall trend,” because the most significant trendline for one transhumanist may be pointless to another. Hence my effort at disaggregation: my aim was to highlight the technologies that self-identified transhumanists typically use to suggest how the seeds for their desired future are already being sown in the present. Will it make no difference if these technologies don’t take off?

But perhaps I am making things too difficult. Perhaps one can just say that transhumanism is all about using our seemingly ever-increasing powers over nature to take control of human evolution — using our intelligence to build a better human being or to transcend humanity altogether. For the sake of unity, we will try to avoid defining “better,” and let each decide for himself (although, as James Hughes has acknowledged, not all transhumanists are this libertarian). At any rate, if we operate at this level of admittedly problematic generality, then what is the “overall trend”? Looked at in this way, the tide does seem to be coming in for transhumanism: we do indeed seem to have ever-increasing power over nature. So much for those cranky “bioconservatives”?

Not exactly. For, at the moment, anyway, when it comes to building a better human being or transcending humanity altogether, there is no trend strictly speaking, because nobody actually knows how to do it. It is a narrowly held dream, an aspiration, a hope, a wish — not a trend. And even if transhumanist dreams or aspirations are held by increasing numbers of people, the mere aggregation of dreams is not sufficient for turning them into realities. Of course, various people have various thoughts about how the dream might be turned into a reality, but these remain but big ideas. A day may come when one of those big ideas bears fruit, and the time of men will begin to pass away. But this is not that day.

I acknowledged at the start of my previous post that transhumanism may in some sense never disappear. But that does not mean it has to grow. So that is why, despite gwern’s rolling eyes, I do not regret having highlighted some small things that could be indicative of the normative aspects of society and culture that might serve to undermine the salience of transhumanism. Sometimes all it takes to wake the dreamer is a gnat in the ear.

Seven Scenarios for the Decline of Transhumanism

Many of the things that transhumanism aspires to, like greatly extended life or special abilities, are not really new; expressing dissatisfaction with the human condition by rejecting some of its limits seems to be a perennial human possibility. So it is possible that something like transhumanism at least will never die, so long as there are people in the world who can imagine things being different from what they are. However, in its current manifestation it may be subject to just the sort of decline into quaint obscurity that has been the fate of previous versions of its ideas. So, in the helpful spirit of Kyle Munkittrick’s “When Will We Be Transhuman? Seven Conditions for Attaining Transhumanism,” I would like to present seven scenarios that would conduce to its growing irrelevance.

1. Recent concerns about too-skinny models, increasing interest in exposing Photoshopped versions of already-beautiful people, and of course the constant use of celebrity plastic surgery as a topic for satire suggest that there is a broad undercurrent of distrust about body modification that places people too far outside a certain norm. This attitude may not always have the highest motives, but were it to gain momentum it would suggest there would not be much toleration for experiments in more radical bodily modification of the sort that the more “free”-spirited transhumanists celebrate.

2. Whether or not it has a solid rational basis, lots of people are suspicious of genetically modified (GM) foods and the businesses that produce them. For many foods, having no GM ingredients has become something to advertise. If this resistance grows, it is hard to imagine how people who will not eat a GM corn chip will rush right in and have their prospective progeny genetically tweaked.

3. In a similar vein, the problems of in vitro fertilization and allied technologies are getting increasing attention, as evident in The Wall Street Journal excerpting Holly Finn’s The Baby Chase, or the California Independent Film Festival Best Documentary Award going to Eggsploitation, which exposes some of the risks to health and autonomy created by the infertility industry. If all that emerges from this attention is even a more balanced approach to questions of fertility, it will be bad for transhumanism’s wholehearted aspiration to technologize reproduction.

4. If Wikipedia is to be believed, cryonics businesses have a hard time staying alive (so to speak), which may have something to do with the fact that the number of people who choose this method of disposing of their bodies is pitifully low. A well-publicized meltdown at a cryonics facility, particularly one that could be linked to financial weakness, might go a long way to putting this genie back in the bottle.

5. The imperatives of innovative medical equipment design and academic fashion being what they are, it is not hard to imagine that the current rage for neuropsychological research — which, however premature scientifically, seems to be a good fit in attitude with transhumanist aspirations for “uploading” — will fade away as young, ambitious researchers and inventors seek to make their own marks on the world. Of course, what replaces it may be yet more dogmatically materialistic, but you never know — after all, during the reign of radical behaviorism in the 1950s, who would have predicted that its philosophical vacuity would actually dethrone it in just a few short years?

6. Japan supposedly needs robots to care for its aging population, which has spurred a good deal of effort in robotics and AI there. Yet it turns out that the Japanese people are not so fond of the idea of being taken care of by robots after all. Widespread commercial failure, and/or some noteworthy failures in human-robot relations — especially under circumstances of tight national budgets and slow economic growth — could slow research and development in this area and push it in the direction of other technological dead ends, like the Concorde supersonic transport.

7. Once upon a time progressives were certain that the direction of history was on the side of universalism and increasingly inclusive human solidarity. For better and for worse, that is hardly obvious today. Should the present climate of global opinion, which has enough trouble extending political and legal recognition to unambiguously human beings, continue, it hardly seems likely to extend the circle of such recognition to nonhumans.

I’m not myself a fan of all of the tendencies I have called attention to here, but as a general rule it is important to distinguish between how things are and how one wishes them to be. Otherwise one ends up with a relatively juvenile belief that wishing will make it so. The aura of inevitability that transhumanism likes to cultivate (as Michael Anissimov says: “I will intervene in my own essence. If you try to stop me — good luck.”) is not one of its intellectual strong points, and has almost nothing to do with the real world, which is rife with conflicting possibilities.

UPDATE: See a follow-up post here.

There Is No ‘Undo’ Button for the Singularity

As a matter of clearing up the record, I’d like to note a recent post by Michael Anissimov in which he points out that his blog’s server is still infested with malware. The post concludes:

I don’t know jack about viruses or how they come about. I suppose The New Atlantis will next be using that as evidence that a Singularity will never happen. Oh wait — they already did.

[UPDATE: Mr. Anissimov edited the post several times without noting it, including removing this snarky comment and, apparently within the last hour or two, deleting the post entirely; see below.]

Mr. Anissimov is referring to two posts of mine, “Transhumanist Tech Failures” and “The Disinformation Campaign of Transhumanist ‘Caution’.” But even a passing glance at either of these posts will show that I never used this incident as evidence that the Singularity will never happen. Instead, it should be clear that I used it, rather opportunistically, to point out the embarrassing fact that the hacking of his site ironically reveals the deep foolhardiness of Mr. Anissimov’s aspirations. Shameless, I know.

It’s not of mere passing significance that Mr. Anissimov admits here that he “[doesn’t] know jack about viruses or how they come about”! You would think someone who is trying to make his name on being the “responsible” transhumanist, the one who stresses the need to make sure AI is “friendly” instead of “unfriendly,” would realize that, if ever there comes into existence such a thing as unfriendly AI — particularly AI intentionally designed to be malicious — computer viruses will have been its primordial ancestor, or at least its forerunner. Also, you would think he would be not just interested in but actually in possession of a deep and growing knowledge of the practical aspects of artificial intelligence and computer security, those subjects whose mastery is meant to be so vital to our future.

I know we Futurisms guys are supposedly Luddites, but (although I prefer to avoid trotting this out) I did in fact graduate from a reputable academic computer science program, in which I studied AI, computer security, and software verification. Anyone who properly understands even the basics of the technical side of these subjects would laugh at the notion of creating highly complex software that is guaranteed to behave in any particular way, much less a way as sophisticated as being “friendly.” This is why we haven’t figured out how to definitively eradicate incomparably simpler problems — like, for example, ridding servers that run simple blogs of malware.

The thing is, it’s perfectly fine for Mr. Anissimov or anyone else who is excited by technology not to really know how the technology works. The problem comes in their utter lack of humility — their total failure to recognize that, when one begins to tackle immensely complex “engineering problems” like the human mind, the human body, or the Earth’s biosphere, little errors and tweaks, gaps in your knowledge that you weren’t even aware of, can translate into chaos and catastrophe when they are actually applied. Reversing an ill-advised alteration to the atmosphere or the human body or anything else isn’t as easy as deleting content from a blog. It’s true that Mr. Anissimov regularly points out the need to act with caution, but that makes it all the more reprehensible that he seems so totally disinclined to act accordingly.

Speaking of deleting content from a blog: there was for a while a comment on Mr. Anissimov’s post critical of his swipe at us, and supportive of our approach if not our ideas. But he deleted it (as well as another comment referring to it). He later deleted his own jab at our blog. And sometime in the last hour or two, he deleted the post entirely. All of these changes were made without any note of them, as if he hopes his bad ideas can just slide down the memory hole.

We can only assume that he has seen the error of his ways, and now wants to elevate the debate and stick to fair characterizations of the things we are saying. That’s welcome news, if it’s true. But, to put it mildly, silent censorship is a fraught way to conduct debate. So, for the sake of posterity, we have preserved his post here exactly as it appeared before the changes and its eventual deletion. (You can verify this version for yourself in Yahoo’s cache until it updates.)

A final point of clarification: We here on Futurisms are actually divided on the question of whether the Singularity will happen. I think it’s fair to say that Adam finds many of the broad predictions of transhumanism basically implausible, while Charlie finds many of them, and I find a lot of them, at least theoretically possible in some form or another.

But one thing we all agree on is that the Singularity is not inevitable — that, in the words of the late computer science professor and artificial intelligence pioneer Joseph Weizenbaum, “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.”

Rather, the future is always a matter of human choices; and the point of this blog is that we think the choice to bring about the Singularity would be a pretty bad one. Why? We’ve discussed that at some length, and we will go on doing so. But a central reason has to be practical: if we can’t keep malware off of a blog, how can we possibly expect to maintain the control we want when our minds, and every aspect of our society, are so subject to the illusion of technical mastery?

With that in mind, we have much, much more planned to say in the days, weeks, and months ahead, and we look forward to getting back to a schedule of more frequent posting now that we’re clearing a few major deadlines off our plates.

They wuz robbed

Despite some promising early results, and finishing in 30th place in the online public poll, it looks like Ray Kurzweil did not, after all, make the Time 100 most influential people in the world, which was ultimately selected by the editors to highlight the most influential “artists and activists, reformers and researchers, heads of state and captains of industry. Their ideas spark dialogue and dissent and sometimes even revolution.”

While I can contain my outrage, I have to admit that the result is bizarre given the stated criteria. Kim Jong Un, the done-nothing son of the tyrant of North Korea makes the list, but not Ray Kurzweil? Prince William and Kate Middleton (notably counted as one person on the list) are a couple of cute kids, and I enjoyed watching their wedding, but what original or influential ideas have they had? Patti Smith but not Ray Kurzweil? Amy Poehler but not Ray Kurzweil? Lionel Messi but not Ray Kurzweil?
I’m hard-pressed to explain the result. Is it that transhumanism is not after all winning, let alone won? That those of us interested in it (for or against) are in fact merely patrons of a small and not yet fashionable intellectual boutique? Or is it that transhumanist goals are so mainstream (longer! better! faster!) that the team at Time can’t see them as anything but self-evident truths? Does the truth lie somewhere in between? Or is the list just another example of the sorry results you get when you try to repackage and extend the lifetime of mortal things like once-influential news magazines?

[Royal wedding image via Mashable.]

Transhumanist Inevitability Watch

Transhumanists have a label — “the argument from incredulity” — for one kind of criticism of their visions and predictions: The instinctual but largely un-evidenced assertion that transhumanist claims are simply too fantastical and difficult to fathom and so must be false. While there’s plenty of reason, empirical and otherwise, to doubt transhumanist predictions, they’re certainly right to point out and criticize the prevalence of the argument from incredulity.
But there’s a transhumanist counterpart to the argument from incredulity: the argument from inevitability. This argument is prone to be just as un-evidenced, and at least as morally suspect. So I’d like to begin a new (hopefully regular) series on Futurisms: the Transhumanist Inevitability Watch.

Or are we?

Our first entry comes from transhumanist blogger Michael Anissimov:

It’s 2010, and transhumanism has already won. Billions of people around the world would love to upgrade their bodies, extend their youth, and amplify their powers of perception, thought, and action with the assistance of safe and tested technologies. The urge to be something more, to go beyond, is the norm rather than the exception…. Mainstream culture around the world has already embraced transhumanism and transhumanist ideals.

Well, then! Empirical evidence, maybe?

All we have to do is survive our embryonic stage, stay in control of our own destiny, and expand outwards in every direction at the speed of light. Ray Kurzweil makes this point in The Singularity is Near, a book that was #1 in the Science & Technology section on Amazon and [also appeared] on the NYT bestsellers list for a reason.

Ah. Well, if we’re going to use the bestseller lists as tea leaves, right now Sean Hannity’s Conservative Victory is on the top of the Times list, and Chelsea Handler’s Are You There, Vodka? It’s Me, Chelsea is #2. Does this mean conservatism and alcoholism have also already won?
Similarly, his other major piece of evidence is that it would be “hard for the world to give transhumanism a firmer endorsement” than making Avatar, a “movie about using a brain-computer interface to become what is essentially a transhuman being,” the highest-grossing film of all time. Okay, then surely the fact that the Pirates of the Caribbean and Harry Potter movies occupy five of the other top 10 spots means even firmer endorsements of pirates and wizards, no? And actually, Avatar only ranks 14th in inflation-adjusted dollars in the U.S. market, far behind the highest-grossing film, which, of course, is Gone with the Wind — unassailable evidence that sexy blue aliens aren’t nearly as “in” as corsets and the Confederacy, right?
Mr. Anissimov’s post at least contains his usual sobriety and caution about the potentially disastrous effects of transhumanism on safety and security. But he and other transhumanists would do well to heed the words of artificial intelligence pioneer Joseph Weizenbaum in his 1976 Computer Power and Human Reason:

The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.

Keep Weizenbaum’s words in mind as we continue the Inevitability Watch. Humanity’s future is always a matter of human choice and responsibility.
UPDATE: Here’s another good example from Anissimov:

Transhumanist issues are obscenely mainstream nowadays, who even cares. We’re not even edgy anymore. The excitement is over. It’s time to start racing towards a safe intelligence explosion so we can end the Human-only Era once and for all. Let’s just get it over with.