Toward a Typology of Transhumanism

Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is far removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists in terms of the kinds of intellectual positions to which they adhere rather than how they relate to different positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so, I propose a continuum of transhumanist thought, to help observers understand the intellectual differences between some of its proponents — based on three different levels of support for human enhancement technologies.

First, the mildest transhumanists: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists can be defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label of “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not very philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Center for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of costs and benefits for society to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include the abandonment of the act-omission distinction — that is, the idea that failing to act can be just as morally significant as acting. John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than by a kind of quixotic obsession with technology itself. Today, this obsession with technology manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early twentieth century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the most mild stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Center for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

Beware Responsible Discourse

I’m not sayin’, I’m just sayin’.

Another day, another cartoon supervillain proposal from the Oxford Uehiro “practical” “ethicists”: use biotech to lengthen criminals’ lifespans, or tinker with their minds, to make them experience greater amounts of punishment. (The proposal actually dates from August, but has been getting renewed attention owing to a recent Aeon interview with its author, Rebecca Roache.) Score one for our better angels. The original post, which opens with a real-world case of parents who horrifically abused and killed their child, uses language like this:

…[the parents] will each serve a minimum of thirty years in prison. This is the most severe punishment available in the current UK legal system. Even so, in a case like this, it seems almost laughably inadequate…. Compared to the brutality they inflicted on vulnerable and defenceless Daniel, [legally mandated humane treatment and eventual release from prison] seems like a walk in the park. What can be done about this? How can we ensure that those who commit crimes of this magnitude are sufficiently punished?…

[Using mind uploads,] the vilest criminals could serve a millennium of hard labour and return fully rehabilitated either to the real world … or, perhaps, to exile in a computer simulated world.

….research on subjective experience of duration could inform the design and management of prisons, with the worst criminals being sent to special institutions designed to ensure their sentences pass as slowly and monotonously as possible. 

The post neither raises, suggests, nor gives passing nod to a single ethical objection to these proposals. When someone on Twitter asks Ms. Roache, in response to the Aeon interview, how she could endorse these ideas, she responds, “Read the next paragraph in the interview, where I reject the idea of extending lifespan in order to inflict punishment!” Here’s that rejection:

But I soon realised it’s not that simple. In the US, for instance, the vast majority of people on death row appeal to have their sentences reduced to life imprisonment. That suggests that a quick stint in prison followed by death is seen as a worse fate than a long prison sentence. And so, if you extend the life of a prisoner to give them a longer sentence, you might end up giving them a more lenient punishment.

Oh. So, set aside the convoluted logic here (a death sentence is worse than a long prison sentence … so therefore a longer prison sentence is more lenient than a shorter one? huh?): to the marginal extent Ms. Roache is rejecting her own idea here, it’s because extending prisoners’ lives to punish them longer might be letting them off easier than putting them to death.


Ms. Roache — who thought up this idea, announced it, goes into great detail about the reasons we should do it and offers only cursory, practical mentions of why we shouldn’t — tries to frame this all as a mere disinterested discussion aimed at proactive hypothetical management:

It’s important to assess the ethics *before* the technology is available (which is what we’re doing).

There’s a difference between considering the ethics of an idea and endorsing it.

… people sometimes have a hard time telling the difference between considering an idea and believing in it …

I don’t endorse those punishments, but it’s good to explore the ideas (before a politician does).

What constitutes humane treatment in the context of the suggestions I have made is, of course, debatable. But I believe it is an issue worth debating.

So: rhetoric strong enough to make a gulag warden froth at the mouth amounts to merely “considering” and “exploring” and “debating” and “assessing” new punitive proposals. In response to my tweet about this…

…a colleague who doesn’t usually work in this quirky area of the futurist neighborhood asked me if I was suggesting that Ms. Roache should have just sat on this idea, hoping nobody else would ever think of it (of course, sci-fi has long since beaten her to the punch, or at least the punch’s, um, ballpark). This is, of course, a vital question, and my response was something along these lines: In the abstract, yes, potential future tech applications should be discussed in advance, particularly the more dangerous and troubling ones.

But please. How often do transhumanists, particularly the Oxford Uehiro crowd, use this move? It’s the same move, from doping the populace to be more moral, to shrinking people so they’ll emit less carbon, to “after-birth abortion,” and so on: Imagine some of the most coercive and terrible things we could do with biotech, offer all the arguments for why we should and pretty much none for why we shouldn’t, make it sound like this would be technically straightforward, predictable, and controllable once a few advances are in place, and finally claim that you’re just being neutral and academically disinterested; that, like Glenn Beck on birth certificates, you’re just asking questions, because after all, someone will, and better it be us Thoughtful folks who take the lead on Managing This Responsibly, or else someone might try something crazy.

Who knows how much transhumanists buy their own line — whether this is as cynical a media ploy as it seems, or if they’re under their own rhetorical spell. But let’s be frank about the work these discussions are really doing, how they’re aiming to shape the parameters of discourse and so thought and so action. Like Herman Kahn and megadeath, when transhumanists claim to be responsibly shining a light on a hidden path down which we might otherwise blindly stumble, what they’re really after is focusing us so intently on this path that we forget we could yet take another.

Our Chemical Romance

A love pill

Julian Savulescu and Anders Sandberg have an article in the New Scientist (subscription required) talking about how we need to start taking control of romantic love by pharmaceutically enhancing marriages. By infusing our brains with neurotransmitters like vasopressin and oxytocin, we may be able to “tweak” these neurochemical systems “to create a longer-lasting love” as a way to curb rising divorce rates.

The basic logic underlying their argument is the same as Savulescu’s case for biomedically enhancing human morality: just as evolution did not make us “fit for the future,” so also it did not make us “fit for love.” Once again, our current problems are explained by “discrepancies between our adaptation to a past environment” — what they call the “environment of evolutionary adaptiveness” (EEA) — and “our current existence.”

Savulescu and Sandberg’s specific evolutionary argument in this case is that in the EEA, “people survived for a maximum of 35 years,” so that genes predisposing people to stay married longer than roughly 15 years would not have been selected for, since most marriages would end with death before then anyway. It would seem that we are outliving our natural capacity to love, with the current median duration of marriage being 11 years — “surprisingly close” to the 15 years that we could have expected to live in marital bliss in the EEA.

However, in an article on the topic they wrote in 2008, they observe that “divorce rates peak among younger couples, declining with age,” with the highest rates of divorce found for men and women aged 25-29. If natural selection disfavored the kinds of marriages that did not occur in the EEA, then why are people who are older than the historical “maximum of 35 years” apparently so much better at staying married? Shouldn’t the divorce rate continue to climb as people age past that evolutionary point and their marriages drag on past the typical duration they would have had in the EEA?

Furthermore, how does this evolutionary explanation account for the precipitous rise in the divorce rate since the 1970s? Or the fact that, among college-educated women, the divorce rate has since returned to the levels seen before this rise? However our evolutionary heritage has affected our contemporary pair-bonding practices, there are many other factors at play here that make the evolutionary forces difficult to discern on their own. Savulescu and Sandberg’s particular evolutionary hypothesis, plausible though it initially sounds, doesn’t hold up to the actual evidence, and doesn’t help to explain human marriage trends and behaviors.

It is worth pointing out something that Savulescu and Sandberg had right in their 2008 paper, though: while they rightly acknowledge the importance of marriage as a social institution for parenting, they generally focus their analysis of the value of marriage on love — the formation of an interpersonal sexual and emotional relationship — rather than on theories that see the value of marriage simply in terms of economic or social utility. However, they end up distorting the meaning and importance of love by crudely reducing it to a biological phenomenon; as important as love is for human pair-bonding, it is not the sort of phenomenon that is easily amenable to scientific study — which greatly undermines the case for technologically manipulating it.

Savulescu and Sandberg begin their argument on the ethics of “love drugs” by combining a crudely reductionist approach with a familiar transhumanist trope—that their radical biotechnological scheme is actually “consistent” with what we have been doing all along:

There is a long history to the use of love potions. Alcohol is the commonest love drug. We have always tried to use chemistry to influence the chemistry between people. Neurolove potions will just be more effective. There is no morally relevant difference between marriage therapy, a massage, a glass of wine, a fancy pink [sic], steamy potion and a pill. All act at the biological level to make the release of substances like oxytocin and dopamine more likely.

Assuming this is true, would the fact that these disparate romantic activities all increase the likelihood of oxytocin and dopamine being released mean that there are no morally relevant differences between them? Perhaps if it were true that these activities could really all be understood as essentially acting “on the biological level” in the same way, then there might not be a reason to see any moral difference between them. But it is obvious that marriage therapy does not “act at the biological level” in the same way that a dopaminergic pill does; in the first case, insofar as the activity of talking about your relationship leads to the release of neurotransmitters like dopamine, it does so by a complex process of dealing with the real obstacles that impede love, and reminding the couple of the real qualities that make one another lovable — all of which allow for the natural emotional responses associated with love, which do indeed tend to correspond with the release of these neurotransmitters. Pills and “neurolove potions,” on the other hand, insofar as they are effective, would start with the release of these neurotransmitters, causing the person to feel the emotional responses associated with love, but not in any direct connection with any of the causes that might make these responses meaningful and true — namely, an actual, loving relationship with another person.

The President’s Council on Bioethics addressed this to some extent in Beyond Therapy when they argued that “drug-induced ‘love’ is not just incomplete — an emotion unconnected with knowledge of and care for the beloved. It is also unfounded, not based on anything — not even visible beauty — from which such emotions normally grow.” Savulescu and Sandberg argue that these objections might apply to the inducing of new relationships, but would not apply to established relationships. That may be partly true: most people would probably find it worse to establish a relationship through drug-induced emotions than it would be to maintain an existing relationship. But similar objections still apply to the latter: severing the connection between the emotion of love and its proper object could still leave a relationship detached from the real circumstances on which such emotions are normally sustained. And if an established relationship has the kinds of problems that would require a “neurolove potion” to keep it going, then maybe it is those problems themselves that need to be addressed. For instance, Savulescu and Sandberg argue that it is basically a good idea for a woman to take love drugs in order to tolerate her husband’s infidelity. Is that really a prescription for personal and moral progress?

Psychopharmaceuticals surely have an important role to play in enabling people with clinical depression and other mood disorders to live well and pursue their happiness; but they become an odious and dangerous tool when they are used as a way to avoid dealing with real problems in the real world.

Forcing People to Be Good

[Editor’s Note: We are pleased to introduce Brendan Foht, the new assistant editor of The New Atlantis. He holds degrees in political science from the University of Calgary and in biology from the University of Alberta. This is his first post for Futurisms, to which he will be a regular contributor.]

Peter Singer, along with researcher Agata Sagan, recently made an appearance on the philosophy blog of the New York Times. Suggesting the need for a “morality pill” that could boost human ethical behavior, Singer reminds us why he is the king of crass consequentialism:

Might governments begin screening people to discover those most likely to commit crimes? Those who are at much greater risk of committing a crime might be offered the morality pill; if they refused, they might be required to wear a tracking device that would show where they had been at any given time, so that they would know that if they did commit a crime, they would be detected.

As long as we’re asking people to take morality pills, we might as well preemptively implant those we deem pre-criminals with tracking devices, right?

Singer’s ideas about moral enhancement, however, pale in comparison to those of Julian Savulescu, who drops even the rhetorical semblance of doubt as to whether moral enhancements ought to be compulsory. Indeed, he seems to believe that without the development of genetic or other biomedical methods for moral enhancement, the human race is doomed to extinction.

Savulescu, a professor at the Oxford Uehiro Centre for Practical Ethics, and one of the most prominent academic advocates of human biological enhancement, has argued that the human race is “unfit for the future,” and is heading into a “Bermuda Triangle of Extinction.” The three points of this triangle (representing the three factors pulling us toward extinction) consist of our rapidly advancing technological and scientific power, the evolutionary origins of our moral nature, and our commitment to liberal democracy.

The moral nature we received from our ancestors is far from perfect, rooted as it is in a world of supposedly violent and xenophobic cavemen. With the development and dispersal of powerful new technologies, it is becoming increasingly likely that powerful weapons, like genetically engineered super-plagues, might end up in the hands of people whose moral nature disposes them to violent, possibly catastrophic acts. Liberal democracy is represented in the triangle because it prevents us from taking the measures necessary to ensure the survival of the human race — measures like compulsory moral enhancement.

The idea of using genetic engineering as a measure to secure global security or peace is, hopefully needless to say, totally removed from medical, scientific, and political realities — not to mention from basic ethical and practical concerns. The idea of actually implementing such a scheme, effectively and successfully, is laughable.

Since facts don’t play much of a role in these proposals, consider just one small bit of relevant data. In Afghanistan — a country that would be high on the list as a potential source of troublesome weapons or people — the infant mortality rate in 2009 was over 13%, and one in five children died before the age of five. Even from a purely practical standpoint, are we to take seriously the idea of going into a country that lags a century behind today’s medical standards, and undertaking a massive program of chemical or genetic manipulation, using techniques that are as of now barely hypothetical, targeting genes that we have barely begun to identify, on “patients” who are unlikely to understand the procedures, and in any case will almost certainly be coerced into them?

While it is true that our moral dispositions are to some extent rooted in our biology, our moral and political actions are rooted at least as much in our beliefs about justice and injustice as in our innate dispositions. And one would think that just about any society would not take kindly to an attempt to violate its members’ bodily autonomy. Even if the technical and medical problems were somehow miraculously solved, the fact that some state or international agency would have to force people to take these “moral enhancements” — as Savulescu notes, those who most “need” them would be the least likely to take them voluntarily — would create a backlash that would almost surely inspire more violence than the intervention could possibly prevent.

The apparent failure of transhumanists to recognize the basic political problems with such a scheme makes plain some of the lapses in their understanding of human nature. Savulescu’s argument that human beings are “unfit for the future” reflects an anxiety common among many people — not just transhumanists — who think about how messy and imperfect our biological nature can be. Evolutionary biology seems to show us that our bodies were designed to compete in a vicious, pre-historical struggle, burdening us with desires and vices that conflict with our higher longings and our moral values.

But this insight is of course not new; Plato and the authors of Genesis seemed to have some notion that human nature is prone to bad as much as good, and common sense shows that we are not always as good as we would like to be.

The difference between transhumanists and more serious ethical traditions is that transhumanists think that because nature is not perfectly designed, it is completely up for grabs — while others acknowledge that ethics is about learning the best way to live with our natural imperfections. In this sense, trying to eliminate the aspects of our nature we don’t like would not be a moral “enhancement,” but would rather be a profound change in the meaning of a moral human life.