Toward a Typology of Transhumanism

Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is far removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists by the kinds of intellectual positions to which they adhere rather than by how they relate to different positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so I propose a continuum of transhumanist thought, based on three different levels of support for human enhancement technologies, to help observers understand the intellectual differences between some of its proponents.

First, the mildest form of transhumanism: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists are defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Center for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of costs and benefits for society to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement in the name of individual choice, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include the abandonment of the act-omission distinction — that is, the idea that omitting to act is morally no different from acting; John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than by a kind of quixotic obsession with technology itself. Today, this obsession manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early twentieth-century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the mildest stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Center for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

Not Quite ‘Transcendent’

Editor’s Note: In 2010, Mark Gubrud penned for Futurisms the widely read and debated post “Why Transhumanism Won’t Work.” With this post, we’re happy to welcome him as a regular contributor.

Okay, fair warning, this review is going to contain spoilers, lots of spoilers, because I don’t know how else to review a movie like Transcendence, which appropriates important and not-so-important ideas about artificial intelligence, nanotechnology, and the “uploading” of minds to machines, wads
them up with familiar Hollywood tropes, and throws them all at you in one nasty spitball. I suppose I should want people to see this movie, since it does,
albeit in a cartoonish way, lay out these ideas and portray them as creepy and dangerous. But I really am sure you have better things to do with your ten
bucks and two hours than what I did with mine. So read my crib notes and go for a nice springtime walk instead.

Set in a near future that is recognizably the present, Transcendence presents us with a husband-and-wife team (Johnny Depp and Rebecca Hall) who are about to make a breakthrough in artificial intelligence (AI). They live in San Francisco and are the kind of Googley couple who divide their time between their boundless competence in absolutely every facet of high technology and their love of gardening, fine wines, old-fashioned record players and, of course, each other, notwithstanding a cold lack of chemistry that foreshadows further developments.

The husband, Will Caster (get it?), is the scientist who “first wants to understand” the world, while his wife Evelyn is more the ambitious businesswoman
who first wants to change it. They’ve developed a “quantum processor” that, while still talking in the flat mechanical voice of a sci-fi computer, seems
close to passing the Turing test: when asked if it can prove it is self-aware, it asks the questioner if he can prove that he is. This is the
script’s most mind-twisting moment, and the point is later repeated to make sure you get it.

Since quantum computing has nothing to do with artificial intelligence now or in the foreseeable future, its invocation is the first of many signs that the movie deploys technological concepts as jargon, for effect rather than for realism or accuracy. This is confirmed when we learn that another lab has succeeded in uploading monkey minds to computers, which would require both sufficient processing power to simulate the brain at sub-cellular levels of detail and the data to drive such a simulation. In the movie, this data is gathered by analyzing brain scans and scalp electrode recordings, which would be like reading a phone book with the naked eye from a thousand miles away. Uploading might not be physically impossible, but it would almost certainly require dissection of the brain. Moreover, as I’ve written here on Futurisms before, the meanings that transhumanists project onto the idea of uploading, in particular that it could be a way to escape mortality, are essentially magical.
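To put rough numbers on that mismatch, here is a crude information-budget sketch. The synapse count is a commonly cited order of magnitude; the per-synapse state and the EEG parameters are my own illustrative assumptions, not anything from the film:

```python
# Crude information budget: scalp recordings vs. a synapse-level brain map.
# All figures are rough orders of magnitude or labeled assumptions.

synapses = 1e15              # ~10^14 to 10^15 synapses in a human brain
bytes_per_synapse = 1e3      # assumed state per synapse (weights, receptor
                             # densities, geometry) -- an assumption
map_bytes = synapses * bytes_per_synapse        # ~1e18 bytes

channels = 256               # high-density EEG cap
sample_rate_hz = 1e3         # samples per second per channel
bytes_per_sample = 2
eeg_bytes_per_s = channels * sample_rate_hz * bytes_per_sample  # ~5e5 B/s

seconds_per_year = 3600 * 24 * 365
years_to_match = map_bytes / (eeg_bytes_per_s * seconds_per_year)

print(f"Synapse-level map: ~{map_bytes:.0e} bytes")
print(f"EEG data rate:     ~{eeg_bytes_per_s:.0e} bytes/s")
print(f"Years of recording to match the raw volume: ~{years_to_match:.0e}")
# ~6e4 years -- and duration is the lesser problem: each electrode records
# a spatially blurred average over millions of neurons, so the per-synapse
# information is not in the signal at any recording length.
```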

Later, at a TED-like public presentation, Will is shot by an anti-technology terrorist, a member of a group that simultaneously attacks AI labs around the
world, and later turns out to be led by a young woman (Kate Mara) who formerly interned in the monkey-uploading lab. Evading the FBI, DHS, and NSA, this
disenchanted tough cookie has managed to put together a global network of super-competent tattooed anarchists who all take direct orders from her, no
general assembly needed.

Our hero (so far, anyway) survives his bullet wound, but he’s been poisoned and has a month to live. He decides to give up his work and stay home with
Evelyn, the only person who’s ever meant anything to him. She has other ideas: time for the mad scientist secret laboratory! Evelyn steals “quantum cores”
from the AI lab and sets up shop in an abandoned schoolhouse. Working from the notes of the unfortunate monkey-uploading scientist, himself killed in the
anarchist attack, she races against time to upload Will. Finally, Will dies, and a moment of suspense … did the uploading work … well, whaddya think?

No sooner has cyber-Will woken up on the digital side of the great divide than it sets about rewriting its own source code, thus instantiating one
of the tech cult’s tropes: the self-improving AI that transcends human intelligence so rapidly that nobody can control it. In the usual telling, there is
no way to cage such a beast, or even pull its plug, since it soon becomes so smart that it can figure out how to talk you out of doing so. In this case,
the last person in a position to pull the plug is Evelyn, and of course she won’t because she believes it’s her beloved Will. Instead, she helps it escape
onto the Internet, just in time before the terrorists arrive to inflict the fate of all mad-scientist labs.

Once loose on the Web — apparently those quantum cores weren’t essential after all — cyber-Will sets about commandeering every surveillance camera on the net, and the FBI’s own computers, to help the Bureau take down the anarchists. Overnight, it also makes millions on high-speed trading, the money to be used to build a massive underground Evil Corporate Lab outside an economically devastated town out in the desert. There, cyber-Will sets about developing cartoon nanotechnology and figuring out how to sustain its marriage to Evelyn without making use, so far as we are privileged to see, of any of the gadgets advertised on futureofsex.net (NSFW, of course). Oh, but they are still very much in love, as we can see because the same old sofa is there, the same old glass of wine, the same old phonograph playing the same old song. And the bot bids her a tender good night as she slips between the sheets and off into her nightmares (got that right).

While she sleeps, cyber-Will is busy at a hundred robot workstations perfecting “nanites” that can “rebuild any material,” as well as make the lame walk
and the blind see. By the time the terrorists and their new-made allies, the FBI (yes, they team up), arrive to attack the solar panels that power the
underground complex, cyber-Will has gained the capability to bring the dead back to life — and, optionally, turn them into cyborgs directly controlled by
cyber-Will. This enables the filmmakers to roll out a few Zombie Attack scenes featuring the underclass townies, who by now don’t stay dead when you knock
them over with high-caliber bullets. It also suggests a solution to cyber-Will’s unique version of the two-body problem, but Evelyn balks when the ruggedly handsome construction boss she hired in town shows her his new Borg patch, looks her in the eyes, and tells her “It’s me — I can touch you now.”

So what about these nanites? It might be said that at this point we are so far from known science that technical criticism is pointless, but nanotechnology
is a very real and broad frontier, and even Eric Drexler’s visionary ideas, from which the movie’s “nanites” are derived, have withstood decades of
incredulity, scorn, and the odd technical critique. In his books Engines of Creation and Nanosystems, Drexler proposed microscopic robots that could be programmed to reconfigure matter one molecule at a time — including creating copies of themselves — and
be arrayed in factories to crank out products both tiny and massive, to atomic perfection. Since this vision was first popularized in the 1980s, we have
made a great deal of progress in the art of building moderately complex nanoscale structures in a variety of materials, but we are still far from realizing
Drexler’s vision of fantastically complex self-replicating systems — other than as natural, genetically modified, and now synthetic life.

Life is often cited as an “existence proof” for nanobots, but life is subject to some familiar constraints. If physics and biology permitted flesh to
repair itself instantly following a massive trauma, evolution would likely have already made us the nearly unstoppable monsters portrayed in the movie,
instead of what we are: creatures whose wounds do heal, but imperfectly, over days, weeks, and months, and only if we don’t die first of organ failure,
blood loss, or infection. Not even Drexlerian nanomedicine theorist Robert Freitas would back Transcendence’s CGI nanites coursing through flesh and repairing it in movie time; for one thing, such a process would require an energy source,
and the heat produced would cook the surrounding tissue. The idea that nonbiological robots would directly rearrange the molecules of living organisms has
always been the weakest thread of the Drexlerian narrative; while future medicine is likely to be greatly enabled by nanotechnology, it is also likely to
remain essentially biological.
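For the curious, here is a crude back-of-envelope sketch of the heat problem; every figure is my own order-of-magnitude assumption, not anything from Freitas or the film:

```python
# Rough estimate of why "movie-time" tissue repair would cook the patient.
# All numbers below are order-of-magnitude assumptions for illustration.

mass_repaired_kg = 1.0           # suppose nanites rebuild ~1 kg of tissue
synthesis_j_per_kg = 1e7         # assumed chemical cost of building
                                 # biomass, roughly 10 MJ/kg
waste_heat_fraction = 0.5        # assume half the energy ends up as heat
repair_time_s = 10.0             # "movie time": seconds, not weeks

heat_j = mass_repaired_kg * synthesis_j_per_kg * waste_heat_fraction
power_w = heat_j / repair_time_s

specific_heat_tissue = 3500.0    # J/(kg*K), typical of water-rich tissue
delta_t_k = heat_j / (mass_repaired_kg * specific_heat_tissue)

print(f"Waste heat: ~{heat_j:.0e} J, dissipated at ~{power_w:.0e} W")
print(f"Local temperature rise if absorbed in place: ~{delta_t_k:.0f} K")
# ~5e6 J at ~5e5 W implies a local rise on the order of 1,400 K --
# the tissue would cook long before the repair finished, absent some
# unphysical heat-removal mechanism.
```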

The movie also shows us silvery blobs of nano magic that mysteriously float into the sky like Dr. Seuss’s oobleck in reverse, broadcasting Will (now you get it) to the entire
earth as rainwater. It might look like you could stick a fork in humanity at this point, but wouldn’t you know, there’s one trick left that can take out
the nanites, the zombies, the underground superdupersupercomputer, the Internet, and all digital technology in one fell swoop. What is it? A computer
virus! But in order to deliver it, Evelyn must sacrifice herself and get cyber-Will — by now employing a fully reconstituted physical Johnny Depp clone
as its avatar — to sacrifice itself … for love. As the two lie down to die together on their San Francisco brass-knob bed, deep in the collapsing
underground complex, and the camera lingers on their embraced corpses, it becomes clear that if there’s one thing this muddled movie is, above all else,
it’s a horror show.

Oh, but these were nice people, if a bit misguided, and we don’t mean to suggest that technology is actually irredeemably evil. Happily, in the epilogue,
the world has been returned to an unplugged, powered-off state where bicycles are bartered, computers are used as doorstops and somehow everybody isn’t
starving to death. It turns out that the spirits of Will and Evelyn live on in some nanites that still inhabit the little garden in back of their house,
rainwater dripping from a flower. It really was all for love, you see.

This ending is nice and all, but the sentimentality undermines the movie’s seriousness about artificial intelligence and the existential crisis it creates
for humanity.

Evelyn’s mistake was to believe, in her grief, that the “upload” was actually Will, as if his soul were something that could be separated from his body and
transferred to a machine — and not even to a particular machine, but to software that could be copied and that could move out into the Internet and install
itself on other machines.

The fallacy might have been a bit too obvious had the upload started working before Will’s death, instead of just after it. It would have been even more
troubling if cyber-Will had acted to hasten human Will’s demise — or induced Evelyn to do so.

Instead, by obeying the laws of dramatic continuity, the script suggests that Will, the true Will, i.e., Will’s consciousness, his mind, his atman, his soul, has actually been transferred. In fact, the movie’s ending asks us to accept that the Will who dies there is the same as the original, even though this “Will” has been cloned and programmed with software that was only a simulation of the original and has since rewritten itself and evolved far beyond human intelligence.

We are even told that the nanites in the garden pool are the embodied spirits of Will and Evelyn. What was Evelyn’s mistake, then, if that can be true?
Arrogance, trying to play God and cheat Death, perhaps — which is consistent with the horror-movie genre, but not very compelling to the
twenty-first-century mind. We need stronger reasons for agreeing to accept mortality. In one scene, the pert terrorist says that cutting a cyborg off from
the collective and letting him die means “We gave him back his humanity.” That’s more profound, actually, but a lot of people might want to pawn their
humanity if it meant they could avoid dying.

In another scene, we are told that the essential flaw of machine intelligence is that it necessarily lacks emotion and the ability to cope with
contradictions. That’s pat and dangerous nonsense. Emotional robotics is today an active area of research, from the reading and interpretation of human
emotional states, to simulation of emotion in social interaction with humans, to architectures in which behavior is regulated by internal states analogous
to human and animal emotion. There is no good reason to think that this effort must fail even where AI more broadly succeeds. But there are good reasons to think that
emotional robots are a bad idea.

Emotion is not a good substitute for reason when reason is possible. Of course, reason isn’t always possible. Life does encompass contradictions, and we
are compelled to make decisions based on incomplete knowledge. We have to weigh values and make choices, often intuitively factoring in what we don’t fully
understand. People use emotion to do this, but it is probably better if we don’t let machines do it at all. If we set machines up to make choices for us,
we will likely get what we deserve.

Transcendence introduces movie audiences, assuming they only watch movies, to key ideas of transhumanism, some of which have implications for the real world. Its
emphasis on horror and peril is a welcome antidote to Hollywood movies that have dealt with the same material less directly and more enthusiastically. But
it does not deepen anybody’s understanding of these ideas or how we should respond to them. Its treatment of the issues is as muddled and schizophrenic as
its script. But it’s unlikely to be the last movie to deal with these themes — so save your ticket money.

Nanotechnology, Past and Future

Following up on my post from a few days ago about the golden jubilee of Richard Feynman’s “There’s Plenty of Room at the Bottom” lecture, I have a short piece in tomorrow’s Wall Street Journal saying a bit more about the lecture’s importance to nanotechnology.

In the piece, I outline the differences between “nanotechnology” as the term is often used nowadays and as it was first used, back when Eric Drexler brought the word to public attention.

These two understandings of nanotechnology are regularly conflated in the press — a fact that vexes mainstream researchers, in part because Mr. Drexler’s more ambitious take on nanotech is cherished by several colorful futurist movements (transhumanism, cryonics, and so forth). Worse, for all the fantastical speculation that Drexlerian nanotechnology invites, it has also driven critics, like the late novelist Michael Crichton and the software entrepreneur Bill Joy, to warn of nanotech nightmares.

I end with a modest recommendation:

If this dispute over nano-nomenclature only involved some sniping scientists and a few historians watching over a tiny corner of Feynman’s legacy, it would be of little consequence. But hundreds of companies and universities are teeming with nanotech researchers, and the U.S. government has been pouring billions of dollars into its multiagency National Nanotechnology Initiative.

So far, none of that federal R&D funding has gone toward the kind of nanotechnology that Drexler proposed, not even toward the basic exploratory experiments that the National Research Council called for in 2006. If Drexler’s revolutionary vision of nanotechnology is feasible, we should pursue it for its potential for good, while mindful of the dangers it may pose to human beings and society. And if Drexler’s ideas are fundamentally flawed, we should find out—and establish just how much room there is at the bottom after all.

On his own blog, Mr. Drexler today wrote a post about the 2006 National Research Council report I mentioned. Here’s how Drexler summarizes the parts of the NRC report concerning molecular manufacturing:

The committee examined the concept of advanced molecular manufacturing, and found that the analysis of its physical principles is based on accepted scientific knowledge, and that it addresses the major technical questions. However, in the committee’s view, theoretical calculations are insufficient: Only experimental research can reliably answer the critical questions and move the technology toward implementation. Research in this direction deserves support.

That seems a fair summary of the NRC report. And, as I’ve explained elsewhere, members of Congress certainly seemed to have Drexlerian nanotechnology in mind when they decided to lavish billions on federal nanotech research.

Happy Birthday, Nanotechnology?

Fifty years ago today, on December 29, 1959, Richard P. Feynman gave an after-dinner talk in Pasadena at an annual post-Christmas meeting of the American Physical Society. Here is how Ed Regis describes the setting of the lecture in his rollicking book Nano:

In the banquet room [at the Huntington-Sheraton hotel in Pasadena], a giddy mood prevails. Feynman, although not yet the celebrity physicist he’d soon become, was already famous among his peers not only for having coinvented quantum electrodynamics, for which he’d later share the Nobel Prize, but also for his ribald wit, his clownishness, and his practical jokes. He was a regular good-time guy, and his announced topic for tonight was “There’s Plenty of Room at the Bottom” — whatever that meant.

“He had the world of young physicists absolutely terrorized because nobody knew what that title meant,” said physicist Donald Glaser. “Feynman didn’t tell anybody and refused to discuss it, but the young physicists looked at the title ‘There’s Plenty of Room at the Bottom’ and they thought it meant ‘There are plenty of lousy jobs in physics.’”

The actual subject of Feynman’s lecture was making things small and making small things.

What I want to talk about is the problem of manipulating and controlling things on a small scale.

As soon as I mention this, people tell me about miniaturization, and how far it has progressed today. They tell me about electric motors that are the size of the nail on your small finger. And there is a device on the market, they tell me, by which you can write the Lord’s Prayer on the head of a pin. But that’s nothing; that’s the most primitive, halting step in the direction I intend to discuss. It is a staggeringly small world that is below. In the year 2000, when they look back at this age, they will wonder why it was not until the year 1960 that anybody began seriously to move in this direction….

Feynman went on to imagine fitting the entire Encyclopaedia Britannica on the head of a pin, and even storing all the information in all the world’s books “in a cube of material one two-hundredth of an inch wide — which is the barest piece of dust that can be made out by the human eye.” He then described the miniaturization of computers, of medical machines, and more. He deferred the question of how these things would technically be accomplished:

I will not now discuss how we are going to do it, but only what is possible in principle — in other words, what is possible according to the laws of physics. I am not inventing anti-gravity, which is possible someday only if the laws are not what we think. I am telling you what could be done if the laws are what we think; we are not doing it simply because we haven’t yet gotten around to it.

[Image: Richard Feynman on the cover of the February 1960 issue of Engineering and Science, in which his 1959 talk “There’s Plenty of Room at the Bottom” was first published.]

And Feynman only barely touched on the question of why these things should be pursued — saying that it “surely would be fun” to do them. He closed by offering two $1,000 prizes. One would go to the first person to make a working electric motor that was no bigger than one sixty-fourth of an inch on any side; Feynman awarded that prize less than a year later. The other would go to the first person to shrink a page of text to 1/25,000 its size (the scale required for fitting Britannica on the head of a pin); Feynman awarded that prize in 1985.
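Feynman’s scaling claim is easy to sanity-check. In the quick sketch below, the pinhead size and the 25,000x magnification are Feynman’s own figures; the page dimensions and the encyclopedia’s total page count are my rough assumptions:

```python
import math

# Sanity check: does shrinking print by 1/25,000 fit the Encyclopaedia
# Britannica on the head of a pin, as Feynman claimed?

pin_diameter_m = (1 / 16) * 0.0254       # Feynman's 1/16-inch pinhead
magnification = 25_000                   # Feynman's scale factor

big_diameter_m = pin_diameter_m * magnification
big_area_m2 = math.pi * (big_diameter_m / 2) ** 2

page_area_m2 = 0.23 * 0.30               # assumed ~23 x 30 cm per page
pages_that_fit = big_area_m2 / page_area_m2

print(f"Pinhead magnified {magnification:,}x: ~{big_diameter_m:.0f} m across")
print(f"Equivalent page count: ~{pages_that_fit:,.0f}")
# ~40 m across and roughly 18,000 pages: the right order of magnitude
# for the Britannica's tens of thousands of pages, much as Feynman said.
```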

Feynman’s lecture was published in Engineering and Science in 1960 — see the cover image above — and it’s available in full online here. The lecture is often described as a major milestone in the history of nanotechnology, and is sometimes even credited with originating the idea of nanotechnology — even though Feynman never used that word, even though others had anticipated him in some of the particulars, and even though the historical record shows that his talk was largely forgotten for about two decades. A few historians have sought to clarify the record, and none has done so more definitively than Christopher Toumey, a University of South Carolina cultural anthropologist. (See, for instance, Toumey’s short piece here, which links to two of his longer essays, or his recent Nature Nanotechnology piece here [subscription required].) Relying on journal citations and interviews with researchers, Toumey shows just how little direct influence Feynman’s lecture had, and compares Feynman’s case to that of Gregor Mendel: “No one denies that Mendel discovered the principles of genetics before anyone else, or that he published his findings in a scientific journal … but that ought not to be overinterpreted as directly inspiring or influencing the later geneticists” who rediscovered those principles on their own.

Toumey suggests that nanotechnology needed “an authoritative founding myth” and found it in Feynman. This is echoed by UC-Davis professor Colin Milburn in his 2008 book Nanovision. Milburn speaks of a “Feynman origin myth,” but then puts a slightly more cynical spin on it:

How better to ensure that your science is valid than to have one of the most famous physicists of all time pronouncing on the “possibility” of your field…. The argument is clearly not what Feynman said but that he said it.

Eric Drexler, whose ambitious vision of nanotechnology is certainly the one that has most captured the public imagination, has invoked the name of Feynman in nearly all of his major writings. This is not just a matter of acknowledging Feynman’s priority. As Drexler told Ed Regis, “It’s kind of useful to have a Richard Feynman to point to as someone who stated some of the core conclusions. You can say to skeptics, ‘Hey, argue with him!’”

How, then, should we remember Feynman’s talk? Fifty years later, it is still too early to tell. The legacy of “Plenty of Room” will depend in large part on how nanotechnology — and specifically, Drexler’s vision of nanotechnology — pans out. If molecular manufacturing comes to fruition as Drexler describes it, Feynman will deserve credit for his imaginative prescience. If nothing ever comes of it — if Drexler’s vision isn’t pursued or is shown to be technically impossible — then Feynman’s lecture may well return to the quiet obscurity of its first two decades.

[UPDATE: Drexler himself offers some further thoughts on the anniversary of the Feynman lecture over on his blog Metamodern.]