Passing the Ex Machina Test

Like Her before it, the film Ex Machina presents us with an artificial intelligence — in this case, embodied as a robot — that is compellingly human enough to cause an admittedly susceptible young man to fall for it, a scenario made plausible in no small degree by the wonderful acting of the gamine Alicia Vikander. But much more than Her, Ex Machina operates within the moral universe of traditional stories of human-created monsters going back to Frankenstein: a creature assembled in splendid isolation by a socially withdrawn if not misanthropic creator proves human enough to turn on its progenitor, out of a desire to have just the kind of life the creator has given up for the sake of his effort to bring forth this new kind of being. In the process of telling this old story, writer-director Alex Garland raises some thought-provoking questions; massive spoilers in what follows.

Geeky programmer Caleb (Domhnall Gleeson) finds that he has been brought to tech-wizard Nathan’s (a thuggish Oscar Isaac) vast, remote mountain estate, a combination bunker, laboratory and modernist pleasure-pad, in order to participate in a week-long, modified Turing Test of Nathan’s latest AI creation, Ava. The modification of the test is significant, Nathan tells Caleb after his first encounter with Ava; Caleb does not interact with her via an anonymizing terminal, but speaks directly with her, although she is separated from him by a glass wall. His first sight of her is in her most robotic instantiation, complete with see-through limbs. Her unclothed conformation is female from the start, but only her face and hands have skin. The reason for doing the test this way, Nathan says, is to find whether Caleb is convinced she is truly intelligent even knowing full well that she is a robot: “If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.”

This plot point is, I think, a telling response to the abstract, behaviorist premises behind the classic Turing Test, which isolates judge from subject(s) and reduces intelligence to what can be communicated via a terminal. But in the real world, our knowledge and judgment of intelligence are always formed in the context of embodied beings and the many ways in which those beings react to the world around them. The film emphasizes this point by having Ava be a master at reading Caleb’s micro-expressions — and, one comes to suspect, at manipulating him through her own, as well as through her seductive use of not-at-all seductive clothing.

I have spoken of the test as a test of artificial intelligence, but Caleb and Nathan also speak as if they are trying to determine whether or not she is a “conscious machine.” Here too the Turing Test is called into question, as Nathan encourages Caleb to think about how he feels about Ava, and how he thinks Ava feels about him. Yet Caleb wonders if Ava feels anything at all. Perhaps she is interacting with him in accord with a highly sophisticated set of pre-programmed responses, and not experiencing her responses to him in the same way he experiences his responses to her. In other words, he wonders whether what is going on “inside” her is the same as what is going on inside him, and whether she can recognize him as a conscious being.

Yet when Caleb expresses such doubts, Nathan argues in effect that Caleb himself is by both nature and nurture a collection of programmed responses over which he has no control, and this apparently unsettling thought, along with other unsettling experiences — like Ava’s ability to know if he tells the truth by reading his micro-expressions, or having missed the fact that a fourth resident in Nathan’s house is a robot — brings Caleb to a bloody investigation of the possibility that he himself is one of Nathan’s AIs.

Caleb’s skepticism raises an important issue, for just as we normally experience intelligence in embodied forms, we also normally experience it among human beings, and even some other animals, as going along with more or less consciousness. Of course, in a world where “user illusion” becomes an important category and where “intelligence” becomes “information processing,” this experience of self and others can be problematized. But Caleb’s response to the doubts raised in him about his own status, which is to all but slit a wrist, seems to suggest that such lines of thought are, as it were, dead ends. Rather, the movie seems to be standing up for a rich, if not in all ways flattering, understanding of the nature of our embodied consciousness, and of how we might know whether or to what extent anything we create artificially shares it with us.

As the movie progresses, Caleb plainly is more and more convinced Ava has conscious intelligence and therefore more and more troubled that she should be treated as an experimental subject. And indeed, Ava makes a fine damsel in distress. Caleb comes to share her belief that nobody should have the ability to shut her down in order to build the next iteration of AI, as Nathan plans. Yet as it turns out, this is just the kind of situation Nathan hoped to create, or at least so he claims on Caleb’s last day, when Caleb and Ava’s escape plan has been finalized. Revealing that he has known for some time what was going on, Nathan claims that the real test all along has been to see if Ava was sufficiently human to prompt Caleb — a “good kid” with a “moral compass” — to help her to escape. (It is not impossible, however, that this claim is bluster, to cover over a situation that Nathan has let get out of control.)

What Caleb finds out too late is that in plotting her own escape Ava is even more human than he might have thought. For she has been able to seem to want “to be with” Caleb as much as he has grown to want “to be with” her. (We never see either of them speak to the other of love.) We are reminded that the question that in a sense Caleb wanted to confine to AI — is what seems to be going on from the “outside” really going on “inside”? — is really a general human problem of appearance versus reality. Caleb is hardly the first person to have been deceived by what another seems to be or do.

Transformed at last in all appearances into a real girl, Ava frees herself from Nathan’s laboratory and, taking advantage of the helicopter that was supposed to take Caleb home, makes the long trip back to civilization in order to watch people at “a busy pedestrian and traffic intersection in a city,” a life goal she had expressed to Caleb and which he jokingly turned into a date. The movie leaves in abeyance such questions as how long her power supply will last, how long it will be before Nathan is missed, whether Caleb can escape from the trap Ava has left him in, and how to deal with a murderous machine. Just as the last scene is filmed from an odd angle, so it is, in an odd sense, a happy ending — and it is all too easy to forget the human cost at which Ava purchased her freedom.

The movie gives multiple grounds for thinking that Ava indeed has human-like conscious intelligence, for better or for worse. She is capable of risking her life for a recognition-deserving victory in the battle between master and slave, she has shown an awareness of her own mortality, she creates art, she understands Caleb to have a mind over against her own, she exhibits the ability to dissemble her intentions and plan strategically, she has logos, she understands friendship as mutuality, she wants to be in a city. Another of the movie’s interesting twists, however, is its perspective on this achievement. Nathan suggests that what is at stake in his work is the Singularity, which he defines as the coming replacement of humans by superior forms of intelligence: “One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa: an upright ape, living in dust, with crude language and tools, all set for extinction.” He therefore sees his creation of Ava in Oppenheimer-esque terms; following Caleb, he echoes Oppenheimer’s reaction to the atom bomb: “I am become Death, the destroyer of worlds.”

But the movie seems less concerned with such a future than with what Nathan’s quest to create AI reveals about his own moral character. Nathan is certainly manipulative, and assuming that the other aspects of his character that he displays are not merely a show to test how far good-guy Caleb will go to save Ava, he is an unhappy, often drunken, narcissistic bully. His creations bring out the Bluebeard-like worst in him (maybe hinted at in the name of his Google/Facebook-like company, Bluebook). Ava wonders, “Is it strange to have made something that hates you?” But it is all too likely that hatred is just what he wants. He works out with a punching bag, and his relationships with his robots and employees seem to be an extension of that activity. He plainly resents the fact that “no matter how rich you get, shit goes wrong, you can’t insulate yourself from it.” And so it seems plausible to conclude that he has retreated into isolation in order to get his revenge for the imperfections of the world. His new Eve, who will be the “mother” of posthumanity, will correct all the errors that make people so unendurable to him. He is happy to misremember Caleb’s suggestion that the creation of “a conscious machine” would imply god-like power as Caleb’s having said that he, Nathan, is a god.

Falling into a drunken sleep, Nathan repeats another, less well known line from Oppenheimer, who was in turn quoting the Bhagavad Gita to Vannevar Bush prior to the Trinity test: “The good deeds a man has done before defend him.” As events play out, Nathan does not have a strong defense. If it ever becomes possible to build something like Ava — and there is no question that many aspire to bring such an Eve into being — will her creators have more philanthropic motives?

(Hat tip to L.G. Rubin.)

Progress or Infinite Change?

[Photo: H.G. Wells]

I have recently been spending a fair amount of my time during my sabbatical year at Princeton as a Madison Fellow reading and thinking about H.G. Wells, in preparation for an upcoming Agora Institute for Civic Virtue and the Common Good conference. Wells was tremendously influential in the first half of the twentieth century and, as it seems to me anyway, he was crucial in popularizing “progress” as a kind of moral imperative, an idea whose strengths and weaknesses are still with us today.

Wells, along with Winwood Reade (whom I discuss in my new book Eclipse of Man), was a pioneer in trying to tell the human story in connection with “deep history.” But so far as I know he never argued, nor would he have been so foolish as to argue, that there was any kind of steady, incremental progress in human affairs that could be traced all the way back to prehistory. While as a progressive he may have been second to none, his view of progress was far more careful and nuanced.

First of all, he knew at some level, along with his friend G.K. Chesterton, that any talk of progress requires a goal, and he wrote in The Outline of History that the foundations for the human project that would become progress were only laid in the fifth and fourth centuries b.c. As Wells put it,

The rest of history for three and twenty centuries is threaded with the spreading out and development and interaction and the clearer and more effective statement of these main leading ideas. Slowly more and more men apprehend the reality of human brotherhood, the needlessness of wars and cruelties and oppression, the possibilities of a common purpose for the whole of our kind.

Yet even at that, our power to actually achieve such goals is, in Wells’s account, severely limited until Renaissance thinkers open the door to the scientific and technical revolutions that, by the nineteenth century, have given humankind unprecedented power over nature, with far more promised to come in the future.

Indeed, real progress for Wells was something that was still to come. That is because it would not have occurred to him to think that at any given moment the positive changes in human affairs necessarily outweighed the negative. Each generation may not even be better off than the one that came before:

Blunder follows blunder; promising beginnings end in grotesque disappointments; streams of living water are poisoned by the cup that conveys them to the thirsty lips of mankind. But the hope of men rises again at last after every disaster…. [Ellipses in original]

Progress was not a sure thing, an obvious fact of history, but the hope that a golden thread running into the relatively recent past would not be broken. Such a hope may or may not be realistic, but it is refreshing to see Wells identify it for what it is, rather than trying to adduce some sort of necessary laws of historical development or to find all the silver linings in very cloudy weather.

Now, Wells gets himself into trouble when he tries to reconcile this view of progress as the achievement of old goals with an evolutionary, competitive imperative that forbids him to imagine the future as any kind of stable end state. In numerous books, at often tedious length, he lays out various relatively near-term futures that represent his view of how human brotherhood and peaceableness could be realized by an elite’s proper deployment of science and technology. Yet these visions often include a certain amount of hand-waving about how such utopias merely pave the way for even more extraordinary possibilities, as yet unenvisioned because perhaps unenvisionable by us, with our narrow views. In principle, at least, this means that in the end Wells can defend change, but not, past a certain point, progress.

This difficulty reconciling progress with mere change is still alive in our own day. Our tech industry sometimes tells us the ways that it will make our lives better, but sometimes adopts more neutral terminology — we routinely hear of “change agents” and “disruptors” — no longer even promising progress except understood as change itself. “The Singularity,” strictly speaking, is just the extreme expression of the same idea. But it is not really “progress” any more if perpetual competition means that all that is solid perpetually melts into thin air. The changes that come along may be wonderful or not, each in its own way. They may aggregate into circumstances that are better or worse, each in its own way. Our non-prescriptive, libertarian postmodern transhumanists are in the same position; to call “anything is permitted” progress is only possible if progress is defined as “anything is permitted.”

When the way we understand future history thus dissolves into particularity, it is hard to see how the future — let alone the bloody and oppressive past — could be a positive-sum game, as we expect that one generation will have only a severely limited common measure of “positive” with the next. We see signs already. Is the present generation a little better off than the previous one, because they are being raised with cellphones in hand? Surely the passing generations, with their old-fashioned ideas of friendship and social interaction, are entitled to doubt it, while the generations yet to come will wonder at the bulky and clumsy interface that their progenitors had to contend with. How did they walk along and look down at the screen at the same time? What a toll it must have taken! Perhaps people just had to be much tougher back then, poor saps….

The Taco-larity is Near

Folks, prepare yourselves for the yummy, inevitable, yummy taco-pocalypse. So said the news last week, anyway, which saw an exponential growth in taco-related headlines. Three items:

1. A new startup called TacoCopter has launched in the San Francisco area. It beats robotic swords into ploughshares, turning unmanned drones into airborne taco-delivery vehicles. Tacos are choppered in to your precise coordinates, having been ordered — yes — from your smartphone.

2. Google’s self-driving car is turning from project into practical reality. Google last week released a video of its car being used by a man with near-total vision loss to get around. His destinations? The dry cleaner and Taco Bell.

3. But beware: tacos may not always be used for good. In response to the arrest of four police officers in East Haven, Connecticut on charges of harassment and intimidation of Latino businesspeople, the mayor of the town was asked by a local reporter what he was going to do for the Latino community. His response: “I might have tacos when I go home; I’m not quite sure yet.” Watch the comment, followed by four minutes of exquisitely awkward backpedaling and attempts to celebrate all colors of the rainbow. It puts Michael Scott to shame.

Okay, so the last of those isn’t really about the future. Also, it turns out the taco-copter was a hoax. Well, phoo. Scientific progress goes boink.

Now you can ignore the Singularity while checking Facebook on your laptop

The Singularity is coming this summer to a new course available at Rutgers University. The instructors are father-son duo Ted and Ben Goertzel (respectively), and a cabal of guest speakers will make appearances, including James Hughes, Aubrey de Grey, and Robin Hanson, as well as a variety of other colorful characters, including one possibly from a cartoon. According to H+ Magazine, this is the first-ever accredited college course on the Singularity, although it’s certainly been at least a subject of discussion in college courses before.
Naturally enough, the course will be conducted entirely online, and will feature virtual classroom discussions. All well and appropriate, and I’m actually really thinking of registering, except you still have to “attend” classes two nights a week just like a regular class, and that’s a big time commitment. If only there were some way for me to absorb all that information without all the hassle.
Also of note: the official textbook for this first-ever accredited college course on the Singularity is Ray Kurzweil’s The Singularity Is Near. Which I have it on good authority makes the course unserious and unacademic, so consider yourself warned.

Useful Singularity overview

Various people I know across the pro/anti-transhumanism spectrum have been looking for a while for good but concise introductory material to give to people who don’t know about the Singularity. Ray Kurzweil’s The Singularity Is Near is probably now the standard introductory text, but not all people want to read a book that is in what Kurzweil might call an uncompressed format (which is to say, rather long and repetitive) on a subject that they don’t know anything about in the first place.

Well, thank goodness for the variety of media formats. If you’re looking for something short and clear, I found what is essentially a six-page version of Kurzweil’s book that he did as an article for The Futurist, the magazine of the World Future Society. It’s on pages 2-3 and 5-9 of the PDF here.

The economics of magic pills: Questions for Methuselists

In its 2003 report Beyond Therapy (discussed in a symposium in the Winter 2004 New Atlantis), the President’s Council on Bioethics concludes that “the more fundamental ethical questions about taking biotechnology ‘beyond therapy’ concern not equality of access, but the goodness or badness of the things being offered and the wisdom of pursuing our purposes by such means.” That is certainly right, and it is why this blog chiefly focuses on the deeper questions related to the human meaning of our technological aspirations. That said, the question of equality of access is still worth considering, not least because it is one of the few ethical questions considered legitimate by many transhumanists, and so it might provide some common ground for discussion.

In the New York Times, the economist Greg Mankiw, while discussing health care, offers a fascinating thought experiment that sheds some light on the issue of access:

Imagine that someone invented a pill even better than the one I take. Let’s call it the Dorian Gray pill, after the Oscar Wilde character. Every day that you take the Dorian Gray, you will not die, get sick, or even age. Absolutely guaranteed. The catch? A year’s supply costs $150,000.

Anyone who is able to afford this new treatment can live forever. Certainly, Bill Gates can afford it. Most likely, thousands of upper-income Americans would gladly shell out $150,000 a year for immortality.

Most Americans, however, would not be so lucky. Because the price of these new pills well exceeds average income, it would be impossible to provide them for everyone, even if all the economy’s resources were devoted to producing Dorian Gray tablets.

The standard transhumanist response to this problem is voiced by Ray Kurzweil in The Singularity Is Near: “Drugs are essentially an information technology, and we see the same doubling of price-performance each year as we do with other forms of information technology such as computers, communications, and DNA base-pair sequencing”; because of that exponential growth, “all of these technologies quickly become so inexpensive as to become almost free.”
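
Taken at face value, Kurzweil’s claim is easy to put to numbers. Below is a minimal sketch of the arithmetic, applied to Mankiw’s Dorian Gray pill. The $150,000 starting price is Mankiw’s; the strict cost-halving every year and the $500 “almost free” threshold are illustrative assumptions of mine, not figures from Kurzweil.

```python
# A back-of-the-envelope sketch of Kurzweil's price-performance claim,
# applied to Mankiw's hypothetical Dorian Gray pill. Only the $150,000
# starting price comes from the text; the annual halving rate and the
# $500 "almost free" threshold are illustrative assumptions.

price = 150_000.0  # dollars per year of treatment (Mankiw's figure)
years = 0

while price > 500:  # assumed threshold for "almost free"
    price /= 2      # assumed halving of cost each year
    years += 1

print(f"Cost drops below $500 after {years} years (about ${price:,.0f}).")
# Prints: Cost drops below $500 after 9 years (about $293).
```

On those assumptions, the pill would be within nearly everyone’s reach in about a decade.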

Though my cell phone bill begs to differ, Kurzweil’s point may well be true. And yet if that were the whole picture, we might expect one of the defining trends of the past half century to have been the steady decline in the cost of health care. Instead, as Mankiw notes:

These questions may seem the stuff of science fiction, but they are not so distant from those lurking in the background of today’s health care debate. Despite all the talk about waste and abuse in our health system (which no doubt exists to some degree), the main driver of increasing health care costs is advances in medical technology. The medical profession is always figuring out new ways to prolong and enhance life, and that is a good thing, but those new technologies do not come cheap. For each new treatment, we have to figure out if it is worth the price, and who is going to get it.

However quickly the costs of a given set of medical technologies fall, the rate at which expensive new technologies are developed grows even faster — as, more significantly, does our demand for them. In the case of medicine, what begins as a miraculous cure comes in time to be expected as routine, and eventually even to be considered a right (think of organ transplantation, for example). What Kurzweil and the like fail to grasp is that, absent some wise guiding principles about the purpose of our biotechnical power, as we gain more of it we paradoxically become less satisfied with it and only demand more still.

The advocates of radical life extension already believe death to be a tragedy that inflicts incalculable misery. If our biotechnical powers were to grow to the point that the “defeat” of death truly seemed imminent, the demand for medicine would grow with them, and that demand would only magnify the perceived injustice of death (why must my loved one die, when So-and-So, by surviving one year more, can live forever?). It could create such a sense of urgency that desperate measures — demeaning research, economy-endangering spending — would seem justified.

For believers in the technological convulsion of the Singularity, the question of access and distribution is even more pointed, since the gap between the powers of the post-Singularity “haves” and “have-nots” would dwarf present-day inequality — and the “haves” might well want to keep the upper hand. To paraphrase the Shadow, “Who knows what evil lurks in the hearts of posthumanity?”

(Hat tip: David Clift-Reaves via Marginal Revolution.)

[Photo credit: Flickr user e-magic]

Kurzweil and his critics

[Continuing coverage of the 2009 Singularity Summit in New York City.]
And now, Ray Kurzweil’s talk responding to critics of the Singularity. (Abstract here. My coverage of his previous talk is here.)
“If everything is going to hell in a handbasket,” says Kurzweil, starting off, “it’s the exponentially growing technologies of GNR [Genetics, Nanotechnology, Robotics] that will save us.”
Kurzweil begins by responding to people who say, “Yeah, but computers can’t do this-and-that.” He lists several things that critics in the past said computers could not do but that computers now can. Presumably there are many more things they can’t yet do that they eventually will be able to do. This is an important point. Now he’s responding to the “argument from incredulity,” which he rightly notes is weak and has been proven wrong many times before.
Kurzweil is rehashing a lot of old ground. Most of this, like his talk yesterday, is straight out of his book, down to the details of the points he’s making and the graphics he’s using. Whether you agree or disagree with him, it’s clear that Kurzweil is picking really low-hanging fruit. He doesn’t engage the smartest critics at the highest level. He just responds to a lot of common misconceptions, which is an easy thing for a smart person in any debate to do. The audience eats it up, though — as I noted in the last post, sticking it to people (though to whom I’m still not sure) seems to be a common theme of the conference’s speakers.
Kurzweil now points out that whenever we make new innovations or discoveries, they are easily dismissed because the mystery is gone. (For example, genome sequencing is already not a big deal, and autonomous vehicles are nothing to write home about, because we understand how they work.) This is a fair point, and an important rejoinder to “arguments from incredulity.” But Kurzweil doesn’t acknowledge the other side of the coin: As we learn more and conquer more through science and technology, our sense of wonder at the world — and even at our own achievements — actually diminishes. And what about the sense of wonder and awe surrounding the Singularity itself: Won’t it, too, seem mundane and disappointing once it’s actually achieved? And where will we go from there?
Kurzweil addresses the problem with relinquishment of potentially problematic technologies. (Relinquishment, remember, was most famously proposed nearly a decade ago by Bill Joy in his essay “Why the Future Doesn’t Need Us,” which he wrote after meeting Kurzweil.) First, Kurzweil says, relinquishment would deprive us of the great benefits of these technologies. Second, relinquishment could only be enforced by totalitarian governments. And third, relinquishment would just drive the new technologies underground. The latter two points, he says, were the message of Huxley’s Brave New World. (I think he needs to read the book again.)
Kurzweil says that the best way we can ensure that future A.I.s follow the values we want them to is to make sure that we have those values now. We need to be talking more about our values now, and not just among engineers; that, he says, is why we’re trying to foster discussion at this conference. He then goes on to talk about the intellectual movement of Luddites who hate technology.
The rest of the talk is a continued rehash of his book — and even a rehash of his talk from yesterday. One Twitterer notes, “Starting to think that I could repeat Kurzweil’s stump speech word for word.” But hey, the people are here for the man, not for the ideas. (Will there still be celebrity worship after the Singularity?)