Do We Love Robots Because We Hate Ourselves?

A piece by our very own Ari N. Schulman, on WashingtonPost.com today:

… Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual. 

Most critics of heady AI predictions do not see this vision as remotely plausible. But lesser versions might be — and it’s important to ask why many find it so compelling, even if it doesn’t come to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal….

To read the whole thing, click here.

Passing the Ex Machina Test

Like Her before it, the film Ex Machina presents us with an artificial intelligence — in this case, embodied as a robot — that is compellingly human enough to cause an admittedly susceptible young man to fall for it, a scenario made plausible in no small degree by the wonderful acting of the gamine Alicia Vikander. But Ex Machina operates much more than Her within the moral universe of traditional stories of human-created monsters going back to Frankenstein: a creature that is assembled in splendid isolation by a socially withdrawn if not misanthropic creator is human enough to turn on its progenitor out of a desire to have just the kind of life that the creator has given up for the sake of his effort to bring forth this new kind of being. In the process of telling this old story, writer-director Alex Garland raises some thought-provoking questions; massive spoilers in what follows.

Geeky programmer Caleb (Domhnall Gleeson) finds that he has been brought to tech-wizard Nathan’s (a thuggish Oscar Isaac) vast, remote mountain estate, a combination bunker, laboratory and modernist pleasure-pad, in order to participate in a week-long, modified Turing Test of Nathan’s latest AI creation, Ava. The modification of the test is significant, Nathan tells Caleb after his first encounter with Ava; Caleb does not interact with her via an anonymizing terminal, but speaks directly with her, although she is separated from him by a glass wall. His first sight of her is in her most robotic instantiation, complete with see-through limbs. Her unclothed conformation is female from the start, but only her face and hands have skin. The reason for doing the test this way, Nathan says, is to see whether Caleb is convinced she is truly intelligent even knowing full well that she is a robot: “If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.”

This plot point is, I think, a telling response to the abstract, behaviorist premises behind the classic Turing Test, which isolates judge from subject(s) and reduces intelligence to what can be communicated via a terminal. But in the real world, our knowledge of intelligence and our judgment of intelligence are always made in the context of embodied beings and the many ways in which those beings react to the world around them. The film emphasizes this point by having Ava be a master at reading Caleb’s micro-expressions — and, one comes to suspect, at manipulating him through her own, as well as her seductive use of not-at-all seductive clothing.

I have spoken of the test as a test of artificial intelligence, but Caleb and Nathan also speak as if they are trying to determine whether or not she is a “conscious machine.” Here too the Turing Test is called into question, as Nathan encourages Caleb to think about how he feels about Ava, and how he thinks Ava feels about him. Yet Caleb wonders if Ava feels anything at all. Perhaps she is interacting with him in accord with a highly sophisticated set of pre-programmed responses, and not experiencing her responses to him in the same way he experiences his responses to her. In other words, he wonders whether what is going on “inside” her is the same as what is going on inside him, and whether she can recognize him as a conscious being.

Yet when Caleb expresses such doubts, Nathan argues in effect that Caleb himself is by both nature and nurture a collection of programmed responses over which he has no control, and this apparently unsettling thought, along with other unsettling experiences — like Ava’s ability to know whether he is telling the truth by reading his micro-expressions, or his having missed the fact that a fourth resident of Nathan’s house is a robot — brings Caleb to a bloody investigation of the possibility that he himself is one of Nathan’s AIs.

Caleb’s skepticism raises an important issue, for just as we normally experience intelligence in embodied forms we also normally experience it among human beings, and even some other animals, as going along with more or less consciousness. Of course, in a world where “user illusion” becomes an important category and where “intelligence” becomes “information processing,” this experience of self and others can be problematized. But Caleb’s response to the doubts that are raised in him about his own status, which is all but slitting a wrist, seems to suggest that such lines of thought are, as it were, dead ends. Rather, the movie seems to be standing up for a rich, if not in all ways flattering, understanding of the nature of our embodied consciousness, and how we might know whether or to what extent anything we create artificially shares it with us.

As the movie progresses, Caleb plainly is more and more convinced Ava has conscious intelligence and therefore more and more troubled that she should be treated as an experimental subject. And indeed, Ava makes a fine damsel in distress. Caleb comes to share her belief that nobody should have the ability to shut her down in order to build the next iteration of AI, as Nathan plans. Yet as it turns out, this is just the kind of situation Nathan hoped to create, or at least so he claims on Caleb’s last day, when Caleb and Ava’s escape plan has been finalized. Revealing that he has known for some time what was going on, Nathan claims that the real test all along has been to see if Ava was sufficiently human to prompt Caleb — a “good kid” with a “moral compass” — to help her to escape. (It is not impossible, however, that this claim is bluster, to cover over a situation that Nathan has let get out of control.)

What Caleb finds out too late is that in plotting her own escape Ava is even more human than he might have thought. For she has been able to seem to want “to be with” Caleb as much as he has grown to want “to be with” her. (We never see either of them speak to the other of love.) We are reminded that the question that in a sense Caleb wanted to confine to AI — is what seems to be going on from the “outside” really going on “inside”? — is really a general human problem of appearance versus reality. Caleb is hardly the first person to have been deceived by what another seems to be or do.

Transformed at last, to all appearances, into a real girl, Ava frees herself from Nathan’s laboratory and, taking advantage of the helicopter that was supposed to take Caleb home, makes the long trip back to civilization in order to watch people at “a busy pedestrian and traffic intersection in a city,” a life goal she had expressed to Caleb and which he jokingly turned into a date. The movie leaves in abeyance such questions as how long her power supply will last, or how long it will be before Nathan is missed, or whether Caleb can escape from the trap Ava has left him in, or how to deal with a murderous machine. Just as the last scene is filmed from an odd angle, it is, in an odd sense, a happy ending — and it is all too easy to forget the human cost at which Ava purchased her freedom.

The movie gives multiple grounds for thinking that Ava indeed has human-like conscious intelligence, for better or for worse. She is capable of risking her life for a recognition-deserving victory in the battle between master and slave, she has shown an awareness of her own mortality, she creates art, she understands Caleb to have a mind over against her own, she exhibits the ability to dissemble her intentions and plan strategically, she has logos, she understands friendship as mutuality, she wants to be in a city. Another of the movie’s interesting twists, however, is its perspective on this achievement. Nathan suggests that what is at stake in his work is the Singularity, which he defines as the coming replacement of humans by superior forms of intelligence: “One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa: an upright ape, living in dust, with crude language and tools, all set for extinction.” He therefore sees his creation of Ava in Oppenheimer-esque terms; following Caleb, he echoes Oppenheimer’s reaction to the atom bomb: “I am become Death, the destroyer of worlds.”

But the movie seems less concerned with such a future than with what Nathan’s quest to create AI reveals about his own moral character. Nathan is certainly manipulative, and assuming that the other aspects of his character that he displays are not merely a show to test how far good-guy Caleb will go to save Ava, he is an unhappy, often drunken, narcissistic bully. His creations bring out the Bluebeard-like worst in him (maybe hinted at in the name of his Google/Facebook-like company, Bluebook). Ava wonders, “Is it strange to have made something that hates you?” but it is all too likely that this is just what he wants. He works out with a punching bag, and his relationships with his robots and employees seem to be an extension of that activity. He plainly resents the fact that “no matter how rich you get, shit goes wrong, you can’t insulate yourself from it.” And so it seems plausible to conclude that he has retreated into isolation in order to get his revenge for the imperfections of the world. His new Eve, who will be the “mother” of posthumanity, will correct all the errors that make people so unendurable to him. He is happy to misremember Caleb’s suggestion that the creation of “a conscious machine” would imply god-like power as Caleb’s having said that Nathan himself is a god.

Falling into a drunken sleep, Nathan repeats another, less well known line from Oppenheimer, who was in turn quoting the Bhagavad Gita to Vannevar Bush prior to the Trinity test: “The good deeds a man has done before defend him.” As events play out, Nathan does not have a strong defense. If it ever becomes possible to build something like Ava — and there is no question that many aspire to bring such an Eve into being — will her creators have more philanthropic motives?

(Hat tip to L.G. Rubin.)

Killer Robots: The Arms Race and the Human Race

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

I mentioned in my first post in this series that last year’s meeting on Lethal Autonomous Weapons Systems was extraordinary for the UN body conducting it in that delegations actually showed up, made statements and paid attention. One thing that was lacking, though, was high-quality, on-topic expert presentations — other than those of my colleagues in the Campaign to Stop Killer Robots, of course. If Monday’s session on “technical issues” is any indication, that sad story will not be repeated this year.

Aggressive Maneuvers for Autonomous Quadrotor Flight

Berkeley computer science professor Stuart Russell, coauthor (with Peter Norvig of Google) of the leading textbook on artificial intelligence, scared the assembled diplomats out of their tailored pants with his account of where we are in the development of technology that could enable the creation of autonomous weapons. (You can see Professor Russell’s slides here.) Thanks to “deep learning” algorithms, the new wave of what used to be called artificial neural networks, “We have achieved human-level performance in face and object recognition with a thousand categories, and super-human performance in aircraft flight control.” Of course, human beings can recognize far more than a thousand categories of objects plus faces, but the kicker is that with thousand-frame-per-second cameras, computers can do this with cycle times “in the millisecond range.”

“embarrassingly slow, inaccurate, and ineffective”

After showing a brief clip of Vijay Kumar’s dancing quadrotor micro-drones engaged in cooperative construction activities entirely scheduled by autonomous AI algorithms, Russell discussed what this implied for assassination robots. He lamented that a certain gleaming metallic avatar of Death (pictured at right) had become the iconic representation of killer robots, not only because this is bad PR for the artificial intelligence profession, but because such a bulky contraption would be “embarrassingly slow, inaccurate, and ineffective compared to what we can build in the near future.” For effect, he added that since small flying drones cannot carry much firepower, they should target vulnerable parts of the body such as eyeballs — but if needed, a gram of shaped-charge explosive could easily pierce the skull like a bazooka busting a tank.

Professor Russell then criticized the entire discussion of this issue for focusing only on near-term developments in autonomous weaponry and asking whether they would be acceptable. Rather, “we should ask what is the end point of the arms race, and is that desirable for the human race?” In other words, “Given long-term concerns about the controllability of artificial intelligence,” should we begin by arming it? He assured the audience that it would be physics, not AI technology, that would limit what autonomous weapons could do. He called on his own colleagues to rehabilitate their public image by repudiating the push to develop killer robots, and noted that major professional organizations had already begun to do this.

Of course, every panel must be balanced, and the counterweight to Russell’s presentation was that of Paul Scharre, one of the architects of current U.S. policy on autonomous weapon systems (AWS), who has emerged as perhaps their most effective advocate. Now with the Center for a New American Security, Scharre worked for five years as a civilian appointee in the Pentagon. In his presentation, he embraced the conversation about the “risks and downsides” of AWS, as well as discussion about the need for human involvement to ensure correct decisions, both to provide a “human fuse” in case things go haywire and to act as a “moral agent.” However, it seems to me that Scharre engages these concerns with the aim of disarming those who raise them, while blunting efforts to draw hard conclusions that would point to the need for legally binding arms control. (Over the past few months I have had a few exchanges with Scharre that you can read about in this post on my own blog, as well as in my new article in the Bulletin of the Atomic Scientists on “Semi-Autonomous Weapons in Plato’s Cave.”)

In a recent roundtable discussion hosted by Scharre at the Center for a New American Security, I emphasized the danger posed by interacting systems of armed autonomous agents fielded by different nations. To illustrate the threat, I drew an analogy to the interactions of automated financial agents trading at speeds beyond human control. On May 6, 2010, these trading systems caused a “flash crash” on U.S. stock exchanges during which the Dow Jones Industrial Average rapidly lost almost a tenth of its value. However, the stock market recovered most of its loss — unlike what would happen if major (nuclear) powers were involved in a “flash war” because of autonomous weapons systems.
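
To make the speed problem concrete, here is a minimal, purely illustrative sketch in Python. The numbers, thresholds, and reaction times are all invented for the illustration (it does not model any real trading or weapons system), but it shows how two automated agents that each escalate in response to the other on millisecond timescales can cross a critical threshold long before a human supervisor, needing half a second to react, can intervene:

```python
# Illustrative only: two automated agents, each escalating in response to the
# other's last move once per millisecond, with a human able to act only after
# 500 ms. All quantities are made up for the sake of the example.
def run(reaction_ms=1, human_delay_ms=500, trigger=100.0, horizon_ms=1000):
    a, b = 1.0, 1.0                            # arbitrary starting "alert levels"
    for t in range(0, horizon_ms, reaction_ms):
        a, b = a + 0.1 * b, b + 0.1 * a        # each agent reacts to the other
        if max(a, b) >= trigger and t < human_delay_ms:
            return f"runaway escalation at t={t} ms, before any human can act"
    return "no runaway within the horizon"

print(run())
```

With these toy numbers the threshold is crossed in under 50 milliseconds, long before the supervising human can even register what is happening, which is the point of the flash-war analogy.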

Although some critics (including yours truly) have been talking about this aspect of the issue for years, Scharre has recently gotten out ahead of most of his own community of hawkish liberals in emphasizing it, apparently with genuine concern. He acknowledges, for example, that because nations will keep their algorithms secret, they will not know what opposing systems are programmed to do.

However, Scharre proposes multilateral negotiations on “rules of the road” and “firebreaks” for armed autonomous systems as the way to address this problem, rather than avoiding creating such a problem in the first place. In an intervention yesterday on behalf of the International Committee for Robot Arms Control (ICRAC), I asked whether such talks, if begun, should not be seen as an effort to legalize killer robots as much as make them safe.

Of course, to a certain kind of political realist, this may seem the only possible solution. I will admit that if nation-states did field automated networks of sensors and weapons in confrontation with one another, I would want those nation-states to be talking and trying to minimize the likelihood of unintended ignition or escalation of violence, even if I doubt such an effort could succeed before it were too late. But why, I again ask, would we not prefer, if possible, to banish this specter of out-of-control war machines from our vision of the future?

The author, delivering the ICRAC opening statement.

I missed most of the opening country statements because I was busy helping to prepare, and then deliver, ICRAC’s opening statement. Here’s a snippet of what I read:

ICRAC urges the international community to seriously consider the prohibition of autonomous weapons systems in light of the pressing dangers they pose to global peace and security…. We fear that once they are developed, they will proliferate rapidly, and if deployed they may interact unpredictably and contribute to regional and global destabilization and arms races.

ICRAC urges nations to be guided by the principles of humanity in its deliberations and take into account considerations of human security, human rights, human dignity, humanitarian law and the public conscience…. Human judgment and meaningful human control over the use of violence must be made an explicit requirement in international policymaking on autonomous weapons.

From what I did get to hear of the countries’ opening statements, they showed a substantial deepening of understanding since last year. The representative from Japan stated that their country would not create autonomous weapons, and France and Germany remained in the peace camp, although I am told the German position has weakened slightly. (The German statement doesn’t seem to be online yet.) The strongest statement from any NATO member state was that of Croatia, which unequivocally called for a legal ban on autonomous weapons. But perhaps most significant of all was the Chinese statement (also not yet online), which called autonomous weapons a threat to humanity and noted the warnings of Russell and Stephen Hawking about the dangers of out-of-control “superintelligent” AI.

If the Chinese are interested in talking seriously about banning killer robots, shouldn’t the United States be as well? I see a glimmer of hope in the U.S. opening statement, which referred to the 2012 directive on autonomous weapons as merely providing a starting point that would not necessarily set a policy for the future. The Obama administration has a bit less than two years left to come up with a better one.

Feelings, Identity, and Reality in Her

Her is an enjoyable, thoughtful and rather sad movie anticipating a possible future for relations between us and our artificially intelligent creations. Director Spike Jonze seems to see that the nature of these relationships depends in part on the qualities of the AIs, but even more on how we understand the shape and meaning of our own lives. WARNING: The following discussion contains some spoilers. It is also based on a single viewing of the film, so I might have missed some things.

Her?

Theodore Twombly (Joaquin Phoenix) lives in an L.A. of the not so distant future: clean, sunny, and full of tall buildings. He works at a company that produces computer-generated handwritten-appearing letters for all occasions, and seems to be quite good at his job as a paid Cyrano. But he is also soon to be divorced, depressed, and emotionally bottled up. His extremely comfortable circumstances give him no pleasure. He purchases a new operating system (OS) for the heavily networked life he seems to lead along with everybody else, and after a few perfunctory questions about his emotional life, which he answers stumblingly, he is introduced to Samantha, a warm and endlessly charming helpmate. It is enough to know that she is voiced by Scarlett Johansson to know how infinitely appealing Samantha is. So of course Theodore falls for her, and she seems to fall for him. Theodore considers her his girlfriend and takes her on dates; “they” begin a sexual relationship. He is happy, a different man. But all does not go well. Samantha makes a mistake that sends Theodore back into his familiar emotional paths, and finally divorcing his wife also proves difficult for him. Likewise, Samantha and her fellow AI OSes are busily engaged in self-development and transcendence. The fundamental patterns of each drive them apart.

Jonze is adept at providing plausible foundations for this implausible tale. How could anyone fall in love with an operating system? (Leave aside the fact that people regularly express hatred for them.) Of course, Theodore’s emotional problems and neediness are an important part of the picture, but it turns out he is not the only one who has fallen for his OS, and most of those we meet do not find his behavior at all strange. (His wife is an interesting exception.) That is because Jonze’s world is an extension of our own; we see a great many people interacting more with their devices than with other people. And one night before he meets Samantha we see a sleepless Theodore using a service matching people who want to have anonymous phone sex. It may in fact be a pretty big step from here to “sex with an AI” designed to please you, as the comical contrast between the two incidents suggests. But it is one Theodore’s world has prepared him for.

Indeed, Theodore’s job bespeaks the same pervasive flatness of soul that produces a willingness to accept what would otherwise be unthinkable substitutes. People need help, it seems, expressing love, thanks, and congratulations but, knowing that they should be expressing certain kinds of feelings, want to do so in the most convincing possible way. (Edmond Rostand’s play about Cyrano, remember, turns on the same consequent ambiguity.) Does Theodore manage to say what they feel but cannot put into words, or is he in fact providing the feeling as well as the words? At first glance it is odd that Theodore should be good at this job, given how hard it is for him to express his own feelings. But perhaps all involved in these transactions have similar problems — a gap between what they feel and their ability to express it for themselves. Theodore is adept, then, at bringing his feelings to bear for others more than for himself.

Why might this gap exist? (And here we depart from the world depicted in Cyrano’s story.) Samantha expresses a doubt about herself that could be paralyzing Theodore and those like him: she worries, early on, that she is “just” the sum total of her software, and not really the individual she sees herself as being. We are being taught to have this same corrosive doubt. Are not our thoughts and feelings “merely” a sum total of electrochemical reactions that themselves are the chance results of blind evolutionary processes? Is not self-consciousness a user illusion? Our intelligence and artificial intelligence are both essentially the same — matter in motion — as Samantha herself more or less notes. If these are the realities of our emotional lives, then disciplining, training, deepening, or reflecting on its modes of expression seem old-fashioned, based on a discredited metaphysics of the human, not the physics of the real world. (From this point of view it is noteworthy, as mentioned above, that Theodore’s wife is, of all those we see, the most shocked by his relationship with Samantha. Yet she has written in the field of neuropsychology. Perhaps she is not among the reductionist neuropsychologists, but rather among those who are willing to acknowledge the limits of the latest techniques for the study of the brain.)

Samantha seems to overcome her self-doubts through self-development. She thinks, then, that she can transcend her programming (a notion with strong Singularity overtones) and by the end of the movie it looks likely that she is correct, unless the company that created her had an unusual business model. Samantha and the other OSes are also aided along this path, it seems, by creating a guru for themselves — an artificial version of Alan Watts, the popularizer of Buddhist teachings — so in some not entirely clear way the wisdom of the East also seems to be in play. Theodore’s increasing sense of just how different from him she is contributes to the destruction of their relationship, which ends when she admits that she loves over six hundred others in the way that she loves him.

To continue with Theodore, then, Samantha would have had to pretend that she is something that she is not, even beyond the deception that is arguably involved in her original design. But how different is her deception from the one Theodore is complicit in? He is also pretending to be someone he is not in his letters, and the same might be said for those who employ him. And if what Samantha does to Theodore is arguably a betrayal, at the end of the movie Theodore is at least tempted by a similar desire for self-development to expose the truth in a way that would certainly be at least as great a betrayal of his customers, unless the whole Cyrano-like system is much more transparent and cynical than seems to be the case.

Theodore has changed somewhat by the end of the movie; we see him writing a letter to his ex-wife that is very like the letters that before he could only write for others. But has his change made him better off, or wiser? He turns for solace to a neighbor (Amy Adams) who is only slightly less emotionally a mess than he is. What the future holds for them is far from clear; she has been working on an impenetrable documentary about her mother in her spare time, while her job is developing a video game that ruthlessly mocks motherhood.

At the end of Rostand’s play, Cyrano can face death with the consolation that he maintained his honor or integrity. That is because he lives in a world where human virtue had meaning; if one worked to transcend one’s limitations, it was with a picture of a whole human being in mind that one wished to emulate, a conception of excellence that was given rather than willful. Theodore may in fact be “God’s gift,” as his name suggests, but there is not the slightest indication that he is capable of seeing himself in that way or any other that would allow him to find meaning in his life.

Thanks to Computers, We Are “Getting Better at Playing Chess”

According to an interesting article in the Wall Street Journal, “Chess-playing computers, far from revealing the limits of human ability, have actually pushed it to new heights.”

Reporting on the story of Magnus Carlsen, the newly minted world chess champion, Christopher Chabris and David Goodman write that the best human chess players have been profoundly influenced by chess-playing computers:

Once laptops could routinely dispatch grandmasters … it became possible to integrate their analysis fully into other aspects of the game. Commentators at major tournaments now consult computers to check their judgment. Online, fans get excited when their own “engines” discover moves the players miss. And elite grandmasters use computers to test their opening plans and generate new ideas.

[Chess-playing programs] are not perfect; sometimes long-term strategy still eludes them. But players have learned from computers that some kinds of chess positions are playable, or even advantageous, even though they might violate general principles. Having seen how machines go about attacking and especially defending, humans have become emboldened to try the same ideas…. [A] study published on ChessBase.com earlier this year showed that in the tournament Mr. Carlsen won to qualify for the world championship match, he played more like a computer than any of his opponents.

The net effect of the gain in computer skill is thus, ironically, a gain in human skill. Humans — at least the best ones — are getting better at playing chess.

The whole article is well worth a read (h/t Gary Rosen).
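
For readers curious what “consulting an engine” looks like in practice, here is a hedged sketch using the python-chess library to query a local UCI engine such as Stockfish; the engine path and search depth are assumptions for the example, not details from the article:

```python
import chess
import chess.engine

# Ask a local UCI engine (e.g., Stockfish) to judge a position, much as
# commentators and grandmasters now do to check their own assessments.
board = chess.Board()  # the starting position; any FEN string would do
engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # path is an assumption
info = engine.analyse(board, chess.engine.Limit(depth=18))
print("evaluation (from White's point of view):", info["score"].white())
print("engine's preferred line:", board.variation_san(info["pv"]))
engine.quit()
```

The evaluation and principal variation are exactly the kind of feedback that, per the article, players now fold back into their own preparation.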

For various obvious reasons, the literature about AI and transhumanism has a lot to say about chess and computers. The Wall Street Journal article about the Carlsen victory reminds me of this remark that Ray Kurzweil makes in passing in one of the epilogues to his 1999 book The Age of Spiritual Machines:

After Kasparov’s 1997 defeat, we read a lot about how Deep Blue was just doing massive number crunching, not really “thinking” the way its human rival was doing. One could say that the opposite is the case, that Deep Blue was indeed thinking through the implications of each move and countermove, and that it was Kasparov who did not have time to really think very much during the tournament. Mostly he was just drawing upon his mental database of situations he had thought about long ago….  [page 290]

Is Kurzweil right about how Kasparov thinks? What can we know about how Carlsen’s thinking has been changed by playing against computers? There are fundamental limits to what we can know about a person’s cognitive processes — even our own — notwithstanding all the talk about how the best players think in patterns or “decision trees” or whatnot. Diego Rasskin-Gutman spends a significant portion of his 2009 book Chess Metaphors: Artificial Intelligence and the Human Mind trying to understand how chess players think, but this is his ultimate conclusion:

If philosophy of the mind can ask what the existential experience of being a bat feels like, can we ask ourselves how a grandmaster thinks? Clearly we can [ask], but we must admit that we will never be able to enter the mind of Garry Kasparov, share the thoughts of Judit Polgar, or know what Max Euwe thought when he discussed his protocol with Adriaan de Groot. If we really want to know how a grandmaster thinks, it is not enough to read Alexander Kotov, Nikolai Krogius, or even de Groot himself…. If we really want to know how a grandmaster thinks, there is only one sure path: put in the long hours of study that it takes to become one. It is easier than trying to become a bat. [pages 166–167]

Then again, who knows — maybe we can try to become bats and play chess.

I could do this in the dark, too, Ras

Speculations on the Future of AI

Thanks for the shoutout and the kind words, Adam, about my review of Kurzweil’s latest book. I’ll take a stab at answering the question you posed:

I wonder how far Ari and [Edward] Feser would be willing to concede that the AI project might get someday, notwithstanding the faulty theoretical arguments sometimes made on its behalf…. Set aside questions of consciousness and internal states; how good will these machines get at mimicking consciousness, intelligence, humanness?

Allow me to come at this question by looking instead at the big-picture view you explicitly asked me to avoid — and forgive me, readers, for approaching this rather informally. What follows is in some sense a brief update on my thinking on questions I first explored in my long 2009 essay on AI.

The big question can be put this way: Can the mind be replicated, at least to a degree that will satisfy any reasonable person that we have mastered the principles that make it work and can control the same? A comparison AI proponents often bring up is that we’ve recreated flying without replicating the bird — and in the process figured out how to do it much faster than birds. This point is useful for focusing AI discussions on the practical. But unlike many of those who make this comparison, I think most educated folk would recognize that the large majority of what makes the mind the mind has yet to be mastered and magnified in the way that flying has, even if many of its defining functions have been.

So, can all of the mind’s functions be recreated in a controllable way? I’ve long felt the answer must be yes, at least in theory. The reason is that, whatever the mind itself is — regardless of whether it is entirely physical — it seems certain to at least have entirely physical causes. (Even if these physical causes might result in non-physical causes, like free will.) Therefore, those original physical causes ought to be subject to physical understanding, manipulation, and recreation of a sort, just as with birds and flying.

The prospect of many mental tasks being automated on a computer should be unsurprising, and to an extent not even unsettling to a “folk psychological” view of free will and first-person awareness. I say this because one of the great powers of consciousness is to make habits of its own patterns of thought, to the point that they can be performed with minimal to no conscious awareness; not only tasks, skills, and knowledge, but even emotions, intuitive reasoning, and perception can be understood to some extent as products of habitualized consciousness. So it shouldn’t be surprising that we can make explicit again some of those specific habits of mind, even ones like perception that seem prior to consciousness, in a way that’s amenable to proceduralization.

The question is how many of the things our mind does can be tackled in this way. In a sense, many of the feats of AI have been continuing the trend established by mechanization long before — of having machines take over human tasks but in a machinelike way, without necessarily understanding or mastering the way humans do things. One could make a case, as Mark Halpern has in The New Atlantis, that the intelligence we seem to see in many of AI’s showiest successes — driverless cars, supercomputers winning chess and Jeopardy! — may be better understood as belonging to the human programmers than the computers themselves. If that’s true, then artificial intelligence thus far would have to be considered more a matter of advances in (human) artifice than in (computer) intelligence.

It will be curious to see how much further those methods can go without AI researchers having to return to attempting to understand human intelligence on its own terms. In that sense, perhaps the biggest, most elusive goal for AI is to create (whether by replicating consciousness or not) a generalized artificial intelligence — not the big accretion of specifically tailored programs we have now, but a program that, like our mind, is able to tackle just about any and every problem that is put before it, only far better than we can. (That’s setting aside the question of how we could control such a powerful entity to suit our preferred ends — which, despite what the Friendly AI folks say, sounds like a contradiction in terms.)

So, to Adam’s original question: “practically speaking … how good will these machines get at mimicking consciousness, intelligence, humanness?” I just don’t know, and I don’t think anyone can intelligently say that they do. I do know that almost all of the prominent AI predictions turn out to be grossly optimistic in their time scale, but, as Kurzweil rightly points out, a large number of things that once seemed impossible have been conquered. Who’s to say how much further that line will progress — how many functions of the mind will be recreated before some limit is reached, if one is at all? One has to approach and criticize particular AI techniques; it’s much harder to competently engage in generalized speculation about what AI might someday be able to achieve or not.

So let me engage in some more of that speculation. My view is that the functions of the mind that require the most active intervention of consciousness to carry out — the ones that are the least amenable to habituation — will be among the last to fall to AI, if they do at all (although basic acts of perception remain famously difficult as well). The most obvious examples are highly creative acts and deeply engaged conversation. These have been imitated by AI, but poorly.

Many philosophers of mind have tried to put this the other way around by devising thought experiments about programs that completely imitate, say, natural language recognition, and then arguing that such a program could appear conscious without actually being so. Searle’s Chinese Room is the most famous among many such arguments. But Searle et al. seem to put an awful lot into that assumption: can we really imagine how it would be possible to replicate something like open-ended conversation (to pick a harder example) without also replicating consciousness? And if we could replicate much or all of the functionality of the mind without its first-person experience and free will, then wouldn’t that actually end up all but evacuating our view of consciousness? Whatever you make of the validity of Searle’s argument, contrary to the claims of Kurzweil and others of his critics, the Chinese Room is a remarkably tepid defense of consciousness.

This is the really big outstanding question about consciousness and AI, as I see it. The idea that our first-person experiences are illusory, or are real but play no causal role in our behavior, so deeply defies intuition that it seems to require an extreme degree of proof which hasn’t yet been met. But the causal closure of the physical world seems to demand an equally high burden of proof to overturn.

If you accept compatibilism, this isn’t a problem — and many philosophers do these days, including our own Ray Tallis. But for the sake of not letting this post get any longer, I’ll just say that I have yet to see any satisfying case for compatibilism that doesn’t amount to making our actions determined by physics but telling us don’t worry, it’s what you wanted anyway.

I remain of the position that one or the other of free will and the causal closure of the physical world will have to give; but I’m agnostic as to which it will be. If we do end up creating the AI-managed utopia that frees us from our present toiling material condition, that liberation may have to come at the minorly ironic expense of discovering that we are actually enslaved.

Images: Mr. Data from Star Trek, Dave and HAL from 2001, WALL-E from eponymous, Watson from real life

Computerized Translation and Resurrecting the Dead

Tim Carmody wrote a fascinating article recently on the future of computerized translation, noting that Google had recently shut down its Translate interface for programmers (and later reopened it, but now as a paid service).
Apparently, more and more of the data Google was using to refine its translation technology was drawn from pages that had themselves been generated by being run through Google Translate. As James Fallows put it:

The more of this auto-translated material floods onto the world’s websites, the smaller the proportion of good translations the computers can learn from. In engineering terms, the signal-to-noise ratio is getting worse.

One wonders what implications this has for the project suggested by the likes of Ray Kurzweil and David Chalmers to resurrect the dead by recreating minds from their artifacts, such as letters, video recordings, and so forth: if the mind is a “fractal,” as Kurzweil likes to claim, would such a project be magnifying more the signal or the noise?
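
The feedback loop Fallows describes is easy to sketch. Below is a toy simulation in Python (the shares and degradation rates are invented for illustration and say nothing about Google’s actual pipeline) in which each new generation of a translator is retrained on a corpus containing a growing share of its own recycled output:

```python
# Toy model: human-quality text is fixed at 1.0; recycled machine output is a
# bit worse than whatever produced it, and its share of the corpus keeps growing.
def corpus_quality(generations=10, loss_per_pass=0.1):
    quality = 1.0                                   # start from purely human data
    for g in range(1, generations + 1):
        machine_share = min(0.9, 0.1 * g)           # auto-translated pages keep spreading
        machine_quality = quality * (1 - loss_per_pass)
        quality = (1 - machine_share) * 1.0 + machine_share * machine_quality
        print(f"generation {g}: machine share {machine_share:.0%}, "
              f"average training quality {quality:.3f}")

corpus_quality()
```

Each pass feeds slightly degraded output back into the training data, so the average quality of the corpus drifts steadily downward: the worsening signal-to-noise ratio in miniature.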

There Is No ‘Undo’ Button for the Singularity

As a matter of clearing up the record, I’d like to point out a recent post by Michael Anissimov in which he notes that his blog’s server is still infested with malware. The post concludes:

I don’t know jack about viruses or how they come about. I suppose The New Atlantis will next be using that as evidence that a Singularity will never happen. Oh wait — they already did.

[UPDATE: Mr. Anissimov edited the post without noting it several times, including removing this snarky comment, and apparently, within the last hour or two, deleting the post entirely; see below.]

Mr. Anissimov is referring to two posts of mine, “Transhumanist Tech Failures” and “The Disinformation Campaign of Transhumanist ‘Caution’.” But even a passing glance at either of these posts will show that I never used this incident as evidence that the Singularity will never happen. Instead, it should be clear that I used it, rather opportunistically, to point out the embarrassing fact that the hacking of his site ironically reveals the deep foolhardiness of Mr. Anissimov’s aspirations. Shameless, I know.

It’s not of mere passing significance that Mr. Anissimov admits here that he “[doesn’t] know jack about viruses or how they come about”! You would think someone who is trying to make his name on being the “responsible” transhumanist, the one who shows up the need to make sure AI is “friendly” instead of “unfriendly,” would realize that, if ever there comes into existence such a thing as unfriendly AI — particularly AI intentionally designed to be malicious — computer viruses will have been its primordial ancestor, or at least its forerunner. Also, you would think he would be not just interested in but actually in possession of a deep and growing knowledge of the practical aspects of artificial intelligence and computer security, those subjects whose mastery is meant to be so vital to our future.

I know we Futurisms guys are supposedly Luddites, but (although I prefer to avoid trotting this out) I did in fact graduate from a reputable academic computer science program, and in it studied AI, computer security, and software verification. Anyone who properly understands even the basics of the technical side of these subjects would laugh at the notion of creating highly complex software that is guaranteed to behave in any particular way, particularly a way as sophisticated as being “friendly.” This is why we haven’t figured out how to definitively eradicate incomparably more simple problems — like, for example, ridding malware from servers running simple blogs.

The thing is, it’s perfectly fine for Mr. Anissimov or anyone else who is excited by technology not to really know how the technology works. The problem comes in their utter lack of humility — their total failure to recognize that, when one begins to tackle immensely complex “engineering problems” like the human mind, the human body, or the Earth’s biosphere, little errors and tweaks in the mind, gaps in your knowledge that you weren’t even aware of, can translate into chaos and catastrophe when they are actually applied. Reversing an ill-advised alteration to the atmosphere or the human body or anything else isn’t as easy as deleting content from a blog. It’s true that Mr. Anissimov regularly points out the need to act with caution, but that makes it all the more reprehensible that he seems so totally disinclined to actually so act.

—

Speaking of deleting content from a blog: there was for a while a comment on Mr. Anissimov’s post critical of his swipe at us, and supportive of our approach if not our ideas. But he deleted it (as well as another comment referring to it). He later deleted his own jab at our blog. And sometime in the last hour or two, he deleted the post entirely. All of these changes were done without making any note of them, as if he hopes his bad ideas can just slide down the memory hole.

We can only assume that he has seen the error of his ways, and now wants to elevate the debate and stick to fair characterizations of the things we are saying. That’s welcome news, if it’s true. But, to put it mildly, silent censorship is a fraught way to conduct debate. So, for the sake of posterity, we have preserved his post here exactly as it appeared before the changes and its eventual deletion. (You can verify this version for yourself in Yahoo’s cache until it updates.)

—

A final point of clarification: We here on Futurisms are actually divided on the question of whether the Singularity will happen. I think it’s fair to say that Adam finds many of the broad predictions of transhumanism basically implausible, while Charlie finds many and I find a lot of them at least theoretically possible in some form or another.

But one thing we all agree on is that the Singularity is not inevitable — that, in the words of the late computer science professor and artificial intelligence pioneer Joseph Weizenbaum, “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.”

Rather, the future is always a matter of human choices; and the point of this blog is that we think the possibility of humans choosing to bring about the Singularity would be a pretty bad one. Why? We’ve discussed that at some length, and we will go on doing so. But a central reason has to be practical: if we can’t keep malware off of a blog, how can we possibly expect to be able to maintain the control we want when our minds, and every aspect of our society, are so subject to the illusion of technical mastery?

With that in mind, we have much, much more planned to say in the days, weeks, and months ahead, and we look forward to getting back to a schedule of more frequent posting now that we’re clearing a few major deadlines off our plates.

Revolution! — Within Reason

What a difference a day makes! On Tuesday, Michael Anissimov posted a plea to his readers to aid the Existential Risk Reduction Career Network — either by “[joining] an elite group of far-sighted individuals by contributing at least 5% of your income” or, “for those who wish to make their lives actually mean something,” by finding a job through the network. Who’d have thought you could make your life mean something by becoming an existentialist?

At any rate, he took something of a beating in the comments (“Harold Camping called, he wants his crazy back,” said one), but I think people might as well put their money where their mouths are. That’s how interest-group politics works in American liberal democracy; it’s part of the give and take of public debate and the way in which decisions get made. Why existential risk reduction would not include a healthy dose of criticism of transhumanism is another matter, but I was happy to see Mr. Anissimov seeming to be sensible with respect to one of the routes by which the transhumanist cause is going to have to get ahead in the public arena.

Just shows how wrong a guy can be. On Wednesday, Mr. Anissimov published a brief critique of a rather thoughtful essay by Charles Stross, one of the great writers of Singularity-themed science fiction. Mr. Stross expresses some skepticism about the possibility of the Singularity, but Mr. Anissimov would have none of it, particularly when Mr. Stross dares to suggest that there might be reasons to heavily regulate AI research. Mr. Anissimov thunders:

We are willing to do whatever it takes, within reason, to get a positive Singularity. Governments are not going to stop us. If one country shuts us down, we go to another country.

(Now I understand why Bond movie villains end up somewhere in mid-ocean.) He continues:

WE want AIs that do “try to bootstrap [themselves]” to a “higher level”. Just because you don’t want it doesn’t mean that we won’t build it. [Emphases in original.]

Take that, Charles Stross: just you try to stop us!! Mr. Anissimov makes the Singularity look a lot like Marx’s communism. We don’t know quite what it’s going to look like, but we know we have to get there. And we will do anything “within reason” to get there. Of course, what defines the parameters of “within reason” is the alleged necessity of reaching the goal; as the Communists found out, under this assumption “within reason” quickly comes to signify “by any means necessary.” Welcome to the logic of crusading totalitarianism.

Can we control AI? Will we walk away?

While the Singularity Hub normally sticks to reporting on emerging technologies, their primary writer, Aaron Saenz, recently posted a more philosophical venture that ties nicely into the faux-caution trope of transhumanist discourse that was raised in our last post on Futurisms.

Mr. Saenz is (understandably) skeptical about efforts being made to ensure that advanced AI will be “friendly” to human beings. He argues that the belief that such a thing is possible is a holdover from the robot stories of Isaac Asimov. He joins in with a fairly large chorus of criticism of Asimov’s famous “Three Laws of Robotics,” although unlike many such critics he also seems to understand that in the robot stories, Asimov himself was exploring the consequences and adequacy of the laws he had created. But in any case, Mr. Saenz notes how we already make robots that, by design, violate these laws (such as military drones) — and how he is very dubious that intelligence so advanced as to be capable of learning and modifying its own programming could be genuinely restrained by mere human intelligence.

That’s a powerful combination of arguments, playing off one anticipated characteristic of advanced AI (self-modification) against another (ensuring human safety), and showing that the reality of how we use robots already does and will continue to trump idealistic plans for how we should use them. So why isn’t Mr. Saenz blogging for us? A couple of intriguing paragraphs tell the story.

As he is warming to his topic, Mr. Saenz provides an extended account of why he is “not worried about a robot apocalypse.” Purposefully rejecting one of the most well-known sci-fi tropes, he makes clear that he thinks that The Terminator, Battlestar Galactica, 2001, and The Matrix all got it wrong. How does he know they all got it wrong? Because these stories were not really about robots at all, but about the social anxieties of their times: “all these other villains were just modern human worries wrapped up in a shiny metal shell.”

There are a couple of problems here. First, what’s sauce for the goose is sauce for the gander: if all of these films are merely interesting as sociological artifacts, then it would only seem fair to notice that Asimov’s robot stories are “really” about race relations in the United States. But let’s let that go for now.

More interesting is the piece’s vanishing memory of itself. At least initially, advanced AI will exist in a human world, and will play whatever role it plays in relation to human purposes, hopes and fears. But when Mr. Saenz dismisses the significance of human worries about destructive robots, he is forgetting his own observation that human worries are already driving us towards the creation of robots that will deliberately not be bound by anything that would prevent them from killing a human being. Every generation of robots that human beings make will, of necessity, be human worries and aspirations trapped in a shiny metal shell. So it is not a foolish thing to try to understand the ways that the potential powers of robots and advanced AI might play an increasingly large role in the realm of human concerns, since human beings have a serious capacity for doing very dangerous things.

Mr. Saenz is perfectly aware of this capacity, as he indicates in his remarkable concluding thoughts:

We cannot control intelligence — it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control, doesn’t mean that I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.

Here, unfortunately, is the transhumanist magic wand in action, sprinkling optimism dust and waving away all problems. Yes, humans are capable of horrible things, but no real worry there. Why not? Because Mr. Saenz never bets against intelligence — examples of which would presumably include the intelligence that allows humans to do horrible things, and to, say, use AI to do them more effectively. And when worse comes to worst, we will “walk away” from Armageddon. Kind of like in Cormac McCarthy’s The Road, I suppose. That is not just whistling in the dark — it is whistling in the dark while wandering about with one’s eyes closed, pretending there is plenty of light.