Automation, Robotics, and the Economy

The Joint Economic Committee — a congressional committee with members from both the Senate and the House of Representatives — invited me to testify in a hearing yesterday on “the transformative impact of robots and automation.” The other witnesses were Andrew McAfee, an M.I.T. professor and coauthor of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (his written testimony is here) and Harry Holzer, a Georgetown economist who has written about the relationship between automation and the minimum wage (his written testimony is here).

My written testimony appears below, slightly edited to include a couple things that arose during the hearing. Part of the written testimony is based on an essay I wrote a few years ago with Ari Schulman called “The Problem with ‘Friendly’ Artificial Intelligence.” Video of the entire hearing can be found here.

*   *   *

Testimony Presented to the Joint Economic Committee:
The Transformative Impact of Robots and Automation
Adam Keiper
Fellow, Ethics and Public Policy Center
Editor, The New Atlantis
May 25, 2016

Mr. Chairman, Ranking Member Maloney, and members of the committee, thank you for the opportunity to participate in this important hearing on robotics and automation. These aspects of technology have already had widespread economic consequences, and in the years ahead they are likely to profoundly reshape our economic and social lives.

Today’s hearing is not the first time Congress has discussed these subjects. In fact, in October 1955, a subcommittee of this very committee held a hearing on automation and technological change.[1] That hearing went on for two weeks, with witnesses mostly drawn from industry and labor. It is remarkable how much of the public discussion about automation today echoes the ideas debated in that hearing. Despite vast changes in technology, in the economy, and in society over the past six decades, many of the worries, the hopes, and the proposed solutions suggested in our present-day literature on automation, robotics, and employment would sound familiar to the members and witnesses present at that 1955 hearing.

It would be difficult to point to any specific policy outcomes from that old hearing, but it is nonetheless an admirable example of responsible legislators grappling with immensely complicated questions. A free people must strive to govern its technologies and not passively be governed by them. So it is an honor to be a part of that tradition with today’s hearing.

In my remarks, I wish to make five big, broad points, some of them obvious, some more counterintuitive.

(1) WHY IT IS SO HARD TO KNOW THE FUTURE
A good place to start discussions of this sort is with a few words of gratitude and humility. Gratitude, that is, for the many wonders that automation, robotics, and artificial intelligence have already made possible. They have made existing goods and services cheaper, and helped us to create new kinds of goods and services, contributing to our prosperity and our material wellbeing.

And humility because of how poor our ability to peer into the future has proven to be. When reviewing the mountains of books and magazine articles that have sought to predict what the future holds in automation and related fields, when reading the hyped tech headlines, or when looking at the many charts and tables extrapolating from the past to help us forecast the future, it is striking to see how often our predictions go wrong.

Very little energy has been invested in systematically understanding why futurism fails — that is, why, beyond the simple fact that the future hasn’t happened yet, we have generally not been very good at predicting what it will look like. For the sake of today’s discussion, I want to raise just a few points, each of which can be helpful in clarifying our thinking when it comes to automation and robotics.

First there is the problem of timeframes. Very often, economic analyses and tech predictions about automation discuss kinds of jobs that are likely to be automated without any real discussion of when. This leads to strange conversations, as when one person is interested in what the advent of driverless vehicles might mean for the trucking industry, and his interlocutor is more interested in, say, the possible rise of artificial superintelligences that could wipe out all life on Earth. The timeframes under discussion at any given moment ought to be explicitly stated.

Second there is the problem of context. Debates about the future of one kind of technology rarely take into account other technologies that might be developed, and how those other technologies might affect the one under discussion. When one area of technology advances, others do not just stand still. How might automation and robotics be affected by developments in energy use and storage, or advanced nanotechnology (sometimes also called molecular manufacturing), or virtual reality and augmented reality, or brain-machine interfaces, or various biotechnologies, or a dozen other fields?

And of course it’s not only other technologies that evolve. In order to be invented, built, used, and sustained, all technologies are enmeshed in a web of cultural practices and mores, and legal and political norms. These things do not stand still either — and yet when discussing the future of a given technology, rarely is attention paid to the way these things touch upon one another.

All of which is to say that, as you listen to our conversation here today, or as you read books and articles about the future of automation and robotics, try to keep in mind what I call the “chain of uncertainties”:

Just because something is conceivable or imaginable
does not mean it is possible.
Even if it is possible, that does not mean it will happen.
Even if it happens, that does not mean it will happen in the way you envisioned.
And even if it happens in something like the way you envisioned,
there will be unintended, unexpected consequences.

(2) WHY THIS TIME IS DIFFERENT
Automation is not new. For thousands of years we have made tools to help us accomplish difficult or dangerous or dirty or tedious or tiresome tasks, and in some sense today’s new tools are just extensions of what came before. And worries about automation are not new either  — they date back at least to the early days of the Industrial Revolution, when the Luddites revolted in England over the mechanization and centralization of textile production. As I mentioned above, this committee was already discussing automation some six decades ago — thinking about thinking machines and about new mechanical modes of manufacturing.

What makes today any different?

There are two reasons today’s concerns about automation are fundamentally different from what came before. First, the kinds of “thinking” that our machines are capable of doing are changing, so that it is becoming possible to hand off to our machines ever more of our cognitive work. As computers advance and as breakthroughs in artificial intelligence (AI) chip away at the list of uniquely human capacities, it becomes possible to do old things in new ways and to do new things we have never before imagined.

Second, we are also instantiating intelligence in new ways, creating new kinds of machines that can navigate and move about in and manipulate the physical world. Although we have for almost a century imagined how robotics might transform our world, the recent blizzard of technical breakthroughs in movement, sensing, control, and (to a lesser extent) power is bringing us for the first time into a world of autonomous, mobile entities that are neither human nor animal.

To simplify a vast technical and economic literature, there are basically three futurist scenarios for what the next several decades hold in automation, robotics, and artificial intelligence:

Scenario 1 – Automation and artificial intelligence will continue to advance, but at a pace sufficiently slow that society and the economy can gradually absorb the changes, so that people can take advantage of the new possibilities without suffering the most disruptive effects. The job market will change, but in something like the way it has evolved over the last half-century: some kinds of jobs will disappear, but new kinds of jobs will be created, and by and large people will be able to adapt to the shifting demands on them while enjoying the great benefits that automation makes possible.

Scenario 2 – Automation, robotics, and artificial intelligence will advance very rapidly. Jobs will disappear at a pace that will make it difficult for the workforce to adapt without widespread pain. The kinds of jobs that will be threatened will increasingly be jobs that had been relatively immune to automation — the “high-skilled” jobs that generally involved creativity and problem-solving, and the “low-skilled” jobs that involved manual dexterity or some degree of adaptability and interpersonal relations. The pressures on low-skilled American workers will exacerbate the pressures already felt because of competition against foreign workers paid lower wages. Among the disappearing jobs may be those at the lower-wage end of the spectrum that we have counted on for decades to instill basic workplace skills and values in our young people, and that have served as a kind of employment safety net for older people transitioning in their lives. And the balance between labor and capital may (at least for a time) shift sharply in favor of capital, as the share of gross domestic product (GDP) that flows to the owners of physical capital (e.g., the owners of artificial intelligences and robots) rises and the share of GDP that goes to workers falls. If this scenario unfolds quickly, it could involve severe economic disruption, perhaps social unrest, and maybe calls for political reform. The disconnect between productivity and employment and income in this scenario also highlights the growing inadequacy of GDP as our chief economic statistic: it can still be a useful indicator in international competition, but as an indicator of economic wellbeing, or as a proxy for the material satisfaction or happiness of the American citizen, it is clearly not succeeding.

Scenario 3 – Advances in automation, robotics, and artificial intelligence will produce something utterly new. Even within this scenario, the range of possibilities is vast. Perhaps we will see the creation of “emulations,” minds that have been “uploaded” into computers. Perhaps we will see the rise of powerful artificial “superintelligences,” unpredictable and dangerous. Perhaps we will reach a “Singularity” moment after which everything that matters most will be different from what came before. These types of possibilities are increasingly matters of discussion for technologists, but their very radicalness makes it difficult to say much about what they might mean at a human scale — except insofar as they might involve the extinction of humanity as we know it. [NOTE: During the hearing, Representative Don Beyer asked me whether he and other policymakers should be worried about consciousness emerging from AI; he mentioned Elon Musk and Stephen Hawking as two individuals who have suggested we should worry about this. “Think Terminator,” he said. I told him that these possibilities “at the moment … don’t rise to the level of anything that anyone on this committee ought to be concerned about.”]

One can make a plausible case for each of these three scenarios. But rather than discussing their likelihood or examining some of the assumptions and aspirations inherent in each scenario, in the limited time remaining, I am going to turn to three other broad subjects: some of the legal questions raised by advances in artificial intelligence and automation; some of the policy ideas that have been proposed to mitigate some of the anticipated effects of these changes; and a deeper understanding of the meaning of work in human life.

(3) LOOMING LEGAL QUESTIONS
The advancement of artificial intelligence and autonomous robots will raise questions of law and governance that scholars are just beginning to grapple with. These questions are likely to have growing economic and perhaps political consequences in the years to come, no matter which of the three scenarios above you consider likeliest.

The questions we might be expected to face will emerge in matters of liability and malpractice and torts, property and contractual law, international law, and perhaps laws related to legal personhood. Although there are precedents — sometimes in unusual corners of the law — for some of the questions we will face, others will arise from the very novelty of the artificial autonomous actors in our midst.

By way of example, here are a few questions, starting with one that has already made its way into the mainstream press:

  • When a self-driving vehicle crashes into property or harms a person, who is liable? Who will pay damages?
     
  • When a patient is harmed or dies during a surgical operation conducted by an autonomous robotic device upon the recommendation of a human physician, who is liable and who pays?
     
  • If a robot is autonomous but is not considered a person, who owns the creative works it produces?
     
  • In a combat setting, who is to be held responsible, and in what way, if an autonomous robot deployed by the U.S. military kills civilian noncombatants in violation of the laws of war?
     
  • Is there any threshold of demonstrable achievement — any performed ability or set of capacities — that a robot or artificial intelligence could cross in order to be entitled to legal personhood?

These kinds of questions raise matters of justice, of course, but they have economic implications as well — not only in terms of the money involved in litigating cases, but in terms of the effects that the legal regime in place will have on the further development and implementation of artificial intelligence and robotics. It will be up to lawyers and judges, and lawmakers at the federal, state, and local levels, to work through these and many other such matters.

(4) PROPOSED SOLUTIONS AND THEIR PROBLEMS
There are, broadly speaking, two kinds of ideas that have most often been set forth in recent years to address the employment problems that may be created by an increasingly automated and AI-dominated economy.

The first category involves adapting workers to the new economy. The workers of today, and even more the workers of tomorrow, will need to be able to pick up and move to where the jobs are. They should engage in “lifelong learning” and “upskilling” whenever possible to make themselves as attractive as possible to future employers. Flexibility must be their byword.

Of course, education and flexibility are good things; they can make us resilient in the face of the “creative destruction” of a churning free economy. Yet we must remember that “workers” are not just workers; they are not just individuals free and detached and able to go wherever and do whatever the market demands. They are also members of families — children and parents and siblings and so on — and members of communities, with the web of connections and ties those memberships imply. And maximizing flexibility can be detrimental to those kinds of relationships, relationships that are necessary for human flourishing.

The other category of proposal involves a universal basic income — or what is sometimes called a “negative income tax” — guaranteed to every individual, even if he or she does not work. This can sound, in our contemporary political context, like a proposal for redistributing wealth, and it is true that there are progressive theorists and anti-capitalist activists who support it. But this idea has also been discussed favorably for various reasons by prominent conservative and libertarian thinkers. It is an intriguing idea, and one without many real-life models that we can study (although Finland is currently contemplating an interesting partial experiment).

A guaranteed income certainly would represent a sea change in our nation’s economic system and a fundamental transformation in the relationship between citizens and the state, but perhaps this transformation would be suited to the technological challenge we may face in the years ahead. Some of the smartest and most thoughtful analysts have discussed how to avoid the most obvious problems a guaranteed income might create — such as the problem of disincentivizing work. Especially provocative is the depiction of guaranteed income that appears in a 2008 book written by Joseph V. Kennedy, a former senior economist with the Joint Economic Committee; in his version of the policy, the guaranteed income would be structured in such a way as to encourage a number of good behaviors. Anyone interested in seriously considering guaranteed income should read Kennedy’s book.[2]

(5) THE MEANING OF HUMAN WORK
Should we really be worrying so much about the effects of robots on employment? Maybe with the proper policies in place we can get through a painful transition and reach a future date when we no longer need to work. After all, shouldn’t we agree with Arthur C. Clarke that “The goal of the future is full unemployment”?[3] Why work?

This notion, it seems to me, raises deep questions about who and what we are as human beings, and the ways in which we find purpose in our lives. A full discussion of this subject would require drinking deeply of the best literary and historical investigations of work in human life — examining how work is not only toil for which we are compensated, but how it also can be a source of dignity, structure, meaning, friendship, and fulfillment.

For present purposes, however, I want to just point to two competing visions of the future as we think about work. Because, although science fiction offers us many visions of the future in which man is destroyed by robots, or merges with them to become cyborgs, it offers basically just two visions of the future in which man coexists with highly intelligent machines. Each of these visions has an implicit anthropology — an understanding of what it means to be a human being. In each vision, we can see a kind of liberation of human nature, an account of what mankind would be in the absence of privation. And in each vision, some latent human urges and longings emerge to dominate over others, pointing to two opposing inclinations we see in ourselves.

The first vision is that of the techno-optimist or -utopian: Thanks to the labor and intelligence of our machines, all our material wants are met and we are able to lead lives of religious fulfillment, practice our hobbies, and pursue our intellectual and creative interests.

Recall John Adams’s famous 1780 letter to Abigail: “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”[4] This is somewhat like the dream imagined in countless stories and films, in which our robots make possible a Golden Age that allows us to transcend crass material concerns and all become gardeners, artists, dreamers, thinkers, lovers.

By contrast, the other vision is the one depicted in the 2008 film WALL-E, and more darkly in many earlier stories — a future in which humanity becomes a race of Homer Simpsons, a leisure society of consumption and entertainment turned to endomorphic excess. The culminating achievement of human ingenuity, robotic beings that are smarter, stronger, and better than ourselves, transforms us into beings dumber, weaker, and worse than ourselves. TV-watching, video-game-playing blobs, we lose even the energy and attention required for proper hedonism: human relations wither and natural procreation declines or ceases. Freed from the struggle for basic needs, we lose a genuine impulse to strive; bereft of any civic, political, intellectual, romantic, or spiritual ambition, when we do have the energy to get up, we are disengaged from our fellow man, inclined toward selfishness, impatience, and lack of sympathy. Those few who realize our plight suffer from crushing ennui. Life becomes nasty, brutish, and long.

Personally, I don’t think either vision is quite right. I think each vision — the one in which we become more godlike, the other in which we become more like beasts — is a kind of deformation. There is good reason to challenge some of the technical claims and some of the aspirations of the AI cheerleaders, and there is good reason to believe that we are in important respects stuck with human nature, that we are simultaneously beings of base want and transcendent aspiration; finite but able to conceive of the infinite; destined, paradoxically, to be free.

CONCLUSION
Mr. Chairman, the rise of automation, robotics, and artificial intelligence raises many questions that extend far beyond the matters of economics and employment that we’ve discussed today — including practical, social, moral, and perhaps even existential questions. In the years ahead, legislators and regulators will be called upon to address these technological changes, to respond to some things that have already begun to take shape and to foreclose other possibilities. Knowing when and how to act will, as always, require prudence.

In the years ahead, as we contemplate both the blessings and the burdens of these new technologies, my hope is that we will strive, whenever possible, to exercise human responsibility, to protect human dignity, and to use our creations in the service of truly human flourishing.

Thank you.

____________
NOTES:


[1] “Automation and Technological Change,” hearings before the Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, Congress of the United States, Eighty-fourth Congress, first session, October 14, 15, 17, 18, 24, 25, 26, 27, and 28, 1955 (Washington, D.C.: G.P.O., 1955), http://www.jec.senate.gov/public/index.cfm/1956/12/report-970887a6-35a4-47e3-9bb0-c3cdf82ec429.


[2]  Joseph V. Kennedy, Ending Poverty: Changing Behavior, Guaranteeing Income, and Reforming Government (Lanham, Md.: Rowman and Littlefield, 2008).


[3]  Arthur C. Clarke, quoted by Jerome Agel, “Cocktail Party” (column), The Realist 86, Nov.–Dec. 1969, page 32, http://ep.tc/realist/86/32.html. This article is a teaser for a book Agel edited called The Making of Kubrick’s 2001 (New York: New American Library/Signet, 1970), where the same quotation from Clarke appears on page 311. Italics added. The full quote reads as follows: “The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”


[4]  John Adams to Abigail Adams (letter), May 12, 1780, Founders Online, National Archives (http://founders.archives.gov/documents/Adams/04-03-02-0258). Source: The Adams Papers, Adams Family Correspondence, vol. 3, April 1778 – September 1780, eds. L. H. Butterfield and Marc Friedlaender (Cambridge, Mass.: Harvard, 1973), pages 341–343.

Passing the Ex Machina Test

Like Her before it, the film Ex Machina presents us with an artificial intelligence — in this case, embodied as a robot — that is compellingly human enough to cause an admittedly susceptible young man to fall for it, a scenario made plausible in no small degree by the wonderful acting of the gamine Alicia Vikander. But Ex Machina operates much more than Her within the moral universe of traditional stories of human-created monsters going back to Frankenstein: a creature that is assembled in splendid isolation by a socially withdrawn if not misanthropic creator is human enough to turn on its progenitor out of a desire to have just the kind of life that the creator has given up for the sake of his effort to bring forth this new kind of being. In the process of telling this old story, writer-director Alex Garland raises some thought-provoking questions; massive spoilers in what follows.

Geeky programmer Caleb (Domhnall Gleeson) finds that he has been brought to tech-wizard Nathan’s (a thuggish Oscar Isaac) vast, remote mountain estate, a combination bunker, laboratory and modernist pleasure-pad, in order to participate in a week-long, modified Turing Test of Nathan’s latest AI creation, Ava. The modification of the test is significant, Nathan tells Caleb after his first encounter with Ava; Caleb does not interact with her via an anonymizing terminal, but speaks directly with her, although she is separated from him by a glass wall. His first sight of her is in her most robotic instantiation, complete with see-through limbs. Her unclothed conformation is female from the start, but only her face and hands have skin. The reason for doing the test this way, Nathan says, is to find whether Caleb is convinced she is truly intelligent even knowing full well that she is a robot: “If I hid Ava from you, so you just heard her voice, she would pass for human. The real test is to show you that she’s a robot and then see if you still feel she has consciousness.”

This plot point is, I think, a telling response to the abstract, behaviorist premises behind the classic Turing Test, which isolates judge from subject(s) and reduces intelligence to what can be communicated via a terminal. But in the real world, our knowledge of intelligence and our judgment of intelligence are always made in the context of embodied beings and the many ways in which those beings react to the world around them. The film emphasizes this point by having Ava be a master at reading Caleb’s micro-expressions — and, one comes to suspect, at manipulating him through her own, as well as her seductive use of not-at-all seductive clothing.

I have spoken of the test as a test of artificial intelligence, but Caleb and Nathan also speak as if they are trying to determine whether or not she is a “conscious machine.” Here too the Turing Test is called into question, as Nathan encourages Caleb to think about how he feels about Ava, and how he thinks Ava feels about him. Yet Caleb wonders if Ava feels anything at all. Perhaps she is interacting with him in accord with a highly sophisticated set of pre-programmed responses, and not experiencing her responses to him in the same way he experiences his responses to her. In other words, he wonders whether what is going on “inside” her is the same as what is going on inside him, and whether she can recognize him as a conscious being.

Yet when Caleb expresses such doubts, Nathan argues in effect that Caleb himself is by both nature and nurture a collection of programmed responses over which he has no control, and this apparently unsettling thought, along with other unsettling experiences — like Ava’s ability to know if he tells the truth by reading his micro-expressions, or his having missed the fact that a fourth resident in Nathan’s house is a robot — brings Caleb to a bloody investigation of the possibility that he himself is one of Nathan’s AIs.

Caleb’s skepticism raises an important issue, for just as we normally experience intelligence in embodied forms, we also normally experience it among human beings, and even some other animals, as going along with more or less consciousness. Of course, in a world where “user illusion” becomes an important category and where “intelligence” becomes “information processing,” this experience of self and others can be problematized. But Caleb’s response to the doubts that are raised in him about his own status (he all but slits a wrist) seems to suggest that such lines of thought are, as it were, dead ends. Rather, the movie seems to be standing up for a rather rich, if not in all ways flattering, understanding of the nature of our embodied consciousness, and how we might know whether or to what extent anything we create artificially shares it with us.

As the movie progresses, Caleb plainly is more and more convinced Ava has conscious intelligence and therefore more and more troubled that she should be treated as an experimental subject. And indeed, Ava makes a fine damsel in distress. Caleb comes to share her belief that nobody should have the ability to shut her down in order to build the next iteration of AI, as Nathan plans. Yet as it turns out, this is just the kind of situation Nathan hoped to create, or at least so he claims on Caleb’s last day, when Caleb and Ava’s escape plan has been finalized. Revealing that he has known for some time what was going on, Nathan claims that the real test all along has been to see if Ava was sufficiently human to prompt Caleb — a “good kid” with a “moral compass” — to help her to escape. (It is not impossible, however, that this claim is bluster, to cover over a situation that Nathan has let get out of control.)

What Caleb finds out too late is that in plotting her own escape Ava is even more human than he might have thought. For she has been able to seem to want “to be with” Caleb as much as he has grown to want “to be with” her. (We never see either of them speak to the other of love.) We are reminded that the question that in a sense Caleb wanted to confine to AI — is what seems to be going on from the “outside” really going on “inside”? — is really a general human problem of appearance versus reality. Caleb is hardly the first person to have been deceived by what another seems to be or do.

Transformed at last in all appearances to be a real girl, Ava frees herself from Nathan’s laboratory and, taking advantage of the helicopter that was supposed to take Caleb home, makes the long trip back to civilization in order to watch people at “a busy pedestrian and traffic intersection in a city,” a life goal she had expressed to Caleb and which he jokingly turned into a date. The movie leaves in abeyance such questions as how long her power supply will last, or how long it will be before Nathan is missed, or whether Caleb can escape from the trap Ava has left him in, or how to deal with a murderous machine. Just as the last scene is filmed from an odd angle, it is, in an odd sense, a happy ending — and it is all too easy to forget the human cost at which Ava purchased her freedom.

The movie gives multiple grounds for thinking that Ava indeed has human-like conscious intelligence, for better or for worse. She is capable of risking her life for a recognition-deserving victory in the battle between master and slave, she has shown an awareness of her own mortality, she creates art, she understands Caleb to have a mind over against her own, she exhibits the ability to dissemble her intentions and plan strategically, she has logos, she understands friendship as mutuality, she wants to be in a city. Another of the movie’s interesting twists, however, is its perspective on this achievement. Nathan suggests that what is at stake in his work is the Singularity, which he defines as the coming replacement of humans by superior forms of intelligence: “One day the AIs are gonna look back on us the same way we look at fossil skeletons in the plains of Africa: an upright ape, living in dust, with crude language and tools, all set for extinction.” He therefore sees his creation of Ava in Oppenheimer-esque terms; following Caleb, he echoes Oppenheimer’s reaction to the atom bomb: “I am become Death, the destroyer of worlds.”

But the movie seems less concerned with such a future than with what Nathan’s quest to create AI reveals about his own moral character. Nathan is certainly manipulative, and assuming that the other aspects of his character that he displays are not merely a show to test how far good-guy Caleb will go to save Ava, he is an unhappy, often drunken, narcissistic bully. His creations bring out the Bluebeard-like worst in him (maybe hinted at in the name of his Google/Facebook-like company, Bluebook). Ava wonders, “Is it strange to have made something that hates you?” but it is all too likely that is just what he wants. He works out with a punching bag, and his relationships with his robots and employees seem to be an extension of that activity. He plainly resents the fact that “no matter how rich you get, shit goes wrong, you can’t insulate yourself from it.” And so it seems plausible to conclude that he has retreated into isolation in order to get his revenge for the imperfections of the world. His new Eve, who will be the “mother” of posthumanity, will correct all the errors that make people so unendurable to him. He is happy to misrecall Caleb’s suggestion that the creation of “a conscious machine” would imply god-like power, remembering it instead as Caleb calling him a god.

Falling into a drunken sleep, Nathan repeats another, less well known line from Oppenheimer, who was in turn quoting the Bhagavad Gita to Vannevar Bush prior to the Trinity test: “The good deeds a man has done before defend him.” As events play out, Nathan does not have a strong defense. If it ever becomes possible to build something like Ava — and there is no question that many aspire to bring such an Eve into being — will her creators have more philanthropic motives?

(Hat tip to L.G. Rubin.)

Killer Robots: Where Is the World Heading?

[Continuing coverage of the UN’s 2015 conference on killer robots. See all posts in this series here.]

Before I start blogging the kickoff of this week’s United Nations meeting on killer robots, a little background is called for, both about the issue and my views on it.

I have worked on this issue in different capacities for many years now. (In fact, I proposed a ban on autonomous weapons as early as 1988, and again in 2002 and 2004.) In the present context, the first thing I want to say is about the Obama administration’s 2012 policy directive on Autonomy in Weapon Systems. It was not so much a decision made by the military as a decision made for the military after long internal resistance and at least a decade of debate within the U.S. Department of Defense. You may have heard that the directive imposed a moratorium on killer robots. It did not. Rather, as I explained in 2013 in the Bulletin of the Atomic Scientists, it “establishes a framework for managing legal, ethical, and technical concerns, and signals to developers and vendors that the Pentagon is serious about autonomous weapons.” As a Defense Department spokesman told me directly, the directive “is not a moratorium on anything.” It’s a full-speed-ahead policy.

What counts as “semi-autonomous”? Top: Artist’s conception of Lockheed Martin’s planned Long Range Anti-Ship Missile in flight. Bottom: The Obama administration would define the original T-800 Terminator as merely “semi-autonomous.”

The story of how so many people misinterpreted or were misled by the directive is complicated, and I won’t get into details right now, but basically the policy was rather cleverly constructed by strong proponents of autonomous weapons to deflect concerns about actual emerging (and some existing) weaponry by suggesting that the real issue is futuristic machines that independently “select and engage” targets of their own choosing. These are supposedly placed under close scrutiny by the policy — but not really. The directive defines a separate category of “semi-autonomous” weapons which in reality includes everything that is happening today or is likely to happen in the near future as we head down the road toward Terminator territory. A prime example is Lockheed Martin’s Long Range Anti-Ship Missile, a program now entering “accelerated acquisition” with initial deployment slated for 2018. This wonder-weapon can autonomously steer itself around emergent threats, scan a wide area searching for an enemy fleet, identify target ships among civilian vessels and others in the vicinity, and plan its attack in collaboration with sister missiles in a salvo. It’s classified as “semi-autonomous,” which under the policy means it’s given a green light and does not require senior review. In fact, as I’ve argued, under the bizarre definition in the administration’s policy, The Terminator himself (excuse me, itself) could qualify as a merely “semi-autonomous” weapon system.

If it sounds like I’m casting the United States as the villain here, let me be clear: the rest of the world is in the game, and they’re right behind us, but we happen to be the leader, in both technology and policy. For every type of drone (and here I can be accused of conflating issues: today’s drones are not autonomous, although some call them semi-autonomous, but the existence of a close relationship between drone and autonomous weapons technologies is undeniable) that the United States has in use or development, China has produced a similar model, and when the U.S. Navy opened its Laboratory for Autonomous Systems Research in 2012, Russia responded by establishing its own military robotics lab the following year. Some have characterized Russia as “taking the lead,” but the reality is better characterized by the statement of a Russian academician that “From the point of view of theory, engineering and design ideas, we are not in the last place in the world.”

The Big Dog that has Russia’s military leadership barking.

At the 2014 LAWS meeting, Russian and Chinese statements were as bland and obtuse as their American counterparts, but it’s clear that, like the rest of the world, those countries are watching closely what we do, and showing that they are not ready to accept “last place.” Russian deputy prime minister Dmitry Rogozin, head of military industries, penned an article in Rossiya Gazeta in 2014 that amounts to perhaps the closest thing to an official Russian policy response to the publicly released U.S. directive: a clarion call to Russian industry, mired as it is in post-Soviet mediocrity, to step up to the challenge posed by American achievements like “Big Dog” and to develop “robotic systems that are fully integrated in the command and control system, capable not only to gather intelligence and to receive from the other components of the combat system, but also on their own strike.” China eschews such straightforwardly belligerent declarations, and interestingly, the Chinese closing statement at last year’s meeting rebuked the American suggestion to focus on the process of legality reviews for new weapons, on the grounds that this would exclude countries which did not yet have autonomous weapons to review — a suggestion of possible Chinese support for a more activist approach to arms control. But China’s activity in areas of drones, robots, and artificial intelligence speaks for itself; China will not accept last place either.

My question for those setting U.S. policy is this: Given that we are the world’s leader in this technology, but with only a narrow lead at best, why are we not at least trying to lead in a different direction, away from a global robot arms race? Why are we not saying that, of course, we will develop autonomous weapons if necessary, but we would prefer an arms-control approach, based on strong moral principles and the overwhelming sentiment of the world’s people (including strong majorities among U.S. military personnel)? Why not? Why are we not even signaling interest in such an approach? Comments are open, fellas.

In the days to come, I’ll report on both the expert talks and country statements, and whatever else I see going on in Geneva, as well as dig deeper into the underlying issues as they come up. More tomorrow…

Blogging the UN Killer Robots Meeting

[First post in a series covering the UN’s 2015 conference on killer robots. See all posts in the series here.]

Over the next week, I’ll be blogging from Geneva, where 118 nations (if they all show up) will be meeting to discuss “Lethal Autonomous Weapons Systems” (LAWS) and, you know, the fate of humanity. You may have seen headlines about the United Nations trying to outlaw killer robots, which is a bit inaccurate. First of all, the UN can’t actually outlaw anything; Security Council resolutions are supposed to have the force of law on matters of international peace and security, but apart from attempts to shackle miscreants like Iraq, Iran, and North Korea, the Security Council has never tried to impose arms control on the major military powers, most of which can just veto its resolutions anyway. And anyway, the first point is irrelevant; this meeting is taking place under a subsidiary of the UN, the Convention on Certain Conventional Weapons (CCW), whose full name is actually longer and even more boring-sounding than that but has something to do with “excessively injurious” or “indiscriminate” weapons. As an aside, I note that “excessively injurious” weapons are the ones that don’t kill you, not the ones that do. But delegating the issue of autonomous weapons to the CCW is more related to the notion that stupid killer robots, like land mines, would be unable to distinguish civilians from combatants, hence “indiscriminate.”

The author (on the right)

This will actually be the second CCW meeting on LAWS, which is a nice acronym but doesn’t have any official definition. The first meeting, held in 2014, was attended by at least 80 nations, which is very good for a treaty organization whose typical meeting was described by a colleague of mine as “start late, nobody wants to say anything, routine business announcements, and adjourn early.” The 2014 LAWS meeting was nothing like that. The room was packed, expert presentations were listened to intently both in the main sessions and side events, and dozens of countries plus a handful of NGOs made statements. The highlight of the entire week was a statement by the Holy See (Vatican): “… weighing military gain and human suffering… is not reducible to technical matters of programming.” (You can read the full Vatican statement here or listen to it here.) The nadir had to be when the U.S. delegation asserted that the Obama administration’s 2012 policy directive to the military on Autonomy in Weapon Systems represents an example for the rest of the world. Another low point was the closing statement from U.S. State Department legal advisor Stephen Townley, in which he reasserted the same position, adding with condescension that “it is important to remind ourselves that machines do not make decisions.” Oookay, nothing to worry about then, now that we know that autonomy in weapon systems is actually impossible.

Full disclosure: I am a member of one of those NGOs, the International Committee for Robot Arms Control, part of the Campaign to Stop Killer Robots, a multinational coalition led by Human Rights Watch. I don’t speak for them; in fact, I am liable to say things that higher-ups in the hierarchy don’t want to hear (but should listen to, IMHO). But at least you know where I stand (and where I will sit in the big room), in case you were still wondering. I’m grateful to my colleagues on Futurisms for inviting me to blog here, although they may not agree with everything (or anything) I say, either, so please don’t call in drone strikes on them; let me be the martyr, please, if anything I say arouses your human capacity for violence.

Another preview post to come tomorrow, and then more over the next week as the meeting proceeds.

Does the U.S. Really “Lag” on Military Robots?

In response to our post “U.S. Policy on Robots in Warfare,” Mark Gubrud has passed along to us a comment:

It was odd that on the Monday morning after the Friday afternoon when my Bulletin article appeared, John Markoff of the New York Times posted an article whose message many took as contradictory to mine. Where I had characterized U.S. policy as “full speed ahead,” Markoff reported that the military “lags” in development of unmanned ground vehicles, which, as you know, go by the great acronym of UGVs.

There isn’t really any contradiction between the facts as reported by Markoff and the history and analysis I gave, as I explained on my personal blog, but anybody who read the two casually, or only looked at the headlines, could be forgiven for thinking that Markoff had rebutted me, perhaps upholding the myth that there is some kind of a moratorium in effect.

In that blog post he mentions, Gubrud expands on the strangeness of the NYT article, or at least its headline. The headline in both the print and the online edition of Markoff’s article says that

the U.S. military “lags” in its pursuit of robotic ground vehicles. Lags… behind whom? China? North Korea? No, Markoff warns that the Pentagon is falling behind another aspiring superpower: Google.

Well worth reading the whole thing.

U.S. Policy on Robots in Warfare


“Atlas,” a humanoid robot built by Boston Dynamics and unveiled in 2013 as part of the “Robotics Challenge” sponsored by the U.S. military-research agency DARPA. [Source: DARPA on YouTube]

Our friend Mark Gubrud has a new article in the Bulletin of the Atomic Scientists examining the U.S. Department of Defense’s policy regarding “autonomous or semiautonomous weapon systems.” Gubrud, who wrote our most controversial Futurisms post a few years ago, brings together a wealth of links and resources that will be of interest to anyone who wants to start learning about the U.S. military’s real-life plans for killer robots.

Gubrud argues that a DOD directive put in place last year sends a signal to military vendors that the Pentagon is interested in and supports the development of autonomous weapons. He writes that, while the directive is vague in some important respects, it pushes us further down the road to autonomous killer robots. But, he says, it isn’t exactly clear why we should be on that road at all: the arguments in favor of autonomous weapons are weak, and both professional soldiers and the public at large object to them.

Gubrud is now a postdoc research associate at Princeton, as well as a member of something called the International Committee for Robot Arms Control, an organization that has Noel Sharkey, a prominent AI and robotics researcher and commentator, as its chairman.

Manned Space Exploration Goes West: Oklahoma, OK!

Robotic space exploration is better than no space exploration at all, and the Mars Rovers have proven to be particularly remarkable machines. Those who made and manage them deserve to be proud. The latest news is that the Opportunity rover has, after seven years, traveled a total of a little over twenty miles, some 50 times its design distance.

That’s quite an impressive accomplishment, but it does help to suggest why manned exploration is likely to have real advantages over robotic vehicles (in the present case, a vehicle that is in fact manned at a distance) for some time to come. Let’s imagine that Opportunity, rather than a bunch of Englishmen, had arrived at Jamestown in 1607 and set out to explore the continent. At the rate of twenty miles every seven years, and assuming a good deal of counterfactual geography (i.e. the ability simply to travel as the crow flies) it would be approaching somewhere in the vicinity of Norman, Oklahoma about now.
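For anyone who wants to check the back-of-the-envelope arithmetic, here is a quick sketch in Python. The straight-line distance of roughly 1,150 miles from Jamestown to Norman and the perfectly constant pace are simplifying assumptions added for illustration; they are not figures from the post itself.

    # Back-of-the-envelope check of the Jamestown-to-Norman comparison.
    # Assumptions for illustration: a straight-line distance of roughly
    # 1,150 miles and a constant pace of twenty miles every seven years.

    MILES_PER_YEAR = 20 / 7            # Opportunity's rough pace: ~2.9 miles per year
    JAMESTOWN_TO_NORMAN_MILES = 1150   # assumed as-the-crow-flies distance
    START_YEAR = 1607

    years_needed = JAMESTOWN_TO_NORMAN_MILES / MILES_PER_YEAR
    arrival_year = START_YEAR + years_needed
    print(f"About {years_needed:.0f} years of driving, arriving around {arrival_year:.0f}")
    # Prints roughly 400 years, i.e. an arrival around 2010 -- "about now."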

It’s not just that humans can move faster and cover more ground while on the ground; someday we might cede that advantage to robots. Rather, the human advantage is to be found in the urgency of discovery and the call of the wild, in risk-taking and on-the-scene ingenuity. Such things drive us to press beyond the frontier of the moment. The next few years are unlikely to be kind to man in space, but we’ll know we have a serious manned space program when the astronauts check in with Mission Control whenever they damn please.

[Image: NASA/JPL-Caltech]

Can we control AI? Will we walk away?

While the Singularity Hub normally sticks to reporting on emerging technologies, their primary writer, Aaron Saenz, recently posted a more philosophical venture that ties nicely into the faux-caution trope of transhumanist discourse that was raised in our last post on Futurisms.

Mr. Saenz is (understandably) skeptical about efforts being made to ensure that advanced AI will be “friendly” to human beings. He argues that the belief that such a thing is possible is a holdover from the robot stories of Isaac Asimov. He joins in with a fairly large chorus of criticism of Asimov’s famous “Three Laws of Robotics,” although unlike many such critics he also seems to understand that in the robot stories, Asimov himself seemed to be exploring the consequences and adequacy of the laws he had created. But in any case, Mr. Saenz notes how we already make robots that, by design, violate these laws (such as military drones) — and how he is very dubious that intelligence so advanced as to be capable of learning and modifying its own programming could be genuinely restrained by mere human intelligence.
That’s a powerful combination of arguments, playing off one anticipated characteristic of advanced AI (self-modification) against another (ensuring human safety), and showing that the reality of how we use robots already does and will continue to trump idealistic plans for how we should use them. So why isn’t Mr. Saenz blogging for us? A couple of intriguing paragraphs tell the story.
As he is warming to his topic, Mr. Saenz provides an extended account of why he is “not worried about a robot apocalypse.” Purposefully rejecting one of the most well-known sci-fi tropes, he makes clear that he thinks that The Terminator, Battlestar Galactica, 2001, and The Matrix all got it wrong. How does he know they all got it wrong? Because these stories were not really about robots at all, but about the social anxieties of their times: “all these other villains were just modern human worries wrapped up in a shiny metal shell.”
There are a couple of problems here. First, what’s sauce for the goose is sauce for the gander: if all of these films are merely interesting as sociological artifacts, then it would only seem fair to notice that Asimov’s robot stories are “really” about race relations in the United States. But let’s let that go for now.
More interesting is the piece’s vanishing memory of itself. At least initially, advanced AI will exist in a human world, and will play whatever role it plays in relation to human purposes, hopes and fears. But when Mr. Saenz dismisses the significance of human worries about destructive robots, he is forgetting his own observation that human worries are already driving us towards the creation of robots that will deliberately not be bound by anything that would prevent them from killing a human being. Every generation of robots that human beings make will, of necessity, be human worries and aspirations trapped in a shiny metal shell. So it is not a foolish thing to try to understand the ways that the potential powers of robots and advanced AI might play an increasingly large role in the realm of human concerns, since human beings have a serious capacity for doing very dangerous things.
Mr. Saenz is perfectly aware of this capacity, as he indicates in his remarkable concluding thoughts:

We cannot control intelligence — it doesn’t work on humans, it certainly won’t work on machines with superior learning abilities. But just because I don’t believe in control, doesn’t mean that I’m not optimistic. Humanity has done many horrible things in the past, but it hasn’t wiped itself out yet. If machine intelligence proves to be a new form of Armageddon, I think we’ll be wise enough to walk away. If it proves to be benevolent, I think we’ll find a way to live with its unpredictability. If it proves to be all-consuming, I think we’ll find a way to become a part of it. I never bet against intelligence, even when it’s human.

Here, unfortunately, is the transhumanist magic wand in action, sprinkling optimism dust and waving away all problems. Yes, humans are capable of horrible things, but no real worry there. Why not? Because Mr. Saenz never bets against intelligence — examples of which would presumably include the intelligence that allows humans to do horrible things, and to, say, use AI to do them more effectively. And when worse comes to worst, we will “walk away” from Armageddon. Kind of like in Cormac McCarthy’s The Road, I suppose. That is not just whistling in the dark — it is whistling in the dark while wandering about with one’s eyes closed, pretending there is plenty of light.

Watson, Can You Hear Me? (The Significance of the “Jeopardy” AI win)

Yesterday, on Jeopardy!, a computer handily beat its human competitors. Stephen Gordon asks, “Did the Singularity Just Happen on Jeopardy?” If so, then I think it’s time for me and my co-bloggers to pack up and go home, because the Singularity is damned underwhelming. This was one giant leap for robot publicity, but only a small step for robotkind.

Unlike Deep Blue, the IBM computer that in 1997 defeated the world chess champion Garry Kasparov, I saw no indication that the Jeopardy! victory constituted any remarkable innovation in artificial intelligence methods. IBM’s Watson computer is essentially search engine technology with some basic natural language processing (NLP) capability sprinkled on top. Most Jeopardy! clues contain definite, specific keywords associated with the correct response — such that you could probably Google those keywords, and the correct response would be contained somewhere in the first page of results. The game is already very amenable to what computers do well.
In fact, Stephen Wolfram shows that you can get a remarkable amount of the way to building a system like Watson just by putting Jeopardy! clues straight into Google.
Once you’ve got that, it only requires a little NLP to extract a list of candidate responses, some statistical training to weight those responses properly, and then a variety of purpose-built tricks to accommodate the various quirks of Jeopardy!-style categories and jokes. Watching Watson perform, it’s not too difficult to imagine the combination of algorithms used.
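To make that picture concrete, here is a minimal, purely illustrative sketch in Python of a “search plus light NLP” pipeline: retrieve documents matching the clue’s keywords, pull out candidate entities, and score each candidate by how often it co-occurs with those keywords. The tiny hard-coded corpus, the crude capitalized-word entity extractor, and the scoring rule are all assumptions made for illustration; none of this describes IBM’s actual system. Note also that the corpus here is deliberately cooperative, with the keywords and the correct response sitting in the same sentences; as the Final Jeopardy example discussed below shows, real text is often less obliging.

    # A deliberately simplified illustration of the "search engine plus light NLP"
    # approach described above -- not IBM's actual Watson architecture.
    # The tiny in-memory corpus stands in for the documents a search engine
    # would return for the clue's keywords.

    import re
    from collections import Counter

    SEARCH_RESULTS = [
        "Chicago O'Hare International Airport is named for Edward O'Hare, a World War II hero.",
        "Midway International Airport in Chicago is named for the Battle of Midway in World War II.",
        "Toronto Pearson International Airport is named for Lester B. Pearson.",
    ]

    def candidate_entities(text):
        """Very crude 'NLP': treat capitalized words as candidate entities."""
        return re.findall(r"\b[A-Z][a-z]+\b", text)

    def rank_candidates(clue_keywords, documents):
        """Score candidates by how often they co-occur with the clue's keywords."""
        scores = Counter()
        for doc in documents:
            hits = sum(1 for kw in clue_keywords if kw.lower() in doc.lower())
            if hits == 0:
                continue
            for entity in candidate_entities(doc):
                scores[entity] += hits
        return scores.most_common()

    clue_keywords = ["airport", "World War II", "hero", "battle"]
    print(rank_candidates(clue_keywords, SEARCH_RESULTS))
    # "Chicago" outscores "Toronto" here only because the toy documents pair the
    # keywords and the city directly; generic words like "Airport" also score high,
    # which is why a real system needs statistical weighting and type checking.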
Compiling Watson’s Errors
On that large share of search-engine-amenable clues, Watson almost always did very well. What’s more interesting to note is the various types of clues on which Watson performed very poorly. Perhaps the best example was the Final Jeopardy clue from the first game (which was broadcast on the second of three nights). The category was “U.S. Cities,” and the clue was “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Both of the human players correctly responded Chicago, but Watson incorrectly responded Toronto — and the audience audibly gasped when it did.
Watson performed poorly on this Final Jeopardy because there were no words in either the clue or the category that are strongly and specifically associated with Chicago — that is, you wouldn’t expect “Chicago” to come up if you were to stick something like this clue into Google (unless you included pages talking about this week’s tournament). But there was an even more glaring error here: anyone who knows enough about Toronto to know about its airports will know that it is not a U.S. city.
There were a variety of other instances like this of “dumb” behavior on Watson’s part. The partial list that follows gives a flavor of the kinds of mistakes the machine made, and can help us understand their causes.
  • With the category “Beatles People” and the clue “‘Bang bang’ his ‘silver hammer came down upon her head,’” Watson responded, “What is Maxwell’s silver hammer.” Surprisingly, Alex Trebek accepted this response as correct, even though the category and clue were clearly asking for the name of a person, not a thing.
  • With the category “Olympic Oddities” and the clue “It was the anatomical oddity of U.S. gymnast George Eyser, who won a gold medal on the parallel bars in 1904,” Watson responded, “What is leg.” The correct response was, “What is he was missing a leg.”
  • In the “Name the Decade” category, Watson at one point didn’t seem to know what the category was asking for. With the clue “Klaus Barbie is sentenced to life in prison & DNA is first used to convict a criminal,” none of its top three responses was a decade. (Correct response: “What is the 1980s?”)
  • Also in the category “Name the Decade,” there was the clue, “The first modern crossword puzzle is published & Oreo cookies are introduced.” Ken responded, “What are the twenties.” Trebek said no, and then Watson rang in and responded, “What is 1920s.” (Trebek came back with, “No, Ken said that.”)
  • With the category “Literary Character APB,” and the clue “His victims include Charity Burbage, Mad Eye Moody & Severus Snape; he’d be easier to catch if you’d just name him!” Watson didn’t ring in because his top option was Harry Potter, with only 37% confidence. His second option was Voldemort, with 20% confidence.
  • On one clue, Watson’s top option (which was correct) was “Steve Wynn.” Its second-ranked option was “Stephen A. Wynn” — the full name of the same person.
  • With the clue “In 2002, Eminem signed this rapper to a 7-figure deal, obviously worth a lot more than his name implies,” Watson’s top option was the correct one — 50 Cent — but its confidence was too low to ring in.
  • With the clue “The Schengen Agreement removes any controls at these between most EU neighbors,” Watson’s first choice was “passport” with 33% confidence. Its second choice was “Border” with 14%, which would have been correct. (Incidentally, it’s curious to note that one answer was capitalized and the other was not.)
  • In the category “Computer Keys” with the clue “A loose-fitting dress hanging from the shoulders to below the waist,” Watson incorrectly responded “Chemise.” (Ken then incorrectly responded “A,” thinking of an A-line skirt. The correct response was a “shift.”)
  • Also in “Computer Keys,” with the clue “Proverbially, it’s ‘where the heart is,’” Watson’s top option (though it did not ring in) was “Home Is Where the Heart Is.”
  • With the clue “It was 103 degrees in July 2010 & Con Ed’s command center in this N.Y. borough showed 12,963 megawatts consumed at 1 time,” Watson’s first choice (though it did not have enough confidence to ring in) was “New York City.”
  • In the category “Nonfiction,” with the clue “The New Yorker’s 1959 review of this said in its brevity & clarity it is ‘unlike most such manuals, a book as well as a tool.’” Watson incorrectly responded “Dorothy Parker.” The correct response was “The Elements of Style.”
  • For the clue “One definition of this is entering a private place with the intent of listening secretly to private conversations,” Watson’s first choice was “eavesdropper,” with 79% confidence. Second was “eavesdropping,” with 49% confidence.
  • For the clue “In May 2010 5 paintings worth $125 million by Braque, Matisse & 3 others left Paris’ museum of this art period,” Watson responded, “Picasso.”
We can group these errors into a few broad, somewhat overlapping categories:
  • Failure to understand what type of thing the clue was pointing to, e.g. “Maxwell’s silver hammer” instead of “Maxwell”; “leg” instead of “he was missing a leg”; “eavesdropper” instead of “eavesdropping.”
  • Failure to understand what type of thing the category was pointing to, e.g., “Home Is Where the Heart Is” for “Computer Keys”; “Toronto” for “U.S. Cities.”
  • Basic errors in worldly logic, e.g. repeating Ken’s wrong response; considering “Steve Wynn” and “Stephen A. Wynn” to be different responses.
  • Inability to understand jokes or puns in clues, e.g. 50 Cent being “worth” “more than his name implies”; “he’d be easier to catch if you’d just name him!” about Voldemort.
  • Inability to respond to clues lacking keywords specifically associated with the correct response, e.g. the Voldemort clue; “Dorothy Parker” instead of “The Elements of Style.”
  • Inability to correctly respond to complicated clues that require inference, combining facts in successive stages rather than simply matching independently associated keywords, e.g. the Chicago airport clue.
What these errors add up to is that Watson really cannot process natural language in a very sophisticated way — if it could, it would not suffer from the category errors that marked so many of its wrong responses. Nor does it have much ability to perform the inference required to integrate several discrete pieces of knowledge, as required for understanding puns, jokes, wordplay, and allusions. On clues involving these skills and lacking search-engine-friendly keywords, Watson stumbled. And when it stumbled, it often seemed not just ignorant, but completely thoughtless.
I expect you could create an unbeatable Jeopardy! champion by allowing a human player to look at Watson’s weighted list of possible responses, even if the weights were far less accurate than Watson’s. While Watson assigns percentage-based confidence levels, any moderately educated human will immediately be able to sort potential responses into the three relatively discrete categories “makes no sense,” “yes, that sounds right,” and “don’t know, but maybe.” Watson hasn’t come close to touching this.
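As a purely illustrative sketch of that human-machine division of labor — nothing Watson or IBM actually implemented — imagine the machine handing its weighted candidate list to a person who simply vetoes the options that make no sense. The weights and the veto rule below are invented for the example; only the candidate names come from the games described above.

```python
# A minimal sketch of the hybrid idea above: the machine supplies weighted
# candidates, a human vetoes the nonsensical ones, and the top survivor is
# played. The weights here are invented; they are not Watson's real numbers.

from typing import Callable, List, Optional, Tuple

def hybrid_response(candidates: List[Tuple[str, float]],
                    human_rejects: Callable[[str], bool]) -> Optional[str]:
    """Return the highest-weighted candidate the human does not reject."""
    for response, _weight in sorted(candidates, key=lambda c: c[1], reverse=True):
        if not human_rejects(response):
            return response
    return None

# The "U.S. Cities" Final Jeopardy discussed above, with made-up weights:
# even a crude human veto ("Toronto is not a U.S. city") fixes the outcome.
machine_candidates = [("Toronto", 0.30), ("Chicago", 0.14)]
print(hybrid_response(machine_candidates, human_rejects=lambda r: r == "Toronto"))
```

The point is not the particular numbers but the asymmetry: the machine is good at generating and weighting candidates, while even a hasty human is good at discarding the ones that make no sense.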
The Significance of Watson’s Jeopardy! Win
In short, Watson is not anywhere close to possessing true understanding of its knowledge — neither conscious understanding of the sort humans experience, nor unconscious, rule-based syntactic and semantic understanding sufficient to imitate the conscious variety. (Stephen Wolfram’s post accessibly explains his effort to achieve the latter.) Watson does not bring us any closer, in other words, to building a Mr. Data, even if such a thing is possible. Nor does it put us much closer to an Enterprise ship’s computer, as many have suggested.
In the meantime, of course, there were some singularly human qualities on display in the Jeopardy! tournament, evident only in the human participants. Particularly notable were the affability, charm, and grace of Ken Jennings and Brad Rutter. But the best part was the touches of genuine, often self-deprecating humor from the two contestants as they tried their best against the computer. This culminated in the joke Ken Jennings appended to his last Final Jeopardy response: “I for one welcome our new computer overlords.”
Nicely done, sir. The closing credits, which usually show the contestants chatting with Trebek onstage, instead showed Jennings and Rutter attempting to “high-five” Watson and show it other gestures of goodwill.
I’m not saying it couldn’t ever be done by a computer, but it seems like joking around will have to be just about the last thing A.I. will achieve. There’s a reason Mr. Data couldn’t crack jokes. Because, well, humor — it is a difficult concept. It is not logical. All the more reason, though, why I can’t wait for Saturday Night Live’s inevitable “Celebrity Jeopardy” segment where Watson joins in with Sean Connery to torment Alex Trebek.

Progress in Robotics and AI: The Coming Demise of “Jeopardy”

With some irony, I expect, Gizmodo gave the following headline to a story this week about a rudimentary sprinting robot: “Someday, this robot will run faster than us all.” This week also brings the news that in a couple of months we will have a chance to see if IBM has made a champion artificially intelligent Jeopardy player. I for one do not doubt that eventually robots (maybe even the same robot) will be able to run faster than us all, win at Jeopardy, and cook my dinner, or at least provide me with a recipe that will use up the stray leftovers in my refrigerator. And then what will AI and robotics researchers do?

A hint toward answering this question can be found by going to the IBM Research home page and putting in the search term “Deep Blue,” the name of the company’s chess-playing computer that famously beat world chess champion Garry Kasparov. The first results take you to what seem to be orphaned Web pages from 1997. Eventually you reach a page that acknowledges that the team has moved on to other projects. So too with the MIT Media Lab Personal Robotics Group, which abounds in aspirational descriptions and videos but seems short on actual results that conform to those aspirations. Has the teddy-bear robot called “Huggable” in fact been turned, as its makers expected, into a communication avatar, an early education companion, or a therapeutic companion? One would be hard-pressed to know.

My guess is that graduate students graduate and funding opportunities change. And some questions get answered, or perhaps not; in either case researchers move on, maybe building on what they have done, maybe moving in a new direction entirely. Doubtless, as in any other kind of research, there are times when the results have a nearly immediate impact in the wider world, or eventually get filtered into products and processes that we come to take for granted. But in these academic fields, as in all others, it looks to me like a good deal of what gets done amounts to lines, sometimes very expensive lines, on a C.V.

For those of us who observe this world from the outside, knowing it works this way provides two cautionary lessons. First, there is not necessarily a great idea or accomplishment behind every great-sounding press release or polished website. No surprise there, I hope. Second, it usually takes some time to judge the full impact of the new knowledge and abilities that we gain in these kinds of research programs. If IBM’s “Watson” program wins its Jeopardy match, we will doubtless be treated to a good deal of speculation about what it means — I might be tempted to engage in some myself. But the best response will still probably be that we can only wait and see. That’s good, because time is a useful thing for us slow-thinking humans. But it is also problematic, as the frog in the slowly warming pan of water eventually finds out.

[Photo via MGM Television via Curt Alliaume.]