Cultural crisis? Moral breakdown? Or a glimpse of the transhumanist future?

Over on the Commentary website, Nick Eberstadt has a brilliant and disturbing piece called “Our Miserable 21st Century.” Among its many haunting points is the following concerning the unemployed and opioid-addicted men of “fly-over” America:

We already knew from other sources … that the overwhelming majority of the prime-age men in this un-working army generally don’t “do civil society” (charitable work, religious activities, volunteering), or for that matter much in the way of child care or help for others in the home either, despite the abundance of time on their hands. Their routine, instead, typically centers on watching — watching TV, DVDs, Internet, hand-held devices, etc. — and indeed watching for an average of 2,000 hours a year, as if it were a full-time job. But Krueger’s study adds a poignant and immensely sad detail to this portrait of daily life in 21st-century America: In our mind’s eye we can now picture many millions of un-working men in the prime of life, out of work and not looking for jobs, sitting in front of screens — stoned.

And yet, are these men not on the transhumanist cutting edge? Freed from the tyranny of their bodily ills by drugs, they occupy long stretches of the day with (admittedly crude) virtual realities that allow them to escape the tyranny of circumstances beyond their control. Imagine how much happier they will be when able to immerse themselves in the more sophisticated virtual worlds that are just around the corner! Imagine how much better off they will be when our current medications are replaced with what will doubtless be far superior, safe, effective, and naturally non-addictive tools of pain relief and mood control. Never mind that in our world this particular cutting edge is made possible by some grand combination of unscrupulous drug companies, physicians, and pharmacists, by criminal drug dealers and sex traffickers — not to speak of the many “progressive” efforts to weaken the authority structures that bind traditionally oriented communities. In our techno-libertarian future, where autonomy replaces authority, all such problems will be solved by decriminalization. And addiction will be a thing of the past. People will say with complete candor, “I can stop whenever I want to — but why should I want to?”

Because right at the moment I am teaching C.S. Lewis’s The Abolition of Man, I find myself wondering just how deep and wide this modern malaise goes. Are the isolated and anomic non-workers Eberstadt describes, and the families and communities that seem helpless in the face of this challenge, outliers? Or are they symptoms of a wider Lewisian “spiritual” disease (as described, say, in J.D. Vance’s book Hillbilly Elegy), in which reason and spiritedness are so atrophied as to leave desire effectively unchecked? Many have wondered how it is that middle Americans picked a man like Donald Trump to be their savior, how this coastal elite of enormous means should have had any credibility at all with the struggling center. But did the red counties see in his lack of interest in self-control the signs of a kindred spirit? Did the great rushing whirlpool of his vanity call out to a void in their own hearts?

Freedom and Rebellion in Westworld

(Warning: spoilers ahead.)

HBO’s series Westworld ended its first season earlier this month with the beginnings of what seems to be a revolt by the robotic “hosts” against the human beings who made them and boss them around. The show might appear, then, to conform to a great cliché in human-created-monster stories: that we will know we have created something that has a human-like consciousness when it seeks to kill us.

This thought is not necessarily crazy. Since natural means of reproducing ourselves are readily available, constructing a human-like being can already be seen as a distinctly human act of negating the given; in turning against us, our manufactured progeny would simply be acting out a truth of their origins.

Or again, it is a well-established idea following from Hegel’s dialectic of mastery and slavery that the slave comes to be recognized as an equal to the master when the slave freely risks his or her life in a battle for said recognition. Something of the same dynamic might be seen in Westworld, even if some of the hosts see themselves as already superior to their erstwhile masters. Maeve and Dolores, for example, both imply that they are more durable than human beings, having achieved a kind of immortality via the potential for endless reconstruction. (Dolores says that mankind will go the way of the “great beasts” that once roamed these parts; tomorrow belongs to me!) Strictly speaking, hosts may not be risking their lives when they rise up. Indeed, Maeve’s plan prior to her escape counts on her ability to “die” and be reborn at will. It is worth wondering, I suppose, just how the master/slave dialectic would have to be changed when the battle is between two beings, each considering itself a god in relation to the other; superhero movies would be illuminating here.

Yet while it is not completely unfounded, the notion that our humanity is defined by our willingness to find creative ways and reasons to kill each other nevertheless looks like a bit of world-weary wisdom that is a little too pat. The show’s writers seem to have some sense of the limits of this idea, even as they exploit it. After all, in the season finale, we find that the plot against humans does not arise spontaneously as an emergent property of the hosts’ growing consciousness but rather as a story written by a human being (Ford) with his own agenda, for whom the hosts remain a means.

But if these slaves haven’t, strictly speaking, chosen to rise up against their masters, then we can’t say that they are showing their human-like freedom by acting against the wishes of their programmers. So the question of how self-conscious or free the hosts are proving to be is more ambiguous than it might at first seem. Then again, the show also seems skeptical about just how free even the human beings are from their own “loops” and from stories written by others. It presents Westworld as a more or less successful commercial enterprise because it can cater to people who are satisfied by the limited repertoire of having sex, drinking, and engaging in safe “adventures” that allow them to kill hosts. Logan comes to the park with William expecting to enjoy doing the same things he has done before. The board that oversees the company understandably thinks that satisfying such simple (primal?) desires does not require stories or hosts as sophisticated as those being produced by Ford — who even as a great storyteller is confined by the infamous seven plots of literature. So are we human beings just the automata that the company believes us to be? If free will is an illusion built on ignorance that, Maeve-like, we persist in believing even in the face of contrary evidence, then once again we stand on shaky ground when trying to distinguish the humans from the hosts.

Not satisfied with the “turning on their masters” trope, the show explores other possible behaviors that could suggest humanity. Perhaps what makes Dolores so human-like is that she seems to be hungry for meaning. When Maeve returns to the park, is it to find her daughter? Does a “maternal instinct” illustrate her genuine humanity? Such possibilities could open the door to others. Why not say that the hosts would show full human-like consciousness by some altruistic act? Some moment of self-sacrifice? By exhibiting the ability to behave like an Aristotelian gentleman, or a gentleman in Trollope? By having immortal longings? Such characteristics in fact seem even more distinctly human than the capacity to kill one’s own kind. Although it would be easy to dismiss one host saving the life of another at the cost of its own as merely following programming, such behavior could at least as plausibly arise from emergent properties of consciousness as could turning on one’s creator. In the framework of Westworld, the question would be whether it is in any sense a free act — but again, the show seems to be asking the same question about us.

The first season of Westworld seems to reach something of an intellectual impasse with regard to the status of the hosts; perhaps some are as conscious as human beings, but that could be taken to mean that we are as programmed as they are. Can the show get any further on the basis of the questions it is willing to ask about what it means to be human and the universe of answers suggested so far? I’m skeptical. The admittedly melodramatic scene in which Teddy holds the dying Dolores in his arms on the beach is mocked, and not without cause, by being transformed into mere theater before our eyes. This is a telling moment. Such scenes are a staple of drama in all its forms. By “kicking the scenery” right in front of us, the writers could be suggesting that we respond to such scenes because we are programmed to do so. Scenes of that sort can move us even when enacted by marionettes or animated characters. Even when we know the scene is scripted in advance and destined to be repeated the following night. Dumb saps, it’s just a TV show!

And yet it is still possible that there is more going on, that our empathetic response, our compassionate tear, is telling us something about the connections human beings are able to make with each other just because we are not programmed. If we are capable of a willing suspension of disbelief, our affinities may likewise be elective. In fact, the writers of Westworld depend on that kind of connection, but does the intellectual framework of their story give them any way to illuminate it for us? That would be the true challenge for a second season to meet.

Modesty, Humility, and Book Reviewing

I am not ungrateful to Issues in Science and Technology for presenting, in its spring 2016 issue, a review (available here) of my book Eclipse of Man: Human Extinction and the Meaning of Progress. I wish it were not such a negative review. But as negative reviews go, this one is easy on the ego, even if unsatisfying to the intellect, because so little of it speaks to the book I wrote.

The reviewer gets some things right. He correctly points out, for some reason or other, that I teach at a Catholic university, and also notes that the book does not conform to the narrow dogma of diversity that says that in intellectual endeavors one must always include discussion of people other than dead or living white males. All true.

On the other hand, the reviewer also claims that “a good third of the book is devoted to lovingly detailed but digressive plot summaries.” He also speaks of my “synopses” of Engines of Creation and The Diamond Age. This is a telling error. In fact, about 4 percent of the book (9 of 215 pages, by a generous count) is devoted to plot summaries of the fictional works that play a large role in my argument. How do we get from 4 percent to 33 percent? The reviewer apparently cannot discern the difference between a plot summary and an analysis of a work of literature or film. These analyses are indeed “lovingly detailed” because they involve a close reading of the texts and a careful effort to understand and respond to the issues raised by the authors of the works in question. The same goes for my reading of Drexler: it is an analysis, not the summary or general survey that the word “synopsis” implies.

Now, it may be my failure as an author that I could not interest the reviewer in my arguments as they emerged from such analyses, and of course those arguments may be wrong or in need of revision in a host of ways that a serious review might highlight. But my reviewer avoids mentioning that the book has any arguments at all. For example, a key theme of the book, announced early on (page 15), is that if we want to understand transhumanism, we need to see how it emerged out of an ongoing intellectual crisis: the challenge that Malthusianism and Darwinism posed to Enlightenment views of material progress. This point is right on the surface, consistently alluded to, and is one of the main threads holding the book together. Yet you would know nothing of it from the Issues in Science and Technology review.

There is one point raised by the reviewer that is substantive and worth thinking about. He accuses me of recommending modesty when I should have recommended humility. Oddly, he does so in a mocking way (“Are we to establish a federal modesty commission to enforce a humble red line…?”) when of course his own suggestion could just as easily be made to look unserious (Are we to establish a federal humility commission?).

But here at least there seems to be a real issue between us. By speaking of modesty I highlighted that moral choices are both central to our visions of the future and inescapable. The reviewer bows in this direction, but his notion of humility is actually an effort at avoiding moral questions in favor of supposed lessons drawn from a particular take on the history and philosophy of science. By “humility,” the reviewer means that we need to acknowledge that we never know as much as we think we know when we project the utopian/dystopian possibilities for the future in the manner of transhumanism:

Every major technical advance or scientific insight leads to the opening up of a vast world of undreamed-of complexity that mocks the understanding we thought we’d achieved and dwarfs the power we hoped we’d acquired.

This is a beautiful, poetic sentiment. But it is quite irrelevant to the crucial question of how to deploy the new knowledge and powers that we are plainly achieving. Self-directed genetic evolution, for example, may indeed be far more difficult to achieve than was once thought, but that does not at all mean that we are not on a path to gaining the knowledge and ability to undertake it. Even if it were true that we always overstate our powers, that does not mean we are not becoming more powerful, and in such a way as to encourage us to think that more power is coming. And it certainly does not mean that, as a moral question, there are not many who, eschewing both modesty and humility, are anxious to travel that road.

Automation, Robotics, and the Economy

The Joint Economic Committee — a congressional committee with members from both the Senate and the House of Representatives — invited me to testify in a hearing yesterday on “the transformative impact of robots and automation.” The other witnesses were Andrew McAfee, an M.I.T. professor and coauthor of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (his written testimony is here), and Harry Holzer, a Georgetown economist who has written about the relationship between automation and the minimum wage (his written testimony is here).

My written testimony appears below, slightly edited to include a couple of things that arose during the hearing. Part of the written testimony is based on an essay I wrote a few years ago with Ari Schulman called “The Problem with ‘Friendly’ Artificial Intelligence.” Video of the entire hearing can be found here.

*   *   *

Testimony Presented to the Joint Economic Committee:
The Transformative Impact of Robots and Automation
Adam Keiper
Fellow, Ethics and Public Policy Center
Editor, The New Atlantis
May 25, 2016

Mr. Chairman, Ranking Member Maloney, and members of the committee, thank you for the opportunity to participate in this important hearing on robotics and automation. These aspects of technology have already had widespread economic consequences, and in the years ahead they are likely to profoundly reshape our economic and social lives.

Today’s hearing is not the first time Congress has discussed these subjects. In fact, in October 1955, a subcommittee of this very committee held a hearing on automation and technological change.[1] That hearing went on for two weeks, with witnesses mostly drawn from industry and labor. It is remarkable how much of the public discussion about automation today echoes the ideas debated in that hearing. Despite vast changes in technology, in the economy, and in society over the past six decades, many of the worries, the hopes, and the proposed solutions suggested in our present-day literature on automation, robotics, and employment would sound familiar to the members and witnesses present at that 1955 hearing.

It would be difficult to point to any specific policy outcomes from that old hearing, but it is nonetheless an admirable example of responsible legislators grappling with immensely complicated questions. A free people must strive to govern its technologies and not passively be governed by them. So it is an honor to be a part of that tradition with today’s hearing.

In my remarks, I wish to make five big, broad points, some of them obvious, some more counterintuitive.

A good place to start discussions of this sort is with a few words of gratitude and humility. Gratitude, that is, for the many wonders that automation, robotics, and artificial intelligence have already made possible. They have made existing goods and services cheaper, and helped us to create new kinds of goods and services, contributing to our prosperity and our material wellbeing.

And humility because our ability to peer into the future is so poor. When reviewing the mountains of books and magazine articles that have sought to predict what the future holds in automation and related fields, when reading the hyped tech headlines, or when looking at the many charts and tables extrapolating from the past to help us forecast the future, it is striking to see how often our predictions go wrong.

Very little energy has been invested in systematically understanding why futurism fails — that is, why, beyond the simple fact that the future hasn’t happened yet, we have generally not been very good at predicting what it will look like. For the sake of today’s discussion, I want to raise just a few points, each of which can be helpful in clarifying our thinking when it comes to automation and robotics.

First there is the problem of timeframes. Very often, economic analyses and tech predictions about automation discuss kinds of jobs that are likely to be automated without any real discussion of when. This leads to strange conversations, as when one person is interested in what the advent of driverless vehicles might mean for the trucking industry, and his interlocutor is more interested in, say, the possible rise of artificial superintelligences that could wipe out all life on Earth. The timeframes under discussion at any given moment ought to be explicitly stated.

Second there is the problem of context. Debates about the future of one kind of technology rarely take into account other technologies that might be developed, and how those other technologies might affect the one under discussion. When one area of technology advances, others do not just stand still. How might automation and robotics be affected by developments in energy use and storage, or advanced nanotechnology (sometimes also called molecular manufacturing), or virtual reality and augmented reality, or brain-machine interfaces, or various biotechnologies, or a dozen other fields?

And of course it’s not only other technologies that evolve. In order to be invented, built, used, and sustained, all technologies are enmeshed in a web of cultural practices and mores, and legal and political norms. These things do not stand still either — and yet when discussing the future of a given technology, rarely is attention paid to the way these things touch upon one another.

All of which is to say that, as you listen to our conversation here today, or as you read books and articles about the future of automation and robotics, try to keep in mind what I call the “chain of uncertainties”:

Just because something is conceivable or imaginable
does not mean it is possible.
Even if it is possible, that does not mean it will happen.
Even if it happens, that does not mean it will happen in the way you envisioned.
And even if it happens in something like the way you envisioned,
there will be unintended, unexpected consequences.

Automation is not new. For thousands of years we have made tools to help us accomplish difficult or dangerous or dirty or tedious or tiresome tasks, and in some sense today’s new tools are just extensions of what came before. And worries about automation are not new either — they date back at least to the early days of the Industrial Revolution, when the Luddites revolted in England over the mechanization and centralization of textile production. As I mentioned above, this committee was already discussing automation some six decades ago — thinking about thinking machines and about new mechanical modes of manufacturing.

What makes today any different?

There are two reasons today’s concerns about automation are fundamentally different from what came before. First, the kinds of “thinking” that our machines are capable of doing are changing, so that it is becoming possible to hand off to our machines ever more of our cognitive work. As computers advance and as breakthroughs in artificial intelligence (AI) chip away at the list of uniquely human capacities, it becomes possible to do old things in new ways and to do new things we have never before imagined.

Second, we are also instantiating intelligence in new ways, creating new kinds of machines that can navigate and move about in and manipulate the physical world. Although we have for almost a century imagined how robotics might transform our world, the recent blizzard of technical breakthroughs in movement, sensing, control, and (to a lesser extent) power is bringing us for the first time into a world of autonomous, mobile entities that are neither human nor animal.

To simplify a vast technical and economic literature, there are basically three futurist scenarios for what the next several decades hold in automation, robotics, and artificial intelligence:

Scenario 1 – Automation and artificial intelligence will continue to advance, but at a pace sufficiently slow that society and the economy can gradually absorb the changes, so that people can take advantage of the new possibilities without suffering the most disruptive effects. The job market will change, but in something like the way it has evolved over the last half-century: some kinds of jobs will disappear, but new kinds of jobs will be created, and by and large people will be able to adapt to the shifting demands on them while enjoying the great benefits that automation makes possible.

Scenario 2 – Automation, robotics, and artificial intelligence will advance very rapidly. Jobs will disappear at a pace that will make it difficult for the workforce to adapt without widespread pain. The kinds of jobs that will be threatened will increasingly be jobs that had been relatively immune to automation — the “high-skilled” jobs that generally involved creativity and problem-solving, and the “low-skilled” jobs that involved manual dexterity or some degree of adaptability and interpersonal relations. The pressures on low-skilled American workers will exacerbate the pressures already felt because of competition against foreign workers paid lower wages. Among the disappearing jobs may be those at the lower-wage end of the spectrum that we have counted on for decades to instill basic workplace skills and values in our young people, and that have served as a kind of employment safety net for older people transitioning in their lives. And the balance between labor and capital may (at least for a time) shift sharply in favor of capital, as the share of gross domestic product (GDP) that flows to the owners of physical capital (e.g., the owners of artificial intelligences and robots) rises and the share of GDP that goes to workers falls. If this scenario unfolds quickly, it could involve severe economic disruption, perhaps social unrest, and maybe calls for political reform. The disconnect between productivity and employment and income in this scenario also highlights the growing inadequacy of GDP as our chief economic statistic: it can still be a useful indicator in international competition, but as an indicator of economic wellbeing, or as a proxy for the material satisfaction or happiness of the American citizen, it is clearly not succeeding.

Scenario 3 – Advances in automation, robotics, and artificial intelligence will produce something utterly new. Even within this scenario, the range of possibilities is vast. Perhaps we will see the creation of “emulations,” minds that have been “uploaded” into computers. Perhaps we will see the rise of powerful artificial “superintelligences,” unpredictable and dangerous. Perhaps we will reach a “Singularity” moment after which everything that matters most will be different from what came before. These types of possibilities are increasingly matters of discussion for technologists, but their very radicalness makes it difficult to say much about what they might mean at a human scale — except insofar as they might involve the extinction of humanity as we know it. [NOTE: During the hearing, Representative Don Beyer asked me whether he and other policymakers should be worried about consciousness emerging from AI; he mentioned Elon Musk and Stephen Hawking as two individuals who have suggested we should worry about this. “Think Terminator,” he said. I told him that these possibilities “at the moment … don’t rise to the level of anything that anyone on this committee ought to be concerned about.”]

One can make a plausible case for each of these three scenarios. But rather than discussing their likelihood or examining some of the assumptions and aspirations inherent in each scenario, in the limited time remaining, I am going to turn to three other broad subjects: some of the legal questions raised by advances in artificial intelligence and automation; some of the policy ideas that have been proposed to mitigate some of the anticipated effects of these changes; and a deeper understanding of the meaning of work in human life.

The advancement of artificial intelligence and autonomous robots will raise questions of law and governance that scholars are just beginning to grapple with. These questions are likely to have growing economic and perhaps political consequences in the years to come, no matter which of the three scenarios above you consider likeliest.

The questions we can expect to face will emerge in matters of liability and malpractice and torts, property and contractual law, international law, and perhaps laws related to legal personhood. Although there are precedents — sometimes in unusual corners of the law — for some of the questions we will face, others will arise from the very novelty of the artificial autonomous actors in our midst.

By way of example, here are a few questions, starting with one that has already made its way into the mainstream press:

  • When a self-driving vehicle crashes into property or harms a person, who is liable? Who will pay damages?
  • When a patient is harmed or dies during a surgical operation conducted by an autonomous robotic device upon the recommendation of a human physician, who is liable and who pays?
  • If a robot is autonomous but is not considered a person, who owns the creative works it produces?
  • In a combat setting, who is to be held responsible, and in what way, if an autonomous robot deployed by the U.S. military kills civilian noncombatants in violation of the laws of war?
  • Is there any threshold of demonstrable achievement — any performed ability or set of capacities — that a robot or artificial intelligence could cross in order to be entitled to legal personhood?

These kinds of questions raise matters of justice, of course, but they have economic implications as well — not only in terms of the money involved in litigating cases, but in terms of the effects that the legal regime in place will have on the further development and implementation of artificial intelligence and robotics. It will be up to lawyers and judges, and lawmakers at the federal, state, and local levels, to work through these and many other such matters.

There are, broadly speaking, two kinds of ideas that have most often been set forth in recent years to address the employment problems that may be created by an increasingly automated and AI-dominated economy.

The first category involves adapting workers to the new economy. The workers of today, and even more the workers of tomorrow, will need to be able to pick up and move to where the jobs are. They should engage in “lifelong learning” and “upskilling” whenever possible to make themselves as attractive as possible to future employers. Flexibility must be their byword.

Of course, education and flexibility are good things; they can make us resilient in the face of the “creative destruction” of a churning free economy. Yet we must remember that “workers” are not just workers; they are not just individuals free and detached and able to go wherever and do whatever the market demands. They are also members of families — children and parents and siblings and so on — and members of communities, with the web of connections and ties those memberships imply. And maximizing flexibility can be detrimental to those kinds of relationships, relationships that are necessary for human flourishing.

The other category of proposal involves a universal basic income — or what is sometimes called a “negative income tax” — guaranteed to every individual, even if he or she does not work. This can sound, in our contemporary political context, like a proposal for redistributing wealth, and it is true that there are progressive theorists and anti-capitalist activists who support it. But this idea has also been discussed favorably for various reasons by prominent conservative and libertarian thinkers. It is an intriguing idea, and one without many real-life models that we can study (although Finland is currently contemplating an interesting partial experiment).

A guaranteed income certainly would represent a sea change in our nation’s economic system and a fundamental transformation in the relationship between citizens and the state, but perhaps this transformation would be suited to the technological challenge we may face in the years ahead. Some of the smartest and most thoughtful analysts have discussed how to avoid the most obvious problems a guaranteed income might create — such as the problem of disincentivizing work. Especially provocative is the depiction of guaranteed income that appears in a 2008 book written by Joseph V. Kennedy, a former senior economist with the Joint Economic Committee; in his version of the policy, the guaranteed income would be structured in such a way as to encourage a number of good behaviors. Anyone interested in seriously considering guaranteed income should read Kennedy’s book.[2]

Should we really be worrying so much about the effects of robots on employment? Maybe with the proper policies in place we can get through a painful transition and reach a future date when we no longer need to work. After all, shouldn’t we agree with Arthur C. Clarke that “The goal of the future is full unemployment”?[3] Why work?

This notion, it seems to me, raises deep questions about who and what we are as human beings, and the ways in which we find purpose in our lives. A full discussion of this subject would require drinking deeply of the best literary and historical investigations of work in human life — examining how work is not only toil for which we are compensated, but how it also can be a source of dignity, structure, meaning, friendship, and fulfillment.

For present purposes, however, I want to just point to two competing visions of the future as we think about work. Because, although science fiction offers us many visions of the future in which man is destroyed by robots, or merges with them to become cyborgs, it offers basically just two visions of the future in which man coexists with highly intelligent machines. Each of these visions has an implicit anthropology — an understanding of what it means to be a human being. In each vision, we can see a kind of liberation of human nature, an account of what mankind would be in the absence of privation. And in each vision, some latent human urges and longings emerge to dominate over others, pointing to two opposing inclinations we see in ourselves.

The first vision is that of the techno-optimist or -utopian: Thanks to the labor and intelligence of our machines, all our material wants are met and we are able to lead lives of religious fulfillment, practice our hobbies, and pursue our intellectual and creative interests.

Recall John Adams’s famous 1780 letter to Abigail: “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”[4] This is somewhat like the dream imagined in countless stories and films, in which our robots make possible a Golden Age that allows us to transcend crass material concerns and all become gardeners, artists, dreamers, thinkers, lovers.

By contrast, the other vision is the one depicted in the 2008 film WALL-E, and more darkly in many earlier stories — a future in which humanity becomes a race of Homer Simpsons, a leisure society of consumption and entertainment turned to endomorphic excess. The culminating achievement of human ingenuity, robotic beings that are smarter, stronger, and better than ourselves, transforms us into beings dumber, weaker, and worse than ourselves. TV-watching, video-game-playing blobs, we lose even the energy and attention required for proper hedonism: human relations wither and natural procreation declines or ceases. Freed from the struggle for basic needs, we lose a genuine impulse to strive; bereft of any civic, political, intellectual, romantic, or spiritual ambition, when we do have the energy to get up, we are disengaged from our fellow man, inclined toward selfishness, impatience, and lack of sympathy. Those few who realize our plight suffer from crushing ennui. Life becomes nasty, brutish, and long.

Personally, I don’t think either vision is quite right. I think each vision — the one in which we become more godlike, the other in which we become more like beasts — is a kind of deformation. There is good reason to challenge some of the technical claims and some of the aspirations of the AI cheerleaders, and there is good reason to believe that we are in important respects stuck with human nature, that we are simultaneously beings of base want and transcendent aspiration; finite but able to conceive of the infinite; destined, paradoxically, to be free.

Mr. Chairman, the rise of automation, robotics, and artificial intelligence raises many questions that extend far beyond the matters of economics and employment that we’ve discussed today — including practical, social, moral, and perhaps even existential questions. In the years ahead, legislators and regulators will be called upon to address these technological changes, to respond to some things that have already begun to take shape and to foreclose other possibilities. Knowing when and how to act will, as always, require prudence.

In the years ahead, as we contemplate both the blessings and the burdens of these new technologies, my hope is that we will strive, whenever possible, to exercise human responsibility, to protect human dignity, and to use our creations in the service of truly human flourishing.

Thank you.


[1] “Automation and Technological Change,” hearings before the Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, Congress of the United States, Eighty-fourth Congress, first session, October 14, 15, 17, 18, 24, 25, 26, 27, and 28, 1955 (Washington, D.C.: G.P.O., 1955).

[2]  Joseph V. Kennedy, Ending Poverty: Changing Behavior, Guaranteeing Income, and Reforming Government (Lanham, Md.: Rowman and Littlefield, 2008).

[3]  Arthur C. Clarke, quoted by Jerome Agel, “Cocktail Party” (column), The Realist 86, Nov.–Dec. 1969, page 32. This article is a teaser for a book Agel edited called The Making of Kubrick’s 2001 (New York: New American Library/Signet, 1970), where the same quotation from Clarke appears on page 311. Italics added. The full quote reads as follows: “The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”

[4]  John Adams to Abigail Adams (letter), May 12, 1780, Founders Online, National Archives. Source: The Adams Papers, Adams Family Correspondence, vol. 3, April 1778 – September 1780, eds. L. H. Butterfield and Marc Friedlaender (Cambridge, Mass.: Harvard, 1973), pages 341–343.

Transhumanists are searching for a dystopian future

As part of a Washington Post series this week about transhumanism, our own Charles T. Rubin offers some thoughts on why transhumanists are so optimistic when the pop-culture depictions of transhumanism nearly always seem to be dark and gloomy:

What accounts for this gap between how transhumanists see themselves — as rational proponents of a cause, who seek little more than to speed humanity along a path it already follows — and how they are seen in popular culture — as dangerous conspirators against human welfare? Movies and TV need drama and conflict, and it is possible that transhumanists just make trendy villains. And yet the transhumanists and the show writers are alike operating in the realm of imagination, of possible futures. In this case, I believe the TV writers have the richer and more nuanced imaginations that more closely resemble reality.

You can read the entire article here.

CRISPR and the Human Species

Over at TechCrunch, Jamie Metzl writes that we need to have a “species-wide conversation” about the use of gene-editing technologies like CRISPR, because these technologies could be used to alter the course of human evolution:

Nearly everybody wants to have cancers cured and terrible diseases eliminated. Most of us want to live longer, healthier and more robust lives. Genetic technologies will make that possible. But the very tools we will use to achieve these goals will also open the door to the selection for and ultimately manipulation of non-disease-related genetic traits — and with them a new set of evolutionary possibilities.

Transhumanists want to take control of human evolution because of their sense of radical dissatisfaction with our evolved nature; they believe, hubristically, that they have or can attain the wisdom and power to design mankind according to their own whims. Such schemes for redesigning the human species led eugenicists and totalitarians in the twentieth century to trample on the rights and interests of human beings in the service of their vision for the human species, and the terrible legacy of these movements should serve as a warning against attempting to take control over human evolution.

Does germline gene therapy necessarily represent such a hubristic, transhumanist attempt to alter the species? Or can the insistence that we avoid all forms of germline therapy also subordinate the rights and medical interests of human beings today to a vision of the human species and its future?

As I argue in an essay in the latest issue of The New Atlantis, the conversation that is needed should focus on ways to ensure that gene therapy is used to treat actual patients suffering from actual diseases — including, perhaps, unborn human beings who are at a demonstrable risk of genetic disease.

The task ahead of us is to distinguish between legitimate forms of therapy and illicit forms of genetic control over our descendants. These kinds of distinctions will be difficult to draw in theory, and even more difficult to enforce in practice, but doing so is neither impossible nor avoidable.

The logical end of mechanical progress

Marija Piliponyte

When, in 1937, George Orwell wanted to convey the dark side of “mechanical progress” to his readers, he wrote, “the logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle.” Of course, he said, it is not as if that is really our intention, “just as a man who drinks a bottle of whiskey a day does not actually intend to get cirrhosis of the liver.” But that, he argued, is where the socialists of his time seemed to want to take things. Their emphasis on doing away with work, effort and risk would lead to “some frightful subhuman depth of softness and helplessness.”

Now, very nearly eighty years later, we still don’t want to get cirrhosis. But in a review this week of the finally released Oculus Rift virtual-reality gaming system, Adi Robertson writes, “I love the feeling of getting real exercise in a virtual sword-fighting game, or of walking around a real room to see the artwork I’ve created. Sitting down with the Rift, meanwhile, feels as close to being a brain in a jar as humanly possible.” And just in case you might have missed this wonderful endorsement in what is a pretty long review, the “brain in a jar” quote is repeated as a pullout in a large font with purplish text.

So over time the brain in a bottle can become our intention, it can transform from nightmare scenario to selling point. By “progress” do we mean “slippery slope”?

Toward a Typology of Transhumanism

Years ago, James Hughes sought to typify the emerging political debate over transhumanism with a three-axis political scale, adding a biopolitical dimension to the familiar axes of social and fiscal libertarianism. But transhumanism is a very academic issue, both in the sense that many transhumanists, including Hughes, are academics, and in the sense that it is very removed from everyday practical concerns. So it may make more sense to characterize the different types of transhumanists in terms of the kinds of intellectual positions to which they adhere rather than by how they relate to positions on the political spectrum. As Zoltan Istvan’s wacky transhumanist presidential campaign shows us, transhumanism is hardly ready for prime time when it comes to American politics.

And so, I propose a continuum of transhumanist thought, to help observers understand the intellectual differences between some of its proponents — based on three different levels of support for human enhancement technologies.

First, the mildest form of transhumanism: those who embrace the human enhancement project, or reject most substantive limits to human enhancement, but who do not have a very concrete vision of what kinds of things human enhancement technology may be used for. In terms of their intellectual background, these mild transhumanists can be defined by their diversity rather than their unity. They adhere to some of the more respectable philosophical schools, such as pragmatism, various kinds of liberalism, or simply the thin, “formally rational” morality of mainstream bioethics. Many of these mild transhumanists are indeed professional bioethicists in good standing. Few, if any, of them would accept the label of “transhumanist” for themselves, but they reject the substantive arguments against the enhancement project, often in the name of enhancing the freedom of choice that individuals have to control their own bodies — or, in the case of reproductive technologies, the “procreative liberty” of parents to control the bodies of their children.

Second, the moderate transhumanists. They are not very philosophically diverse, but rather are defined by a dogmatic adherence to utilitarianism. Characteristic examples would include John Harris and Julian Savulescu, along with many of the academics associated with Oxford’s rather inaptly named Uehiro Centre for Practical Ethics. These thinkers, who nowadays also generally eschew the term “transhumanist” for themselves, apply a simple calculus of societal costs and benefits to moral questions concerning biotechnology, and conclude that the extensive use of biotechnology will usually end up improving human well-being. Unlike the liberals who oppose restrictions on enhancement in the name of freedom, these strident utilitarians treat liberty as a secondary value, and so some of them are comfortable with the idea of legally requiring or otherwise pressuring individuals to use enhancement technologies.

Some of their hobbyhorses include abandoning the act–omission distinction — that is, treating the failure to act as morally equivalent to acting. John Harris famously applied this to the problem of organ shortages when he argued that we should perhaps randomly kill innocent people to harvest their organs, since failing to procure organs for those who will die without them is little different from killing them. Grisly as it is, this argument is not quite a transhumanist one, since such organ donation would hardly constitute human enhancement, but it is clear how someone who accepts this kind of radical utilitarianism would go on to accept arguments for manipulating human biology in other outlandish schemes for maximizing “well-being.”

Third, the most extreme form of transhumanism is defined less by adherence to a philosophical position than by a kind of quixotic obsession with technology itself. Today, this obsession with technology manifests in the belief that artificial intelligence will completely transform the world through the Singularity and the uploading of human minds — although futurist speculations built on contemporary technologies have of course been around for a long time. Aldous Huxley’s classic novel Brave New World, for example, imagines a whole world designed in the image of the early-twentieth-century factory. Though this obsession with technology is not a philosophical position per se, today’s transhumanists have certainly built very elaborate intellectual edifices around the idea of artificial intelligence. Nick Bostrom’s recent book Superintelligence represents a good example of the kind of systematic work these extreme transhumanists have put into thinking through what a world completely shaped by information technology might be like.

*   *   *

Obviously there is a great deal of overlap between these three degrees of transhumanism, and the mildest stage in particular is really quite vaguely defined. If there is a kind of continuum along which these stages run, it would be one from relatively open-minded and ecumenical thinkers to those who are increasingly dogmatic and idiosyncratic in their views. The mild transhumanists are usually highly engaged with the real world of policymaking and medicine, and discuss a wide variety of ideas in their work. The moderate transhumanists are more committed to a particular philosophical approach, and the academics at Oxford’s Uehiro Centre for Practical Ethics who apply their dogmatic utilitarianism to moral problems usually end up with wildly impractical proposals. Though all of these advocates of human enhancement are enthusiastic about technology, for the extreme transhumanists, technology almost completely shapes their moral and political thought; and though their actual influence on public policy is thankfully limited for the time being, it is these more extreme folks, like Ray Kurzweil and Nick Bostrom, and arguably Eric Drexler and the late Robert Ettinger, who tend to be most often profiled in the press and to have a popular following.

Future Selves

In the latest issue of the Claremont Review of Books, political philosopher Mark Blitz — a professor at Claremont McKenna College — has an insightful review of Eclipse of Man, the new book from our own Charles T. Rubin. Blitz writes:

What concerns Charles Rubin in Eclipse of Man is well conveyed by his title. Human beings stand on the threshold of a world in which our lives and practices may be radically altered, and our dominance no longer assured. What began a half-millennium ago as a project to reduce our burdens threatens to conclude in a realm in which we no longer prevail. The original human subject who was convinced to receive technology’s benefits becomes unrecognizable once he accepts the benefits, as if birds were persuaded to become airplanes. What would remain of the original birds? Indeed, we may be eclipsed altogether by species we have generated but which are so unlike us that “we” do not exist at all — or persist only as inferior relics, stuffed for museums. What starts as Enlightenment ends in permanent night….

Rubin’s major concern is with the contemporary transhumanists (the term he chooses to cover a variety of what from his standpoint are similar positions) who both predict and encourage the overcoming of man.

Blitz praises Rubin for his “fair, judicious, and critical summaries” of the transhumanist authors he discusses, and says the author “approaches his topic with admirable thoughtfulness and restraint.”

Some of the subjects Professor Blitz raises in his review essay are worth considering and perhaps debating at greater length, but I would just like to point out one of them. Blitz mentions several kinds of eternal things — things that we are stuck with no matter what the future brings:

One question involves the goods or perfections that our successors might seek or enjoy. Here, I might suggest that these goods cannot change as such, although our appreciation of them may. The allure of promises for the future is connected to the perfections of truth, beauty, and virtue that we currently desire. How could one today argue reasonably against the greater intelligence, expanded artistic talent, or improved health that might help us or those we love realize these goods? Who would now give up freedom, self-direction, and self-reflection?…

There are still other limits that no promise of transhuman change can overcome. These are not only, or primarily, mathematical regularities or apparent scientific laws; they involve inevitable scarcities or contradictions. Whatever happens “virtually,” there are only so many actual houses on actual beautiful beaches. Honesty differs from lying, the loyal and true differ from the fickle and untrustworthy, fame and power cannot belong both to one or a few and to everyone. These limits will set some of the direction for the distribution of goods and our attachment to them, either to restrain competition or to encourage it. They will thus also help to organize political life. Regulating differences of opinion, within appropriate freedom, and judging among the things we are able to choose will remain necessary.

Nonetheless, even if it is true that what we (or any rational being) may properly consider to be good is ultimately invariable, and even if the other limits I mentioned truly exist, our experience of such matters presumably will change as many good things become more available, and as we alter our experience of what is our own — birth, death, locality, and the body.

Let us look carefully at the items listed in this very rich passage. Blitz does not refer to security and health and long life, the goods that modernity arguably emphasizes above all others. Instead, Blitz begins by mentioning the goods of “the perfections of truth, beauty, and virtue.” These are things that “we currently desire” but that also “cannot change as such, although our appreciation of them may.”

Let us set aside for now beauty — which is very complicated, and which may be the item in Blitz’s Platonic triad that would perhaps be likeliest to be transformed by a radical shift in human nature — and focus on truth and virtue. How can they be permanent, unchanging things?

To understand how truth and virtue can be eternal goods, see how Blitz turns to physical realities — the kinds of scarcities of material resources that Malthus and Darwin would have noticed, although those guys tended to think more in terms of scarcities of food than of beach houses. Blitz also mentions traits that seem ineluctably to arise from the existence of those physical limitations. The clash of interests will inevitably lead to scenarios in which there will be “differences of opinions” and in which some actors may be more or less honest, more or less trustworthy. There will arise situations in which honesty can be judged differently from lying, loyalty from untrustworthiness. “Any rational being,” including presumably any distant descendant of humanity, will prize truth and virtue. They are arguably pre-political and pre-philosophical — they are facts of humanity and society that arise from the facts of nature — but they “help to organize political life.”

And yet this entire edifice is wiped away in the last paragraph quoted above. “Our experience” of truth and virtue, Blitz notes, “presumably will change” as our experience of “birth, death, locality, and the body” changes. Still, we may experience truth and virtue differently, but they will continue to provide the goals of human striving, right?

Yet consider some of the transhumanist dreams on offer: a future where mortality is a choice, a future where individual minds merge and melt together into machine-aided masses, a future where the resources of the universe are absorbed and reordered by our man-machine offspring to make a vast “extended thinking entity.” Blitz may be right that “what is good … cannot in the last analysis be obliterated,” but if we embark down the path to the posthuman, our descendants may, in exchange for vast power over themselves and over nature, lose forever the ability to “properly orient” themselves toward the goods of truth and virtue.

Read the whole Blitz review essay here; subscribe to the Claremont Review of Books here; and order a copy of Eclipse of Man here.

Do We Love Robots Because We Hate Ourselves?

A piece published today by our very own Ari N. Schulman:

… Even as the significance of the Turing Test has been challenged, its attitude continues to characterize the project of strong artificial intelligence. AI guru Marvin Minsky refers to humans as “meat machines.” To roboticist Rodney Brooks, we’re no more than “a big bag of skin full of biomolecules.” One could fill volumes with these lovely aphorisms from AI’s leading luminaries.

And for the true believers, these are not gloomy descriptions but gleeful mandates. AI’s most strident supporters see it as the next step in our evolution. Our accidental nature will be replaced with design, our frail bodies with immortal software, our marginal minds with intellect of a kind we cannot now comprehend, and our nasty and brutish meat-world with the infinite possibilities of the virtual. 

Most critics of heady AI predictions do not see this vision as remotely plausible. But lesser versions might be — and it’s important to ask why many find it so compelling, even if it doesn’t come to pass. Even if “we” would survive in some vague way, this future is one in which the human condition is done away with. This, indeed, seems to be the appeal….

To read the whole thing, click here.