the asymptote of utopia

I just re-read Kim Stanley Robinson’s magnificent Mars trilogy — about which I hope to teach a class someday — and every time I go back to those books I find myself responding differently, and to different elements of the story. Which is a sign of how good they are, I think.

Some have described the Mars trilogy as a kind of utopia, but I don’t think that’s right. Even at the end Mars remains a world with problems, though it must be said that most of them come from Earth. Mars itself has become a pretty stable social order, and even the strongest opponent within the book of how it got that way thinks, in the last paragraph of the final volume, that “Nowhere on this world were people killing each other, nowhere were they desperate for shelter or food, nowhere were they scared for their kids. There was that to be said.” There’s no guarantee that the social order will remain so beneficent, but I think KSR wants us to believe that as time goes by stable harmony becomes more and more strongly established, more difficult to displace. Thus one of his minor characters, Charlotte Dorsa Brevia, is a “metahistorian” who argues for a

broad general movement in history which commentators called her Big Seesaw, a movement from the deep residuals of the dominance hierarchies of our primate ancestors on the savanna, toward the very slow, uncertain, difficult, unpredetermined, free emergence of a pure harmony and equality which would then characterize the very truest democracy. Both of these long-term clashing elements had always existed, Charlotte maintained, creating the big seesaw, with the balance between them slowly and irregularly shifting, over all human history: dominance hierarchies had underlain every system ever realized so far, but at the same time democratic values had been always a hope and a goal, expressed in every primate’s sense of self, and resentment of hierarchies that after all had to be imposed, by force. And so as the seesaw of this meta-metahistory had shifted balance over the centuries, the noticeably imperfect attempts to institute democracy had slowly gained power.

This increasingly stable harmony happens, I think it’s clear, primarily because the First Hundred who colonized Mars are almost all scientists, and as scientists take a rational, empirical approach to solving political problems. That is, the initial conditions of human habitation on Mars are rooted in the practices of science — which is one of the things that leads, much later on, to the first President of Mars being an engineer, which is to say, a pragmatic problem-solver. The politics of solutionism is the best politics, it appears.

However: it’s noteworthy that the people who do the most to shape the ultimate formation of Mars — political, social, and physical — are three characters who are almost invisible in the story, interacting very little with the story’s protagonists (who happen to be the most famous, not just on Mars but also on Earth). Vlad Taneev, Ursula Kohl, and Marina Tokareva work together on a variety of projects: Vlad and Ursula develop the longevity treatments that enable humans to dramatically increase their lifespans; Vlad and Marina work on “areobotany,” that is, adapting plants to the Martian environment; and the three of them together develop an “eco-economics,” that is, a political economy keyed to ecological health — a kind of systematically worked-out version of what KSR refers to in other contexts as the flourishing-of-the-land ethos of Aldo Leopold.

We hear almost nothing directly from this triumvirate during the course of the story, because they basically stay in their lab and work all the time. This is sometimes frustrating for the story’s protagonists, who are always directly involved in political events, risking life and limb, giving up their scientific projects in order to serve the common good (or, in the case of Sax Russell, applying technological solutions directly, and sometimes recklessly, to political and social problems). But while KSR makes it clear to us that the protagonists’ work is supremely valuable, he makes it equally clear that they could achieve far less without the isolated, ascetic, constant labor of Vlad, Ursula, and Marina.

So: scientists in the lab + scientists in the public arena = if not quite Utopia something asymptotically approaching it. A Big Seesaw, yes, but the amplitude of its oscillations grows ever smaller, almost to the point, as the story comes to an end, that they’re impossible to discern. In short, an epistocracy. It’s not a simplistic model, like Neil deGrasse Tyson’s proposed Rationalia: KSR understands the massive complexities of human interaction, and one of the best elements of the book is his portrayal of how the paradigmatically socially inept lab-rat Sax Russell comes to understand them as well. But the story really does display a great deal of confidence that if we put the scientists in charge things are going to get much better.

In his superb history of science fiction — now in a fancy new second edition — Adam Roberts writes,

With a few exceptions all [KSR’s] characters are decent human beings, with functional quantities of empathy and a general desire to make things work for the whole. Robinson’s position seems to be that, statistical outliers aside, we all basically want to get along, to not hurt other people, to live in balance…. That niceness — the capacity for collective work towards a common goal, the tendency not to oppress or exploit — is common to almost all the characters Robinson has written. His creations almost always lack inner cruelty, or mere unmotivated spitefulness, which may be a good thing. I’m not saying he’s wrong about human nature, either — although it is more my wish than my belief. What it does mean is that Robinson writes novels that tend to the asymptote of utopia, without actually attempting to represent that impossible goal.

(When I decided to insert that passage in this post, I didn’t remember that Adam had used the language of the asymptote, which I also employ above. Great minds do think alike, after all. I shall acknowledge the probably unconscious influence of Adam’s thinking on my own in my title.)

So I have two questions:

1) Are natural scientists the true epistoi?

2) How might the case for epistocracy of any kind be altered if we take the position that human beings are not nearly as nice as KSR thinks they are?

Automation, Robotics, and the Economy

The Joint Economic Committee — a congressional committee with members from both the Senate and the House of Representatives — invited me to testify in a hearing yesterday on “the transformative impact of robots and automation.” The other witnesses were Andrew McAfee, an M.I.T. professor and coauthor of The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (his written testimony is here) and Harry Holzer, a Georgetown economist who has written about the relationship between automation and the minimum wage (his written testimony is here).

My written testimony appears below, slightly edited to include a couple things that arose during the hearing. Part of the written testimony is based on an essay I wrote a few years ago with Ari Schulman called “The Problem with ‘Friendly’ Artificial Intelligence.” Video of the entire hearing can be found here.

*   *   *

Testimony Presented to the Joint Economic Committee:
The Transformative Impact of Robots and Automation
Adam Keiper
Fellow, Ethics and Public Policy Center
Editor, The New Atlantis
May 25, 2016

Mr. Chairman, Ranking Member Maloney, and members of the committee, thank you for the opportunity to participate in this important hearing on robotics and automation. These aspects of technology have already had widespread economic consequences, and in the years ahead they are likely to profoundly reshape our economic and social lives.

Today’s hearing is not the first time Congress has discussed these subjects. In fact, in October 1955, a subcommittee of this very committee held a hearing on automation and technological change.[1] That hearing went on for two weeks, with witnesses mostly drawn from industry and labor. It is remarkable how much of the public discussion about automation today echoes the ideas debated in that hearing. Despite vast changes in technology, in the economy, and in society over the past six decades, many of the worries, the hopes, and the proposed solutions suggested in our present-day literature on automation, robotics, and employment would sound familiar to the members and witnesses present at that 1955 hearing.

It would be difficult to point to any specific policy outcomes from that old hearing, but it is nonetheless an admirable example of responsible legislators grappling with immensely complicated questions. A free people must strive to govern its technologies and not passively be governed by them. So it is an honor to be a part of that tradition with today’s hearing.

In my remarks, I wish to make five big, broad points, some of them obvious, some more counterintuitive.

A good place to start discussions of this sort is with a few words of gratitude and humility. Gratitude, that is, for the many wonders that automation, robotics, and artificial intelligence have already made possible. They have made existing goods and services cheaper, and helped us to create new kinds of goods and services, contributing to our prosperity and our material wellbeing.

And humility because of our limited ability to peer into the future. When reviewing the mountains of books and magazine articles that have sought to predict what the future holds in automation and related fields, when reading the hyped tech headlines, or when looking at the many charts and tables extrapolating from the past to help us forecast the future, it is striking to see how often our predictions go wrong.

Very little energy has been invested in systematically understanding why futurism fails — that is, why, beyond the simple fact that the future hasn’t happened yet, we have generally not been very good at predicting what it will look like. For the sake of today’s discussion, I want to raise just a few points, each of which can be helpful in clarifying our thinking when it comes to automation and robotics.

First there is the problem of timeframes. Very often, economic analyses and tech predictions about automation discuss kinds of jobs that are likely to be automated without any real discussion of when. This leads to strange conversations, as when one person is interested in what the advent of driverless vehicles might mean for the trucking industry, and his interlocutor is more interested in, say, the possible rise of artificial superintelligences that could wipe out all life on Earth. The timeframes under discussion at any given moment ought to be explicitly stated.

Second there is the problem of context. Debates about the future of one kind of technology rarely take into account other technologies that might be developed, and how those other technologies might affect the one under discussion. When one area of technology advances, others do not just stand still. How might automation and robotics be affected by developments in energy use and storage, or advanced nanotechnology (sometimes also called molecular manufacturing), or virtual reality and augmented reality, or brain-machine interfaces, or various biotechnologies, or a dozen other fields?

And of course it’s not only other technologies that evolve. In order to be invented, built, used, and sustained, all technologies are enmeshed in a web of cultural practices and mores, and legal and political norms. These things do not stand still either — and yet when discussing the future of a given technology, rarely is attention paid to the way these things touch upon one another.

All of which is to say that, as you listen to our conversation here today, or as you read books and articles about the future of automation and robotics, try to keep in mind what I call the “chain of uncertainties”:

Just because something is conceivable or imaginable
does not mean it is possible.
Even if it is possible, that does not mean it will happen.
Even if it happens, that does not mean it will happen in the way you envisioned.
And even if it happens in something like the way you envisioned,
there will be unintended, unexpected consequences.

Automation is not new. For thousands of years we have made tools to help us accomplish difficult or dangerous or dirty or tedious or tiresome tasks, and in some sense today’s new tools are just extensions of what came before. And worries about automation are not new either — they date back at least to the early days of the Industrial Revolution, when the Luddites revolted in England over the mechanization and centralization of textile production. As I mentioned above, this committee was already discussing automation some six decades ago — thinking about thinking machines and about new mechanical modes of manufacturing.

What makes today any different?

There are two reasons today’s concerns about automation are fundamentally different from what came before. First, the kinds of “thinking” that our machines are capable of doing are changing, so that it is becoming possible to hand off to our machines ever more of our cognitive work. As computers advance and as breakthroughs in artificial intelligence (AI) chip away at the list of uniquely human capacities, it becomes possible to do old things in new ways and to do new things we have never before imagined.

Second, we are also instantiating intelligence in new ways, creating new kinds of machines that can navigate and move about in and manipulate the physical world. Although we have for almost a century imagined how robotics might transform our world, the recent blizzard of technical breakthroughs in movement, sensing, control, and (to a lesser extent) power is bringing us for the first time into a world of autonomous, mobile entities that are neither human nor animal.

To simplify a vast technical and economic literature, there are basically three futurist scenarios for what the next several decades hold in automation, robotics, and artificial intelligence:

Scenario 1 – Automation and artificial intelligence will continue to advance, but at a pace sufficiently slow that society and the economy can gradually absorb the changes, so that people can take advantage of the new possibilities without suffering the most disruptive effects. The job market will change, but in something like the way it has evolved over the last half-century: some kinds of jobs will disappear, but new kinds of jobs will be created, and by and large people will be able to adapt to the shifting demands on them while enjoying the great benefits that automation makes possible.

Scenario 2 – Automation, robotics, and artificial intelligence will advance very rapidly. Jobs will disappear at a pace that will make it difficult for the workforce to adapt without widespread pain. The kinds of jobs that will be threatened will increasingly be jobs that had been relatively immune to automation — the “high-skilled” jobs that generally involved creativity and problem-solving, and the “low-skilled” jobs that involved manual dexterity or some degree of adaptability and interpersonal relations. The pressures on low-skilled American workers will exacerbate the pressures already felt because of competition against foreign workers paid lower wages. Among the disappearing jobs may be those at the lower-wage end of the spectrum that we have counted on for decades to instill basic workplace skills and values in our young people, and that have served as a kind of employment safety net for older people transitioning in their lives. And the balance between labor and capital may (at least for a time) shift sharply in favor of capital, as the share of gross domestic product (GDP) that flows to the owners of physical capital (e.g., the owners of artificial intelligences and robots) rises and the share of GDP that goes to workers falls. If this scenario unfolds quickly, it could involve severe economic disruption, perhaps social unrest, and maybe calls for political reform. The disconnect between productivity and employment and income in this scenario also highlights the growing inadequacy of GDP as our chief economic statistic: it can still be a useful indicator in international competition, but as an indicator of economic wellbeing, or as a proxy for the material satisfaction or happiness of the American citizen, it is clearly not succeeding.

Scenario 3 – Advances in automation, robotics, and artificial intelligence will produce something utterly new. Even within this scenario, the range of possibilities is vast. Perhaps we will see the creation of “emulations,” minds that have been “uploaded” into computers. Perhaps we will see the rise of powerful artificial “superintelligences,” unpredictable and dangerous. Perhaps we will reach a “Singularity” moment after which everything that matters most will be different from what came before. These types of possibilities are increasingly matters of discussion for technologists, but their very radicalness makes it difficult to say much about what they might mean at a human scale — except insofar as they might involve the extinction of humanity as we know it. [NOTE: During the hearing, Representative Don Beyer asked me whether he and other policymakers should be worried about consciousness emerging from AI; he mentioned Elon Musk and Stephen Hawking as two individuals who have suggested we should worry about this. “Think Terminator,” he said. I told him that these possibilities “at the moment … don’t rise to the level of anything that anyone on this committee ought to be concerned about.”]

One can make a plausible case for each of these three scenarios. But rather than discussing their likelihood or examining some of the assumptions and aspirations inherent in each scenario, in the limited time remaining, I am going to turn to three other broad subjects: some of the legal questions raised by advances in artificial intelligence and automation; some of the policy ideas that have been proposed to mitigate some of the anticipated effects of these changes; and a deeper understanding of the meaning of work in human life.

The advancement of artificial intelligence and autonomous robots will raise questions of law and governance that scholars are just beginning to grapple with. These questions are likely to have growing economic and perhaps political consequences in the years to come, no matter which of the three scenarios above you consider likeliest.

The questions we might be expected to face will emerge in matters of liability and malpractice and torts, property and contractual law, international law, and perhaps laws related to legal personhood. Although there are precedents — sometimes in unusual corners of the law — for some of the questions we will face, others will arise from the very novelty of the artificial autonomous actors in our midst.

By way of example, here are a few questions, starting with one that has already made its way into the mainstream press:

  • When a self-driving vehicle crashes into property or harms a person, who is liable? Who will pay damages?
  • When a patient is harmed or dies during a surgical operation conducted by an autonomous robotic device upon the recommendation of a human physician, who is liable and who pays?
  • If a robot is autonomous but is not considered a person, who owns the creative works it produces?
  • In a combat setting, who is to be held responsible, and in what way, if an autonomous robot deployed by the U.S. military kills civilian noncombatants in violation of the laws of war?
  • Is there any threshold of demonstrable achievement — any performed ability or set of capacities — that a robot or artificial intelligence could cross in order to be entitled to legal personhood?

These kinds of questions raise matters of justice, of course, but they have economic implications as well — not only in terms of the money involved in litigating cases, but in terms of the effects that the legal regime in place will have on the further development and implementation of artificial intelligence and robotics. It will be up to lawyers and judges, and lawmakers at the federal, state, and local levels, to work through these and many other such matters.

There are, broadly speaking, two kinds of ideas that have most often been set forth in recent years to address the employment problems that may be created by an increasingly automated and AI-dominated economy.

The first category involves adapting workers to the new economy. The workers of today, and even more the workers of tomorrow, will need to be able to pick up and move to where the jobs are. They should engage in “lifelong learning” and “upskilling” whenever they can, to make themselves as attractive as possible to future employers. Flexibility must be their byword.

Of course, education and flexibility are good things; they can make us resilient in the face of the “creative destruction” of a churning free economy. Yet we must remember that “workers” are not just workers; they are not just individuals free and detached and able to go wherever and do whatever the market demands. They are also members of families — children and parents and siblings and so on — and members of communities, with the web of connections and ties those memberships imply. And maximizing flexibility can be detrimental to those kinds of relationships, relationships that are necessary for human flourishing.

The other category of proposal involves a universal basic income — or what is sometimes called a “negative income tax” — guaranteed to every individual, even if he or she does not work. This can sound, in our contemporary political context, like a proposal for redistributing wealth, and it is true that there are progressive theorists and anti-capitalist activists who support it. But this idea has also been discussed favorably for various reasons by prominent conservative and libertarian thinkers. It is an intriguing idea, and one without many real-life models that we can study (although Finland is currently contemplating an interesting partial experiment).

A guaranteed income certainly would represent a sea change in our nation’s economic system and a fundamental transformation in the relationship between citizens and the state, but perhaps this transformation would be suited to the technological challenge we may face in the years ahead. Some of the smartest and most thoughtful analysts have discussed how to avoid the most obvious problems a guaranteed income might create — such as the problem of disincentivizing work. Especially provocative is the depiction of guaranteed income that appears in a 2008 book written by Joseph V. Kennedy, a former senior economist with the Joint Economic Committee; in his version of the policy, the guaranteed income would be structured in such a way as to encourage a number of good behaviors. Anyone interested in seriously considering guaranteed income should read Kennedy’s book.[2]

Should we really be worrying so much about the effects of robots on employment? Maybe with the proper policies in place we can get through a painful transition and reach a future date when we no longer need to work. After all, shouldn’t we agree with Arthur C. Clarke that “The goal of the future is full unemployment”?[3] Why work?

This notion, it seems to me, raises deep questions about who and what we are as human beings, and the ways in which we find purpose in our lives. A full discussion of this subject would require drinking deeply of the best literary and historical investigations of work in human life — examining how work is not only toil for which we are compensated, but how it also can be a source of dignity, structure, meaning, friendship, and fulfillment.

For present purposes, however, I want to just point to two competing visions of the future as we think about work. Because, although science fiction offers us many visions of the future in which man is destroyed by robots, or merges with them to become cyborgs, it offers basically just two visions of the future in which man coexists with highly intelligent machines. Each of these visions has an implicit anthropology — an understanding of what it means to be a human being. In each vision, we can see a kind of liberation of human nature, an account of what mankind would be in the absence of privation. And in each vision, some latent human urges and longings emerge to dominate over others, pointing to two opposing inclinations we see in ourselves.

The first vision is that of the techno-optimist or -utopian: Thanks to the labor and intelligence of our machines, all our material wants are met and we are able to lead lives of religious fulfillment, practice our hobbies, pursue our intellectual and creative interests.

Recall John Adams’s famous 1780 letter to Abigail: “I must study Politicks and War that my sons may have liberty to study Mathematicks and Philosophy. My sons ought to study Mathematicks and Philosophy, Geography, natural History, Naval Architecture, navigation, Commerce and Agriculture, in order to give their Children a right to study Painting, Poetry, Musick, Architecture, Statuary, Tapestry and Porcelaine.”[4] This is somewhat like the dream imagined in countless stories and films, in which our robots make possible a Golden Age that allows us to transcend crass material concerns and all become gardeners, artists, dreamers, thinkers, lovers.

By contrast, the other vision is the one depicted in the 2008 film WALL-E, and more darkly in many earlier stories — a future in which humanity becomes a race of Homer Simpsons, a leisure society of consumption and entertainment turned to endomorphic excess. The culminating achievement of human ingenuity, robotic beings that are smarter, stronger, and better than ourselves, transforms us into beings dumber, weaker, and worse than ourselves. TV-watching, video-game-playing blobs, we lose even the energy and attention required for proper hedonism: human relations wither and natural procreation declines or ceases. Freed from the struggle for basic needs, we lose a genuine impulse to strive; bereft of any civic, political, intellectual, romantic, or spiritual ambition, when we do have the energy to get up, we are disengaged from our fellow man, inclined toward selfishness, impatience, and lack of sympathy. Those few who realize our plight suffer from crushing ennui. Life becomes nasty, brutish, and long.

Personally, I don’t think either vision is quite right. I think each vision — the one in which we become more godlike, the other of which we become more like beasts — is a kind of deformation. There is good reason to challenge some of the technical claims and some of the aspirations of the AI cheerleaders, and there is good reason to believe that we are in important respects stuck with human nature, that we are simultaneously beings of base want and transcendent aspiration; finite but able to conceive of the infinite; destined, paradoxically, to be free.

Mr. Chairman, the rise of automation, robotics, and artificial intelligence raises many questions that extend far beyond the matters of economics and employment that we’ve discussed today — including practical, social, moral, and perhaps even existential questions. In the years ahead, legislators and regulators will be called upon to address these technological changes, to respond to some things that have already begun to take shape and to foreclose other possibilities. Knowing when and how to act will, as always, require prudence.

In the years ahead, as we contemplate both the blessings and the burdens of these new technologies, my hope is that we will strive, whenever possible, to exercise human responsibility, to protect human dignity, and to use our creations for the improvement of truly human flourishing.

Thank you.


[1] “Automation and Technological Change,” hearings before the Subcommittee on Economic Stabilization of the Joint Committee on the Economic Report, Congress of the United States, Eighty-fourth Congress, first session, October 14, 15, 17, 18, 24, 25, 26, 27, and 28, 1955 (Washington, D.C.: G.P.O., 1955).

[2]  Joseph V. Kennedy, Ending Poverty: Changing Behavior, Guaranteeing Income, and Reforming Government (Lanham, Md.: Rowman and Littlefield, 2008).

[3] Arthur C. Clarke, quoted by Jerome Agel, “Cocktail Party” (column), The Realist 86, Nov.–Dec. 1969, page 32. This article is a teaser for a book Agel edited called The Making of Kubrick’s 2001 (New York: New American Library/Signet, 1970), where the same quotation from Clarke appears on page 311. Italics added. The full quote reads as follows: “The goal of the future is full unemployment, so we can play. That’s why we have to destroy the present politico-economic system.”

[4] John Adams to Abigail Adams (letter), May 12, 1780, Founders Online, National Archives. Source: The Adams Papers, Adams Family Correspondence, vol. 3, April 1778 – September 1780, eds. L. H. Butterfield and Marc Friedlaender (Cambridge, Mass.: Harvard, 1973), pages 341–343.

on sustainability

Makoko neighborhood, Lagos Lagoon

Ross Douthat writes:

It’s possible to believe that climate change is happening while doubting that it makes “the present world system … certainly unsustainable,” as the pope suggests. Perhaps we’ll face a series of chronic but manageable problems instead; perhaps “radical change” can, in fact, be persistently postponed.

Indeed, perhaps our immediate future fits neither the dynamist nor the catastrophist framework.

We might have entered a kind of stagnationist position, a sustainable decadence, in which the issues Pope Francis identifies percolate without reaching a world-altering boil.

In that case, the deep critique our civilization deserves will have to be advanced without the threat of imminent destruction. The arguments in “Laudato Si’” will still resonate, but they will have to be structured around a different peril: Not a fear that the particular evils of our age can’t last, but the fear that actually, they can.

I think this is a very powerful response, but one that needs unpacking. The key terms are “sustainable” and “manageable,” and the key questions are “Sustainable for whom?” and “Manageable by whom?”

(Please note that what follows is written under the assumption that the standard predictions are right: that anthropogenic climate change exists and will continue, that temperatures and sea levels will rise, etc. If those predictions are wrong and the climate does not alter significantly, then “the present world system” will continue to function — unless rendered unsustainable for wholly other reasons.)

To write as Ross does here is to take a government’s-eye view of the matter — or perhaps a still higher-level view. One example: Rising sea levels will be neither sustainable nor manageable for poor people whose homes are drowned, and who will have to move inland, perhaps in some cases into refugee camps. But it is unlikely that these people will be able to stage a successful rebellion against the very political order that has left them in poverty. Resources will need to be diverted to manage them; but in the developed world that will probably be possible.

In poorer countries with less extensive political infrastructures, chaos could ensue. But those countries are typically not essential to the functioning of “the present world system,” and indeed, the people who run that system may find the resources of such countries easier to exploit when they become politically incoherent. Thus it’s not hard to imagine, as a long-term consequence of climate change, multinational corporations becoming ever more important and influential — a scenario imagined in some detail by Kim Stanley Robinson in his Mars Trilogy. In such an environment, “the present world system” might actually become more rather than less secure.

In light of these thoughts, it might be worthwhile to look at the whole paragraph in which the Pope deems the current order “unsustainable”:

On many concrete questions, the Church has no reason to offer a definitive opinion; she knows that honest debate must be encouraged among experts, while respecting divergent views. But we need only take a frank look at the facts to see that our common home is falling into serious disrepair. Hope would have us recognize that there is always a way out, that we can always redirect our steps, that we can always do something to solve our problems. Still, we can see signs that things are now reaching a breaking point, due to the rapid pace of change and degradation; these are evident in large-scale natural disasters as well as social and even financial crises, for the world’s problems cannot be analyzed or explained in isolation. There are regions now at high risk and, aside from all doomsday predictions, the present world system is certainly unsustainable from a number of points of view, for we have stopped thinking about the goals of human activity. “If we scan the regions of our planet, we immediately see that humanity has disappointed God’s expectations”.

The key phrase here is “from a number of points of view.” It might be that national governments remain stable, that the worldwide economic order continues in its present form, and yet the whole enterprise genuinely is unsustainable in ecological and moral terms — in terms of what damage to the earth and to human well-being the system inflicts. Devastation to the created order, of which humanity is a part, may prove to be politically sustainable, but it will be devastation nonetheless.

the rich are different

In his great autobiographical essay “Such, Such Were the Joys,” George Orwell remembers his schooldays:

There never was, I suppose, in the history of the world a time when the sheer vulgar fatness of wealth, without any kind of aristocratic elegance to redeem it, was so obtrusive as in those years before 1914. It was the age when crazy millionaires in curly top-hats and lavender waistcoats gave champagne parties in rococo house-boats on the Thames, the age of diabolo and hobble skirts, the age of the ‘knut’ in his grey bowler and cut-away coat, the age of The Merry Widow, Saki’s novels, Peter Pan and Where the Rainbow Ends, the age when people talked about chocs and cigs and ripping and topping and heavenly, when they went for divvy week-ends at Brighton and had scrumptious teas at the Troc. From the whole decade before 1914 there seems to breathe forth a smell of the more vulgar, un-grown-up kind of luxury, a smell of brilliantine and crème-de-menthe and soft-centred chocolates — an atmosphere, as it were, of eating everlasting strawberry ices on green lawns to the tune of the Eton Boating Song. The extraordinary thing was the way in which everyone took it for granted that this oozing, bulging wealth of the English upper and upper-middle classes would last for ever, and was part of the order of things. After 1918 it was never quite the same again. Snobbishness and expensive habits came back, certainly, but they were self-conscious and on the defensive. Before the war the worship of money was entirely unreflecting and untroubled by any pang of conscience. The goodness of money was as unmistakable as the goodness of health or beauty, and a glittering car, a title or a horde of servants was mixed up in people’s minds with the idea of actual moral virtue.

What follows is purely subjective and impressionistic, but: I think in America in 2013 we’re back to that point, back, that is, to an environment in which “the worship of money [is] entirely unreflecting and untroubled by any pang of conscience.”

We have plenty of evidence that the very rich are deficient in generosity and lacking in basic human empathy, and yet there seems to be a general confidence in the very rich — a widespread belief that those who have amassed great wealth, by whatever means, can be trusted to fix even the most intractable social problems.

Consider in this light, and as just one example, the widespread enthusiasm for the rise of the MOOC. The New York Times called 2012 The Year of the MOOC in an article composed almost wholly of MOOC-makers’ talking-points, and even when the most prominent advocate of MOOCs abandons them as a lost cause, he still gets reverential puff-pieces. Some people can do no wrong. They just have to have enough money — and to have gotten it in the right way.

I think this “entirely unreflecting” “worship of money” is sustained by one thing above all: wealth-acquisition in America today, in comparison to wealth-acquisition in the Victorian age or across the Pacific in China, feels clean. Pixel-based and sootless. No sweatshops in sight — those are well-hidden in other parts of the world. We may happen to find out that Amazon’s warehouses aren’t that different from sweatshops, but that doesn’t seem to make much of a difference, in large part because our own dealings with Amazon are so frictionless and, again, clean: no handing over of cash, not even credit cards after you enter your number that first time, just pointing and clicking and waiting for the package to show up on your porch. Oh look, there it is. Not only are the actual conditions of production hidden, but even the nature of the transaction is invisible, de-materialized. (I could be talking about MOOCs here as well: they work the same way.)

It’s almost impossible to think of Jeff Bezos or Steve Jobs or Sebastian Thrun as robber baron industrialists or even as captains of industry, even if the occasional article appears identifying them as such, because what they do doesn’t fit our imaginative picture of “industry.” They seem more like the economic version of the Mr. Fusion Home Energy Reactor in Doc’s DeLorean: you just throw any old crap in and pure hi-res digital money comes out.

Cue Donald Fagen:

Just machines to make big decisions
Programmed by fellas with compassion and vision
We’ll be clean when that work is done
We’ll be eternally free, yes, and eternally young

Happy Thanksgiving, everybody.

pay the writer?

Philip Hensher has a point:

Frustration spilled out on Facebook after a University of Cambridge professor of modern German and comparative culture, Andrew Webber, branded the acclaimed literary novelist Philip Hensher priggish and ungracious when the author refused to write an introduction to the academic’s forthcoming guide to Berlin literature for free.

Hensher said: “He’s written a [previous] book about writers in Berlin during the 20th century, but how does he think that today’s writers make a living? It shows a total lack of support for how writers can live. I’m not just saying it for my sake: we’re creating a world where we’re making it impossible for writers to make a living.”

Hensher, who was shortlisted for the Man Booker prize in 2008 for his novel The Northern Clemency, a portrait of Britain’s social landscape through the Thatcher era, wrote his first two novels while working a day job, but said: “I always had an eye to when I would make a living from it.”

“If people who claim to respect literature – professors of literature at Cambridge University – expect it, then I see no future for young authors. Why would you start on a career where it’s not just impossible, but improper, to expect payment?”

What Andrew Webber seems to be forgetting is that he has a day job, and for those of us in that situation the rules may be different — in fact, surely the rules are different, but I’m just not sure precisely how.

Almost everyone understands that when you write a book (whether academic or popular) you’ll be paid royalties as a percentage of sales; almost everyone understands that when you write an academic article you won’t be paid at all except insofar as publication itself is a kind of currency that you may be able to exchange for tenure or promotion or a more attractive position elsewhere. And in any case doing such writing is part of the academic job description. This kind of publication rarely has certain and measurable value; but as a general proposition its value is clear — for academics. However, it’s completely unfair and unreasonable to expect non-academics to write for no money when they’re not getting anything else for it either: every professional writer should join in the Harlan Ellison Chorus: PAY THE WRITER.

That said, there are a great many fuzzy areas here, especially in relation to online writing, because every major outlet is constantly starved for new content — more content than almost any outlet can reasonably be expected to pay, or pay more than a pittance, for. Thus Slate’s Future Tense blog asked to re-post a post I wrote here — but of course did not offer to pay for it. I said yes, but should I have?

I didn’t really expect to get anything out of it — I suppose a couple of people clicked over to this blog, but I think few common convictions are less supported by evidence than the one that says you get “publicity value” by “getting your name out there.” (No direct route from there to cash on the barrelhead.) But it didn’t seem as though it would be hurting anyone, so why not?

Well, one might argue that I can support the Ellison Principle (PAY THE WRITER) by insisting on being paid for everything I write, online and offline: if writers were to form more of a common front on this matter, then we could alter the expectations and get online outlets to see paying for writing as the norm.

But magazines and websites have limited resources, so if every writer insisted on getting paid then there’d be far less new content for them to post and publish — and few of us would be happy with that. And in any case, writers would never be able to achieve a uniform common front: there will always be people, especially younger, less established writers, who believe in the “get your name out there” argument and will act accordingly.

And here’s another complication: since I do have a day job and am not trying to make a living by my writing, maybe if I don’t ask for financial compensation I can liberate money for people who really need it. Or would I just be tempting editors to publish less stuff by full-time writers because they can get free content from me?

I CAN’T FIGURE THIS OUT. Help me, people.

The economics of magic pills: Questions for Methuselists

In its 2003 report Beyond Therapy (discussed in a symposium in the Winter 2004 New Atlantis), the President’s Council on Bioethics concludes that “the more fundamental ethical questions about taking biotechnology ‘beyond therapy’ concern not equality of access, but the goodness or badness of the things being offered and the wisdom of pursuing our purposes by such means.” That is certainly right, and it is why this blog chiefly focuses on the deeper questions related to the human meaning of our technological aspirations. That said, the question of equality of access is still worth considering, not least because it is one of the few ethical questions considered legitimate by many transhumanists, and so it might provide some common ground for discussion.

In the New York Times, the economist Greg Mankiw, while discussing health care, offers a fascinating thought experiment that sheds some light on the issue of access:

Imagine that someone invented a pill even better than the one I take. Let’s call it the Dorian Gray pill, after the Oscar Wilde character. Every day that you take the Dorian Gray, you will not die, get sick, or even age. Absolutely guaranteed. The catch? A year’s supply costs $150,000.

Anyone who is able to afford this new treatment can live forever. Certainly, Bill Gates can afford it. Most likely, thousands of upper-income Americans would gladly shell out $150,000 a year for immortality.

Most Americans, however, would not be so lucky. Because the price of these new pills well exceeds average income, it would be impossible to provide them for everyone, even if all the economy’s resources were devoted to producing Dorian Gray tablets.

The standard transhumanist response to this problem is voiced by Ray Kurzweil in The Singularity Is Near: “Drugs are essentially an information technology, and we see the same doubling of price-performance each year as we do with other forms of information technology such as computers, communications, and DNA base-pair sequencing”; because of that exponential growth, “all of these technologies quickly become so inexpensive as to become almost free.”
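Kurzweil’s claim can be made concrete with a little arithmetic. A minimal sketch (my own illustration, not from the post or from Kurzweil — the $500-a-year “affordable” threshold is a hypothetical): if price-performance doubles each year, the effective cost of Mankiw’s $150,000 pill halves each year, and it takes only nine halvings to drop below $500.

```python
# Illustrative sketch (hypothetical numbers): yearly doubling of
# price-performance means yearly halving of cost. Count the years until
# Mankiw's $150,000/year pill falls below an assumed $500/year threshold.

def years_until_affordable(start_cost: float, threshold: float) -> int:
    """Whole years of cost-halving needed to drop below the threshold."""
    years = 0
    cost = start_cost
    while cost > threshold:
        cost /= 2  # price-performance doubles, so cost halves
        years += 1
    return years

print(years_until_affordable(150_000, 500))  # 9, since 150000 / 2**9 ≈ 293
```

On this (very optimistic) logic, immortality for the rich in year one becomes immortality for nearly everyone within a decade — which is exactly the picture the next two paragraphs complicate.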

Though my cell phone bill begs to differ, Kurzweil’s point may well be true. And yet if that were the whole picture, we might expect one of the defining trends of the past half-century to have been a steady decline in the cost of health care. Instead, as Mankiw notes:

These questions may seem the stuff of science fiction, but they are not so distant from those lurking in the background of today’s health care debate. Despite all the talk about waste and abuse in our health system (which no doubt exists to some degree), the main driver of increasing health care costs is advances in medical technology. The medical profession is always figuring out new ways to prolong and enhance life, and that is a good thing, but those new technologies do not come cheap. For each new treatment, we have to figure out if it is worth the price, and who is going to get it.

However quickly the costs for a given set of medical technologies fall, the rate at which expensive new technologies are developed grows even faster — as, more significantly, does our demand for them. In the case of medicine, what begins as a miraculous cure comes in time to be expected as routine, and eventually even to be considered a right (think of organ transplantation, for example). What Kurzweil and the like fail to grasp is that, absent some wise guiding principles about the purpose of our biotechnical power, as we gain more of it we paradoxically become less satisfied with it and only demand more still.

But if our biotechnical powers were to grow to the point that “defeat” of death truly seemed imminent, the demand for medicine would only grow with it. The advocates of radical life extension already believe death to be a tragedy that inflicts incalculable misery. That increased demand would only magnify the perceived injustice of death (why must my loved one die, when So-and-So, by surviving one year more, can live forever?), and could create such a sense of urgency that desperate measures — demeaning research, economy-endangering spending — would seem justified.

For believers in the technological convulsion of the Singularity, the question of access and distribution is even more pointed, since the gap between the powers of the post-Singularity “haves” and “have-nots” would dwarf present-day inequality — and the “haves” might well want to keep the upper hand. To paraphrase the Shadow, “Who knows what evil lurks in the hearts of posthumanity?”

(Hat tip: David Clift-Reaves via Marginal Revolution.)

[Photo credit: Flickr user e-magic]

Investing in the Singularity?

[Continuing coverage of the 2009 Singularity Summit in New York City.]

The last talk before the final break of the conference is a panel on venture capital, moderated by CNBC’s Robert Pisani and including Peter Thiel, David Rose, and Mark Gorenberg.

Thiel mentions that many companies take a very long time to become profitable. He says that the first five or six investors in FedEx lost money, but it was the seventh who made a lot. So, he says, he likes to invest in companies that expect to lose money for a long time. They tend to be undervalued.

[From left: Peter Thiel, Mark Gorenberg, David S. Rose, and moderator Bob Pisani]
The moderator asks how venture capitalists deal with the Singularity in making their decisions. One of the panelists responds that they’re all bullish about technology, echoing Thiel: if technology does not advance, they’re all screwed. But it sounds like he’s effectively saying that they keep it in mind without letting it really affect their investing. He doesn’t look farther out than ten years. Thiel says he does think there are some impacts — among other things, it’s a good time to invest in biotech. (“Yes!” says the woman next to me, in a duh voice.)

A questioner asks why none of the panelists have mentioned investing in A.I. The guy has a very annoyed tone, as he did when he asked a question in Thiel’s talk. Thiel doesn’t seem enthused.

[Photo: Peter Thiel]

But another panelist says yes, good, let’s invest more in high-tech companies! Rapturous applause.

Peter Thiel on the Singularity and economic growth

[Continuing coverage of the 2009 Singularity Summit in New York City.]

Peter Thiel is a billionaire, known for cofounding PayPal and for his early involvement in Facebook. He also may be the largest benefactor of the Singularity Summit and longevity-related research. His talk today is on “Macroeconomics and Singularity.” (Abstract and bio.)

Thiel begins by outlining common concerns about the Singularity, and then asks the members of this friendly audience to raise their hands to indicate which they are worried about:

1. Robots kill humans (Skynet scenario). Maybe 10% raise their hands.

2. Runaway biotech scenario. 30% raise hands.

3. The “gray goo scenario.” 5% raise hands.

4. War in the Middle East, augmented by new technology. 20% raise hands.

5. Totalitarian state using technology to oppress people. 15% raise hands.

6. Global warming. 10% raise hands. (Interesting divergence again between transhumanism and environmentalism.)

7. Singularity takes too long to happen. 30% raise hands — and there is much laughter and applause.

Thiel says that, although it is rarely talked about, perhaps the most dangerous scenario is that the Singularity takes too long to happen. He notes that several decades ago, people expected American real wages to skyrocket and the amount of time spent working to decrease. Americans were supposed to be rich and bored. (Indeed, Thiel doesn’t mention it, but the very first issue of The Public Interest, back in 1965, included essays that worried about this precise concern, under the heading “The Great Automation Question.”) But it didn’t happen — real wages have stayed the same since 1973, and Americans work many more hours per year than they used to.

Thiel says we should understand the recent economic problems not as a housing crisis or a credit crisis but rather as a technology crisis. All forms of credit involve claims on the future. Credit works, he says, if you have a background of growth — if everything grows every year, you won’t have a credit crisis. But a credit crisis means that those claims on the future can’t be met.

He says that if we want to keep society stable, we have to keep growing, or else we can’t support all of the projected growth that we’ve already leveraged. Global stability, he says, depends on a “Good Singularity.”

In essence, we have to keep growing because we’ve already bet on the promise that we’ll grow. (I tried this argument in a poker game once for why a pair of threes should trump a flush — I had already allocated my winnings from this game to pay next month’s rent! — but it didn’t take.)

Thiel’s talk is over halfway into his forty-minute slot. He is an engaging speaker with a fascinating thesis. The questioners are lining up quickly — far more of them than for any other speaker so far, including Kurzweil.

In response to the first question, about the current recession, Thiel predicts there will be no more bubbles in the next twenty years; either the economy will boom continuously or stay bust, but people are too aware now, and the cycle pattern has been broken. The next questioner asks about regulation and government involvement — should all this innovation happen in the private sector, or should the government fund it? Thiel says that the government isn’t anywhere near focused enough on science and technology right now, and he doesn’t think it has any role to play in innovation.

[Photo: Peter Thiel]

Another questioner asks about Francis Fukuyama’s book Our Posthuman Future, in which he argues that once we create superhumans, there will be a superhuman/human divide. (Fukuyama has also called transhumanism one of the greatest threats to the welfare of humanity.) Thiel says this is implausible — technology filters down, just as cell phones did. He says that it’s a non-argument and that Fukuyama is being hysterical, to rapturous applause from the audience.

After standing in line, holding my laptop in one hand and blogging with the other, I take my turn and ask Thiel about the limits of his projection: if we’re constantly leveraging against the future, what happens when growth reaches its limits? Will we hit some sort of catastrophic collapse? He says that we may reach some point in the future where we have, basically, a repeat of what happened over the last two years, when we couldn’t meet growth and had another collapse. So are there no limits to growth, I ask? He says that if we hit other road bumps we’ll just have to deal with them then. I try again, but the audience becomes restless and Thiel essentially repeats his point, so I sit down.

What I should have asked was: Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable, whether it happens now or later?