designer deaths

Death is such a bummer, but you know, that’s just a design problem:

Bennett’s fixation on death began with the death of his father. He was close to his dad; in a recent talk, he likened his childhood to the plot of Billy Elliot, a story “about a little nelly gay boy who twirled in the northeast of England” and the exceedingly masculine father who dared to love him. Bennett, in fact, traces his identity as a designer to the day in 1974 when his father, Jim, a former military pilot, brought home The Golden Hands Encyclopedia of Crafts. Jim Bennett then spent the next two years sitting with his son, making macramé and knitting God’s eyes, so that sensitive little kid could explore his talent and find his confidence. In 2001, Bennett’s father wound up in a hospital bed, stricken with bone cancer. Bennett was 5,000 miles away at home in San Francisco. He told his father he’d be on the next flight, but Jim ordered him not to come. Eventually, Bennett understood why. His father had painstakingly maintained his dignity his entire life. Now “he was trying to somehow control that experience,” Bennett says. “He was designing the last granule of what he had left: his death.”

In 2013, Bennett started sharing his ideas with the other partners at Ideo, selling them on death as an overlooked area of the culture where the firm could make an impact. He had a very unspecific, simple goal: “I don’t want death to be such a downer,” he told me. And he was undaunted by all the dourness humanity has built up around the experience over the last 200,000 years. “It’s just another design challenge,” he said. His ambition bordered on hubris, but it generally felt too childlike, too obliviously joyful, to be unlikable. One time I heard him complain that death wasn’t “alive and sunny.”

T. S. Eliot, 1944:

I have suggested that the cultural health of Europe, including the cultural health of its component parts, is incompatible with extreme forms of both nationalism and internationalism. But the cause of that disease, which destroys the very soil in which culture has its roots, is not so much extreme ideas, and the fanaticism which they stimulate, as the relentless pressure of modern industrialism, setting the problems which the extreme ideas attempt to solve. Not least of the effects of industrialism is that we become mechanized in mind, and consequently attempt to provide solutions in terms of engineering, for problems which are essentially problems of life.

Ah, don’t be such a downer, Possum! Everybody, come on, sing along!

the keys to society and their rightful custodians

Recently Quentin Hardy, the outstanding technology writer for the New York Times, tweeted this:

If you follow the embedded link you’ll see that Head argues that algorithm-based technologies are, in many workplaces, denying to humans the powers of judgment and discernment:

I have a friend who works in physical rehabilitation at a clinic on Park Avenue. She feels that she needs a minimum of one hour to work with a patient. Recently she was sued for $200,000 by a health insurer, because her feelings exceeded their insurance algorithm. She was taking too long.

The classroom has become a place of scientific management, so that we’ve baked the expertise of one expert across many classrooms. Teachers need a particular view. In core services like finance, personnel or education, the variation of cases is so great that you have to allow people individual judgment. My friend can’t use her skills.

To Hardy’s tweet Marc Andreessen, co-creator of the early web browser Mosaic and co-founder of Netscape, replied,

Before I comment on that response, I want to look at another story that came across my Twitter feed about five minutes later, an extremely thoughtful reflection by Brendan Keogh on “games evangelists and naysayers”. Keogh is responding to a blog post by noted games evangelist Jane McGonigal encouraging all her readers to find people who have suffered some kind of trauma and get them to play a pattern-matching video game, like Tetris, as soon as possible after their trauma. And why wouldn’t you do this? Don’t you want to “HELP PREVENT PTSD RIGHT NOW”?

Keogh comments,

McGonigal … wants a #Kony2012-esque social media campaign to get 100,000 people to read her blog post. She thinks it irresponsible to sit around and wait for definitive results. She even goes so far as to label those that voice valid concerns about the project as “games naysayers” and compares them to climate change deniers.

The project is an unethical way to both present findings and to gather research data. Further, it trivialises the realities of PTSD. McGonigal runs with the study’s wording of Tetris as a potential “vaccine”. But you wouldn’t take a potential vaccine for any disease and distribute it to everyone after a single clinical trial. Why should PTSD be treated with any less seriousness? Responding to a comment on the post questioning the approach, McGonigal cites her own suffering of flashbacks and nightmares after a traumatic experience to demonstrate her good intentions (intentions which I do not doubt for a moment that she has). Yet, she wants everyone to try this because it might work. She doesn’t stop to think that one test on forty people in a controlled environment is not enough to rule out that sticking Tetris or Candy Crush Saga under the nose of someone who has just had a traumatic experience could potentially be harmful for some people (especially considering Candy Crush Saga is not even mentioned in the study itself!).

Further, and crucially, in her desire to implement this project in the real world, she makes no attempt to compare or contrast this method of battling PTSD with existing methods. It doesn’t matter. The point is that it proves games can be used for good.

If we put McGonigal’s blog post together with Andreessen’s tweet we can see the outlines of a very common line of thought in the tech world today:

1) We really earnestly want to save the world;

2) Technology — more specifically, digital technology, the technology we make — can save the world;

3) Therefore, everyone should eagerly turn over to us the keys to society;

4) Anyone who doesn’t want to turn over those keys to us either doesn’t care about saving the world, or hates every technology of the past 5000 years and just wants to go back to writing on animal skins in his yurt, or both;

5) But it doesn’t matter, because resistance is futile. If anyone expresses reservations about your plan, you can just smile condescendingly and pat him on the head — “Isn’t that cute?” — because you know you’re going to own the world before too long.

And if anything happens to go astray, you can just join Peter Thiel on his libertarian-tech-floating-earthly-Paradise.

Seasteading

Enjoy your yurts, chumps.

Geoengineering: Falling with style

Brandon Keim at Wired has a short piece and a gallery called “6 Ways We’re Already Geoengineering Earth,” related to the new conference on geoengineering being held at Asilomar:

Scientists and policymakers are meeting this week to discuss whether geoengineering to fight climate change can be safe in the future, but make no mistake about it: We’re already geoengineering Earth on a massive scale.

From diverting a third of Earth’s available fresh water to planting and grazing two-fifths of its land surface, humankind has fiddled with the knobs of the Holocene, that 10,000-year period of climate stability that birthed civilization.

The point that humans are altering geophysical processes on a planetary scale is almost inarguable. But while this alteration is an aggregate effect of human engineering, it is not in any sense geoengineering. Geoengineering is the intentional alteration of geophysical processes on a planetary scale, while anthropogenic environmental change as it exists now occurs without such intent (whether through ignorance or indifference).

Mr. Keim probably had no hidden agenda himself, but the attempt to blur a distinction of intent into a difference of degree is a common transhumanist move, and a seductively fallacious one. In the case of climate change, it can lead to advocacy for what amounts to fighting fire with fire. As I’ve argued before, the lesson we ought to learn from global warming is that humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them.

Just like a project to remake man, a project to remake the planet would have to be so far advanced beyond today’s technology as to overcome what is, at least for now, the truth of this lesson — but it will not do so by treating the project as essentially more of the same of what humankind has already done to the planet.

Transhuman Ambitions and the Lesson of Global Warming

Anyone who believes in the science of man-made global warming must admit the important lesson it reveals: humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them. Let’s call this (just for kicks) the Malcolm Principle. Our knowledge is little but our power is great, and so we must wield it with caution. Much of the continued denial of a human cause for global warming — beyond the skepticism merited by science — is due to a refusal to accept the truth of this principle and the responsibility it entails.


Lake Hamoun, 1976-2001 (courtesy UNEP)

And yet a similar rejection of the Malcolm Principle is evident even among some of those who accept man’s role in causing global warming. This can be seen in the great overconfidence of climate scientists in their ability to understand and predict the climate. But it is far more evident in the emerging support for “geoengineering” — the notion that not only can we accurately predict the climate, but we can engineer it with sufficient control and precision to reverse warming.

It is unsurprising to find transhumanist support for geoengineering. Some advocates even support geoengineering to increase global warming — for instance, Tim Tyler advocates intentionally warming the planet to produce various allegedly beneficial effects. Here the hubris of rejecting the Malcolm Principle is taken to its logical conclusion: Once we start fiddling with the climate intentionally, why not subject it to the whims of whatever we now think might best suit our purposes? Call it transenvironmentalism.

In fact, name any of the most complex systems you can think of that were not created from the start as engineering projects, and there is likely to be a similar transhumanist argument for making it one. For example:

  • The climate, as noted, and thus implicitly also the environment, ecosystem, etc.
  • The animal kingdom, see e.g. our recent lengthy discussion on ending predation.
  • The human nutritional system, see e.g. Kurzweil.
  • The human body, a definitional tenet for transhumanists.
  • The human mind, similarly.

Transhumanist blogger Michael Anissimov (who earlier argued in favor of reengineering the animal kingdom) initially voiced support for intentional global warming, but later deleted the post. He defended his initial support with reference to Singularitarian Eliezer Yudkowsky’s “virtues of rationality,” particularly that of “lightness,” which Yudkowsky defines as: “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.” Yudkowsky’s list also acknowledges potential limits of rationality implicit in its virtues of “simplicity” and “humility”: “A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere,” and the humble are “Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.” Yet in addition to the “leaf in the wind” virtue, the list also contains “relinquishment”: “Do not flinch from experiences that might destroy your beliefs.”

Putting aside the Gödelian contradiction inherent even in “relinquishment” alone (if one should not hesitate to relinquish one’s beliefs, then one should also not hesitate to relinquish one’s belief in relinquishment), it doesn’t seem that one can coherently exercise all of these virtues at once. We live our lives interacting with systems too complex for us to ever fully comprehend, systems that have come into near-equilibrium as the result of thousands or billions of years of evolution. To take “lightness” and “relinquishment” as guides for action is not simply to be rationally open-minded; rather, it is to choose to reflexively reject the wisdom and stability inherent in that evolution, preferring instead the instability of Yudkowsky’s “leaf in the wind” and the brash belief that what we look at most eagerly now is all there is to see.

Imagine if, in accordance with “lightness” and “relinquishment,” we had undertaken a transhumanist project in the 19th century to reshape human heads based on the fad of phrenology, or a transenvironmentalist project in the 1970s to release massive amounts of carbon dioxide on the hypothesis of global cooling. Such proposals for systemic engineering would have been foolish not merely because of their basis in particular mistaken ideas, but because they would have proceeded on the pretense of comprehensively understanding systems they in fact could barely fathom. The gaps in our understanding mean that mistaken ideas are inevitable. But the inherent opacity of complex systems still eludes those who make similar proposals today: Anissimov, even in acknowledging the global-warming project’s irresponsibility, still cites but a single knowable mechanism of failure (“catastrophic global warming through methane clathrate release”), as if the essential impediment to the plan will be cleared as soon as some antidote to methane clathrate release is devised.

Other transhumanist evaluations of risk similarly focus on what transhumanism is best able to see — namely threats to existence and security, particularly those associated with its own potential creations — which is fine except that this doesn’t make everything else go away. There are numerous “catastrophic errors” wrought already by our failures to act with simplicity and humility — such as our failure to anticipate that technological change might have systemic consequences, as in the climate, environment, and ecosystem; and our tremendous and now clearly exaggerated confidence in rationalist powers exercised directly at the systemic level, as evident in the current financial crisis (see Paul Cella), in food and nutrition (see Michael Pollan and John Schwenkler), and in politics and culture (see Alasdair MacIntyre among many others), just for starters. But among transhumanists there is little serious contemplation of the implications of these errors for their project. (As usual, commenters, please provide me with any counterexamples.)

Perhaps Yudkowsky’s “virtues of rationality” are not themselves to be taken as guides to action. But transhumanism aspires to action — indeed, to revolution. To recognize the consequences of hubris and overreach is not to reject reason in favor of simpleminded tradition or arbitrary givenness, but rather to recognize that there might be purpose and perhaps even unspoken wisdom inherent in existing stable arrangements — and so to acknowledge the danger and instability inherent in the particular hyper-rationalist project to which transhumanists are committed.