Who Hacks the Planet?

Eli Kintisch’s 2010 book Hack the Planet explores the rise of geoengineering as a response to global warming: Since human beings are apparently unwilling to change their behavior in order to avoid unfortunate effects on the planet’s ecosystem, why not then change the way the planet responds to our behavior?

But the chief problem with hacking the planet is that you’d be hacking the planet, and, as Kintisch pointed out in a related article, it’s hard to envision ways of testing planet-hacks before employing them. You can’t really release sunlight-blocking aerosols in one unobtrusive corner of the atmosphere to see what they do. In the end, if such strategies are deployed — as David Keith of Harvard argues in a new book they must be — then someone is going to have to bite the bullet and attempt, on a huge scale, an endeavor whose results will be pretty unpredictable.

And as Kintisch notes in a brief review of Keith’s book, geoengineering could be the source of major international conflicts in the 21st century:

solar geoengineering could be a major geopolitical issue in the 21st century, akin to nuclear weapons during the 20th—and the politics could, if anything, be even trickier and less predictable. The reason is that compared with acquiring nuclear weapons, the technology is relatively easy to deploy. “Almost any nation could afford to alter the Earth’s climate,” Keith writes. That fact, he says, “may accelerate the shifting balance of global power, raising security concerns that could, in the worst case, lead to war.”


The potential sources of conflict are myriad. Who will control Earth’s thermostat? What if one country blames geoengineering for famine-inducing droughts or devastating hurricanes? No treaties ban climate engineering explicitly. And it’s not clear how such a treaty would operate. […]

Accepting the concept of the Anthropocene means accepting that humans have the responsibility to find technological fixes for disasters they have created. But little progress has been made toward a process for rationally supervising such activity on a global scale. We need a more open discussion about a seemingly outlandish but real geopolitical risk: war over climate engineering.

I think here of Robert Oppenheimer’s notorious line: “When you see something that is technically sweet, you go ahead and do it and argue about what to do about it only after you’ve had your technical success.” To a lot of scientists planet-hacking looks technically very sweet indeed, and no doubt they’ll be able to find politicians to agree with them. But which country will be the quickest to release its geoengineers to do their thing? Being first-to-market in planet-hacking may not be a good thing — for those of us who’re getting hacked.

There Is No ‘Undo’ Button for the Singularity

As a matter of clearing up the record, I’d like to note a recent post by Michael Anissimov in which he points out that his blog’s server is still infested with malware. The post concludes:

I don’t know jack about viruses or how they come about. I suppose The New Atlantis will next be using that as evidence that a Singularity will never happen. Oh wait — they already did.

[UPDATE: Mr. Anissimov edited the post without noting it several times, including removing this snarky comment, and apparently, within the last hour or two, deleting the post entirely; see below.]

Mr. Anissimov is referring to two posts of mine, “Transhumanist Tech Failures” and “The Disinformation Campaign of Transhumanist ‘Caution’.” But even a passing glance at either of these posts will show that I never used this incident as evidence that the Singularity will never happen. Instead, it should be clear that I used it, rather opportunistically, to point out the embarrassing fact that the hacking of his site ironically reveals the deep foolhardiness of Mr. Anissimov’s aspirations. Shameless, I know.

It’s not of mere passing significance that Mr. Anissimov admits here that he “[doesn’t] know jack about viruses or how they come about”! You would think someone who is trying to make his name on being the “responsible” transhumanist, the one who stresses the need to make sure AI is “friendly” instead of “unfriendly,” would realize that, if ever there comes into existence such a thing as unfriendly AI — particularly AI intentionally designed to be malicious — computer viruses will have been its primordial ancestor, or at least its forerunner. You would also think he would be not just interested in but actually in possession of a deep and growing knowledge of the practical aspects of artificial intelligence and computer security, those subjects whose mastery is meant to be so vital to our future.

I know we Futurisms guys are supposedly Luddites, but (although I prefer to avoid trotting this out) I did in fact graduate from a reputable academic computer science program, in which I studied AI, computer security, and software verification. Anyone who properly understands even the basics of the technical side of these subjects would laugh at the notion of creating highly complex software that is guaranteed to behave in any particular way, much less a way as sophisticated as being “friendly.” This is why we haven’t figured out how to definitively eradicate incomparably simpler problems — like, for example, ridding servers running simple blogs of malware.

The thing is, it’s perfectly fine for Mr. Anissimov or anyone else who is excited by technology not to really know how the technology works. The problem comes in their utter lack of humility — their total failure to recognize that, when one begins to tackle immensely complex “engineering problems” like the human mind, the human body, or the Earth’s biosphere, little errors and tweaks — gaps in your knowledge that you weren’t even aware of — can translate into chaos and catastrophe when they are actually applied. Reversing an ill-advised alteration to the atmosphere or the human body or anything else isn’t as easy as deleting content from a blog. It’s true that Mr. Anissimov regularly points out the need to act with caution, but that makes it all the more reprehensible that he seems so totally disinclined to actually so act.

—

Speaking of deleting content from a blog: there was for a while a comment on Mr. Anissimov’s post critical of his swipe at us, and supportive of our approach if not our ideas. But he deleted it (as well as another comment referring to it). He later deleted his own jab at our blog. And sometime in the last hour or two, he deleted the post entirely. All of these changes were made without any note of them, as if he hopes his bad ideas can just slide down the memory hole.

We can only assume that he has seen the error of his ways, and now wants to elevate the debate and stick to fair characterizations of the things we are saying. That’s welcome news, if it’s true. But, to put it mildly, silent censorship is a fraught way to conduct debate. So, for the sake of posterity, we have preserved his post here exactly as it appeared before the changes and its eventual deletion. (You can verify this version for yourself in Yahoo’s cache until it updates.)

—

A final point of clarification: We here on Futurisms are actually divided on the question of whether the Singularity will happen. I think it’s fair to say that Adam finds many of the broad predictions of transhumanism basically implausible, while Charlie finds many of them, and I find a lot of them, at least theoretically possible in some form or another.

But one thing we all agree on is that the Singularity is not inevitable — that, in the words of the late computer science professor and artificial intelligence pioneer Joseph Weizenbaum, “The myth of technological and political and social inevitability is a powerful tranquilizer of the conscience. Its service is to remove responsibility from the shoulders of everyone who truly believes in it.”

Rather, the future is always a matter of human choices; and the point of this blog is that we think the choice to bring about the Singularity would be a pretty bad one. Why? We’ve discussed that at some length, and we will go on doing so. But a central reason has to be practical: if we can’t keep malware off of a blog, how can we possibly expect to maintain the control we want when our minds, and every aspect of our society, are so subject to the illusion of technical mastery?

With that in mind, we have much, much more planned to say in the days, weeks, and months ahead, and we look forward to getting back to a schedule of more frequent posting now that we’re clearing a few major deadlines off our plates.

The Disinformation Campaign of Transhumanist “Caution”

In my last post on ironic transhumanist tech failures, there was one great example I forgot to mention. If you subscribe to the RSS feed for the IEET blog, you may have noticed that most of their posts go up on the feed multiple times: my best guess is that, due to careless coding in their system (or a bad design idea that was never corrected), a post goes up as new on the feed every time it’s even modified. For example, here’s what the feed’s list of posts from early March looks like:
Ouch — kind of embarrassing. Every project has technical difficulties, of course, but — well, here’s another example:
Question: can we develop and test machine minds and uploads ethically? Well, one way to get at that question is to ask what it might say about technical fallibility when such a prominent transhumanist advocacy organization has not yet figured out how to eliminate inadvertent duplicates on its RSS feed, and how such an error might play out when, say, uploading a mind, where the technical challenges are a bit more substantial, and the consequences of accidentally creating copies a bit more tricky.
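My guess above about the cause is just that — a guess — but the failure mode is easy to sketch. Here is a minimal, hypothetical illustration (not IEET’s actual code): if a feed generator or aggregator identifies an entry by its modification timestamp rather than by a stable GUID, then every edit to a post surfaces as a “new” item, whereas keying on the GUID alone makes an edit update the existing entry instead of duplicating it.

```python
# Hypothetical sketch (not IEET's actual system): keying feed entries on
# modification time, instead of a stable GUID, makes every edit look new.

def items_seen_buggy(history):
    """Identify entries by (guid, modified): each edit appears as a new item."""
    seen, items = set(), []
    for guid, modified in history:
        key = (guid, modified)              # unstable identity
        if key not in seen:
            seen.add(key)
            items.append((guid, modified))
    return items

def items_seen_fixed(history):
    """Identify entries by GUID alone: a later edit replaces the old item."""
    latest = {}
    for guid, modified in history:
        latest[guid] = modified             # edits overwrite, not duplicate
    return sorted(latest.items())

# One post, published once and then edited twice:
history = [
    ("ieet-post-42", "2010-03-01"),
    ("ieet-post-42", "2010-03-02"),
    ("ieet-post-42", "2010-03-05"),
]

print(len(items_seen_buggy(history)))  # the buggy feed shows 3 copies
print(len(items_seen_fixed(history)))  # the fixed feed shows 1
```

This is, incidentally, exactly the design point the RSS and Atom specifications address: the `<guid>` (RSS) and `<id>` (Atom) elements exist precisely so that a reader can tell an update from a genuinely new entry.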
Don’t get me wrong — we all know that the IEET is all about being Very Serious and Handling the Future Responsibly. I mean, look, they’re taking the proper precaution of thinking through the ethics of mind uploading long before that’s even possible! Let’s have a look at that post:

Sometimes people complain that they “did not ask to be born.” Yet, nobody has an ethical right to decide whether or not to be born, as that would be temporally illogical. The solution to this conundrum is for someone else to consent on behalf of the newborn, whether this is done implicitly via biological parenting, or explicitly via an ethics committee.

Probably the most famous example of the “complaint” Ms. Rothblatt alludes to comes from Kurt Vonnegut’s final novel, Timequake, in which he depicts Hitler uttering the words, “I never asked to be born in the first place,” before shooting himself in the head. It doesn’t seem that either fictional-Hitler’s or real-Vonnegut’s complaint was answered satisfactorily by their parents’ “implicit biological consent” to their existence. And somehow it’s hard to imagine that either man would have been satisfied if an ethics committee had rendered the judgment instead.
Could Vonnegut (through Hitler) be showing us something too dark to see by looking directly in its face? Might these be questions for which we are rightly unable to offer easy answers? Is it possible that those crutches of liberal bioethics, autonomy and consent, are woefully inadequate to bear the weight of such fundamental questions? (Might it be absurd, for example, to think that one can write a loophole to the “temporal illogicality” of consenting to one’s own existence by forming a committee?) Apparently not: Rothblatt concludes that “I think practically speaking the benefits of having a mindclone will be so enticing that any ethical dilemma will find a resolution” and “Ultimately … the seeming catch-22 of how does a consciousness consent to its own creation can be solved.” Problem solved!
In a similar vein, in response to my shameless opportunism in my last post in pointing out the pesky ways that technical reality undermines technological fantasy, Michael Anissimov commented:

In my writings, I always stress that technology fails, and that there are great risks ahead as a result of that. Only transhumanism calls attention to the riskiest technologies whose failure could even mean our extinction.

True enough. Of course, only transhumanism so gleefully advocates the technologies that could mean our extinction in the first place… but it’s cool: after his site got infested by malware for a few days, Anissimov got Very Serious, decided to Think About the Future Responsibly, and, in a post called “Security is Paramount,” figured things out:

For billions of years on this planet, there were no rules. In many places there still are not. A wolf can dine on the entrails of a living doe he has brought down, and no one can stop him. In some species, rape is a more common variety of impregnation than consensual sex…. This modern era, with its relative orderliness and safety, at least in the West, is an aberration. A bizarre phenomenon, rarely before witnessed in our solar system since its creation…. Reflecting back on this century, if we survive, we will care less about the fun we had, and more about the things we did to ensure that the most important transition in history went well for the weaker ambient entities involved in it. The last century didn’t go too well for the weak — just ask the victims of Hitler and Stalin. Hitler and Stalin were just men, goofballs and amateurs in comparison to the new forms of intelligence, charisma, and insight that cognitive technologies will enable.

Hey, did you know nature is bad and people can be pretty bad too? Getting your blog knocked offline for a few days can inspire some pretty cosmic navel-gazing. (As for the last part, though, it shouldn’t be a worry, as Hitler+ and Stalin+ will have had ethics committees who consented to their existences, and all their existential issues thereby solved.)
The funny thing about apocalyptic warnings like Anissimov’s is that they don’t seem to do a whit to slow down transhumanists’ enthusiasm for new technologies. Notably, despite his Serious warnings, Anissimov doesn’t even consider the possibility that the whole project might be ill-conceived. In fact, despite implicitly setting himself outside and above them, Anissimov is really one of the transhumanists he describes in the same post, who “see nanotechnology, life extension, and AI as a form of candy, and reach for them longingly.” This is because, for all his lofty rhetoric of caution, he is still fundamentally credulous when it comes to the promise of transformative new technologies.
Take geoengineering: in Anissimov’s first post on the subject, he cheered the idea of intentionally warming the globe for certain ostensible benefits. Shortly thereafter, he deleted the post “because of substantial uncertainty on the transaction costs and the possibility of catastrophic global warming through methane clathrate release.” It took someone pointing to a specific, known vector of possible disaster for him to reconsider; otherwise, only a few minutes’ thought given to what would be the most massive engineering project in human history was sufficient to declare it just dandy.
Of course, in real life, unlike in blogging, you can’t just delete your mistakes — say, releasing huge amounts of chemicals into the atmosphere that turn out to be harmful (as we’re learning today when it comes to carbon emissions). Nor did it occur to Anissimov that the one area on which he will readily admit concern about the potential downsides of future technologies — security — might also be an issue when it comes to granting the power to intentionally alter the earth’s climate to whoever has the means (whether they’re “friendly” or not).
One could go on at great length about the unanticipated consequences of transhumanism-friendly technologies, or the unseriousness of most pro-transhumanist ethical inquiries into those technologies. These points are obvious enough.
What is more difficult to see is that Michael Anissimov, Martine Rothblatt, and all of the other writers who proclaim themselves the “serious,” “responsible,” and “precautious” wing of the transhumanist party — including Eliezer Yudkowsky, Nick Bostrom, and Ray Kurzweil, among others — in fact function as a sort of disinformation campaign on behalf of transhumanists. They toss out facile work that calls itself serious and responsible, capable of grasping and dealing with the challenges ahead, when it could hardly be any less so — but all that matters is that someone says they’re doing it.
Point out to a transhumanist that they are as a rule uninterested in deeply and seriously engaging with the ramifications of the technologies they propose, or suggest that the whole project is more unfathomably reckless than any ever conceived, and they can say, “but look, we are thinking about it, we’re paying our dues to caution — don’t worry, we’ve got people on it!” And with their consciences salved, they can go comfortably back to salivating over the future.

Geoengineering: Falling with Style

Brandon Keim at Wired has a short piece and a gallery called “6 Ways We’re Already Geoengineering Earth,” related to the new conference on geoengineering being held at Asilomar:

Scientists and policymakers are meeting this week to discuss whether geoengineering to fight climate change can be safe in the future, but make no mistake about it: We’re already geoengineering Earth on a massive scale.
From diverting a third of Earth’s available fresh water to planting and grazing two-fifths of its land surface, humankind has fiddled with the knobs of the Holocene, that 10,000-year period of climate stability that birthed civilization.
The point that humans are altering geophysical processes on a planetary scale is almost inarguable. But while this alteration is an aggregate effect of human engineering, it is not in any sense geoengineering. Geoengineering is the intentional alteration of geophysical processes on a planetary scale, while anthropogenic environmental change as it exists now occurs without such intent (either through ignorance or indifference).
Mr. Keim probably had no hidden agenda himself, but the attempt to blur a distinction of intent into a difference of degree is a common transhumanist move, and a seductively fallacious one. In the case of climate change, it can lead to advocacy for what amounts to fighting fire with fire. As I’ve argued before, the lesson we ought to learn from global warming is that humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them.
Just like a project to remake man, a project to remake the planet would have to be so far advanced beyond today’s technology as to overcome what is, at least for now, the truth of this lesson — and it will not get there by treating the project as essentially more of the same of what humankind has already done to the planet.

Transhuman Ambitions and the Lesson of Global Warming

Anyone who believes in the science of man-made global warming must admit the important lesson it reveals: humans can easily alter complex systems not of their own cohesive design but cannot easily predict or control them. Let’s call this (just for kicks) the Malcolm Principle. Our knowledge is little but our power is great, and so we must wield it with caution. Much of the continued denial of a human cause for global warming — beyond the skepticism merited by science — is due to a refusal to accept the truth of this principle and the responsibility it entails.

Lake Hamoun, 1976-2001,
courtesy UNEP

And yet a similar rejection of the Malcolm Principle is evident even among some of those who accept man’s role in causing global warming. This can be seen in the great overconfidence of climate scientists in their ability to understand and predict the climate. But it is far more evident in the emerging support for “geoengineering” — the notion that not only can we accurately predict the climate, but we can engineer it with sufficient control and precision to reverse warming.

It is unsurprising to find transhumanist support for geoengineering. Some advocates even support geoengineering to increase global warming — for instance, Tim Tyler advocates intentionally warming the planet to produce various allegedly beneficial effects. Here the hubris of rejecting the Malcolm Principle is taken to its logical conclusion: Once we start fiddling with the climate intentionally, why not subject it to the whims of whatever we now think might best suit our purposes? Call it transenvironmentalism.
In fact, name any of the most complex systems you can think of that were not created from the start as engineering projects, and there is likely to be a similar transhumanist argument for making it one. For example:
  • The climate, as noted, and thus implicitly also the environment, ecosystem, etc.
  • The animal kingdom, see e.g. our recent lengthy discussion on ending predation.
  • The human nutritional system, see e.g. Kurzweil.
  • The human body, a definitional tenet for transhumanists.
  • The human mind, similarly.
Transhumanist blogger Michael Anissimov (who earlier argued in favor of reengineering the animal kingdom) initially voiced support for intentional global warming, but later deleted the post. He defended his initial support with reference to Singularitarian Eliezer Yudkowsky’s “virtues of rationality,” particularly that of “lightness,” which Yudkowsky defines as: “Let the winds of evidence blow you about as though you are a leaf, with no direction of your own.” Yudkowsky’s list also acknowledges potential limits of rationality implicit in its virtues of “simplicity” and “humility”: “A chain of a thousand links will arrive at a correct conclusion if every step is correct, but if one step is wrong it may carry you anywhere,” and the humble are “Those who most skillfully prepare for the deepest and most catastrophic errors in their own beliefs and plans.” Yet in addition to the “leaf in the wind” virtue, the list also contains “relinquishment”: “Do not flinch from experiences that might destroy your beliefs.”
Putting aside the Gödelian contradiction inherent even in “relinquishment” alone (if one should not hesitate to relinquish one’s beliefs, then one should also not hesitate to relinquish one’s belief in relinquishment), it doesn’t seem that one can coherently exercise all of these virtues at once. We live our lives interacting with systems too complex for us to ever fully comprehend, systems that have come into near-equilibrium as the result of thousands or billions of years of evolution. To take “lightness” and “relinquishment” as guides for action is not simply to be rationally open-minded; rather, it is to choose to reflexively reject the wisdom and stability inherent in that evolution, preferring instead the instability of Yudkowsky’s “leaf in the wind” and the brash belief that what we look at most eagerly now is all there is to see.
Imagine if, in accordance with “lightness” and “relinquishment,” we had undertaken a transhumanist project in the 19th century to reshape human heads based on the fad of phrenology, or a transenvironmentalist project in the 1970s to release massive amounts of carbon dioxide on the hypothesis of global cooling. Such proposals for systemic engineering would have been foolish not merely because of their basis in particular mistaken ideas, but because they would have proceeded on the pretense of comprehensively understanding systems they in fact could barely fathom. The gaps in our understanding mean that mistaken ideas are inevitable. But the inherent opacity of complex systems still eludes those who make similar proposals today: Anissimov, even in acknowledging the global-warming project’s irresponsibility, still cites but a single knowable mechanism of failure (“catastrophic global warming through methane clathrate release”), as if the essential impediment to the plan will be cleared as soon as some antidote to methane clathrate release is devised.
Other transhumanist evaluations of risk similarly focus on what transhumanism is best able to see — namely threats to existence and security, particularly those associated with its own potential creations — which is fine except that this doesn’t make everything else go away. There are numerous “catastrophic errors” wrought already by our failures to act with simplicity and humility — such as our failure to anticipate that technological change might have systemic consequences, as in the climate, environment, and ecosystem; and our tremendous and now clearly exaggerated confidence in rationalist powers exercised directly at the systemic level, as evident in the current financial crisis (see Paul Cella), in food and nutrition (see Michael Pollan and John Schwenkler), and in politics and culture (see Alasdair MacIntyre among many others), just for starters. But among transhumanists there is little serious contemplation of the implications of these errors for their project. (As usual, commenters, please provide me with any counterexamples.)
Perhaps Yudkowsky’s “virtues of rationality” are not themselves to be taken as guides to action. But transhumanism aspires to action — indeed, to revolution. To recognize the consequences of hubris and overreach is not to reject reason in favor of simpleminded tradition or arbitrary givenness, but rather to recognize that there might be purpose and perhaps even unspoken wisdom inherent in existing stable arrangements — and so to acknowledge the danger and instability inherent in the particular hyper-rationalist project to which transhumanists are committed.