Who’s Afraid of ‘Brave New World’?

I was very happy to learn from George Dvorsky at io9 that Aldous Huxley’s novel Brave New World “is not the terrifying dystopia it used to be.” It’s not that the things in the novel couldn’t happen (more or less), but rather that they are happening and “we” have become much more enlightened and simply don’t need to worry about them anymore. The book is a product of its time, and our time understands these things much better, apparently.

Thus, the strongest condemnation Dvorsky can offer of the eugenics program depicted in Brave New World is that it is “disquieting.” But we can get over that. Genetic engineering techniques that might once have been met with “repugnance” are now commonplace. Newer techniques that promise enough control to make more of what happens in Brave New World possible will meet the same fate, Dvorsky expects, as he trots out the number-one cliché of progressive bioethics: “While potentially alarming, these biotechnologies and others currently in development hold great promise.” And on the basis of that “great promise” we merrily slide right down the slippery slope:

Advances in genetics will serve to eliminate a host of genetic diseases, while offering humans the opportunity to forgo the haphazard genetic roll of the dice when it comes to determining the traits of offspring. A strong case can be made that it’s both our duty and right to develop these technologies.

Problem solved!

The next non-problem Dvorsky sees in Brave New World is totalitarianism, which, along with “top-down” eugenics, he proclaims “dead.” Happy day! One might trust Dvorsky more here if he did not declare that even in Huxley’s own time the book conveyed a “false sense of urgency” on this topic. But we now know that biotechnology will be “tools made by the people, for the people.” A case in point, I suppose, would be the drug company that just raised the price of one of its pills by 5,000 percent.

And on and on. Concern about widespread use of psychoactive prescription and non-prescription drugs is, Dvorsky says, either “not entirely fair” or “hysterical.” On sex and the family, Huxley’s “prescience is remarkable” but his concerns are “grossly old fashioned and moralizing.” So too his Malthusian concerns are “grossly overstated,” particularly when population control (apparently it is necessary after all) can be achieved by “humanitarian methods.”

So there you have it. It seems that for the advocates of technological “progress” and human redesign, “don’t worry, be happy” has become a respectable line of argument. I know I feel much better now.

Civil Rights, Eugenics, and Why It’s “Being a Good Human” to Kill Your Daughters

As Adam very kindly described, I appeared on Al Jazeera’s The Stream last week to talk about transhumanism with George Dvorsky and Robin Hanson. (Thanks to both the producers and my interlocutors for an enjoyable chat.) I’d like to expand upon a subject I mentioned on the show. Back in January, Prof. Hanson expressed support on his “Overcoming Bias” blog for sex selection — that is, the selective abortion of fetuses because they are female. His reasoning was:

if male lives are more pleasant overall, it is good that we create more of them instead of female lives. Yes, supply and demand may eventually equalize the quality of male and female lives, but until then why not have moves [more] lives that are more pleasant?

I took the opportunity to ask Prof. Hanson about this on the air (my comments start around 14:45, and his response is at 16:30). Here is how he replied:

He’s right that that’s what I said, and I meant it. But we’re talking about individual private choice. We can think about parents choosing children, choosing high-IQ versus low-IQ children, choosing athletic versus less athletic children. I think it’s good if parents have the best interest of their children at heart, and choose children that they think will have better lives. I think that goes to the center of humanity; it goes to the center of being a good human — wanting the best for your children.

Reported Sex Ratios at Birth and Sex Ratios of the Population Age 0-4: China, 1953-2005 (boys per 100 girls)

Year    Sex Ratio at Birth    Sex Ratio, Age 0-4
1953    107.0                 —
1964    105.7                 —
1982    108.5                 107.1
1990    111.4                 110.2
1995    115.6                 118.4
1999    117.0                 119.5
2005    118.9                 122.7

This sounds sensible and compassionate for about half a second, until one realizes what it means: “having the best interest of your child at heart” means not allowing her to exist or killing her because she’s a girl. Tempting though it is, however, there are more clarifying ways to understand this issue than through the abortion debate — or through the trivial extension of Hanson’s logic to justify killing girls long after birth.

Commentators on sex selection have been right to talk about the issue as in part one of women’s rights, since this is almost entirely a phenomenon directed against girls, with some 160 million worldwide barred from life due to being female. Whether you consider these to be actual lives or potential lives lost, the fact is that these societies are deeming women less worthy than men by increasingly preventing them from even entering into this world. Not in the least coincidentally, this happens overwhelmingly in countries where women are considered inferior to men, where they often lack basic rights like voting, driving, and full ownership of property, and where not only women but girls are frequently forced into labor, marriage, and prostitution. If nothing else, Hanson is right that, in these countries, women’s lives are generally a lot less pleasant than men’s.

Differing approaches to social uplift

Consider for a moment: what direction would Hanson’s arguments have pushed us in had they been made during past struggles for equality and civil rights? Women had to struggle for rights here in the United States, too — to gain the right to vote, and then later to gain equality in the workplace and in the broader culture. Women’s lives could have been considered a lot less “pleasant” than men’s at these times, too.

Had Hanson and sex-selective technology been around at the time, his prescription would have been not to change laws, attitudes, and culture to bring a class of people out of oppression — but to just get rid of those people. This is exactly what Hanson is prescribing and celebrating in countries where women are abused and oppressed today.

One can imagine how Hanson’s prescription would have applied to still other civil rights struggles from America’s past. And not just in imagination: the idea that certain classes of people had lives that were less worth living — either based on race, or, just as in Hanson’s criteria, strength and intelligence — was in fact the rationale behind eugenics programs that sought to eliminate those lives. Other practices recently proposed and praised by transhumanists include infanticide, compulsory drugging of populations to make them more “moral,” and massive programs of engineering the human race to control its greenhouse gas emissions.

The path of moral progress we moderns tell ourselves we have been forging is toward a society of ever greater justice and equality, in which the individual cannot be denied her place by the prejudices of others, in which the weak are protected from the strong. 
Transhumanists, utilitarians, and self-anointed rationalists insist that they are dedicated to pushing us further down the path of enlightenment — toward “Overcoming Bias.” They insist that their dreams, when realized, will be a vehicle of moral progress and individual empowerment — the repudiation rather than the continuation of the twentieth century’s programs of social coercion. Isn’t it pretty to think so?

Arguing with Transhumanists

Yesterday, our co-blogger and New Atlantis senior editor Ari Schulman discussed transhumanism on The Stream, a social-media-based show on Al Jazeera English. Hosts Imran Garda and Malika Bilal did a good job of kicking off the discussion, and plenty of viewers commented and asked questions in real time via Twitter. Several video clips were interspersed throughout the show, including a snippet of Regan Brashear’s documentary Fixed, which we previously discussed here on Futurisms.

Ari debated two outspoken advocates of transhumanism*: Robin Hanson, a professor at George Mason University (whom we have frequently written about here), and George Dvorsky, a blogger and activist. If that sounds unfairly lopsided to you — two against one — well, it was unfairly lopsided: Ari clearly had the better of the conversation.

The conversation touched on many subjects, and there wasn’t time to deal with anything in great depth, but I’d like to highlight three items.

First, Ari pointed out on the show something that Hanson said recently — that “if male lives are more pleasant overall, it is good that we create more of them instead of female lives.” (Hanson wrote this in response to a New Atlantis article; we blogged about it here.) When confronted with his own words, Hanson didn’t retreat; he stood by those remarks. Today, one of Hanson’s blog readers took him to task: “You totally let yourself look like you’d support sexism…. You made us look bad and … I doubt you’ll have an opportunity to repair the damage your mistake caused.” I certainly agree that Hanson’s comments make transhumanism look bad — not because he misspoke or misrepresented his views, but because his forthright comments revealed the heartless calculation that underlies much transhumanist thinking.

Second, Dvorsky and Hanson both objected to one of Ari’s comments: that transhumanism shares with the twentieth century’s eugenics movement a deep dissatisfaction with human nature. 
When we sometimes make this comparison, transhumanists accuse us of smearing them — after all, who would want to be compared to a movement that was responsible for forced sterilizations and that inspired some of the worst Nazi atrocities? But Ari’s remarks were measured and careful, and the comparison is apt: both eugenics and transhumanism are rooted in a profound dissatisfaction with evolved human nature. That does not mean (as Dvorsky claimed) that we think that human nature as it now exists is perfect. To the contrary, we think that human beings are flawed, and some of us might even say fallen, creatures. But for this very reason, as Ari said, we are skeptical of grand schemes that promise or pursue perfection.

Dvorsky also bristled at the comparison to eugenics for another reason. He said that eugenics was a “top-down imposition,” wherein terrible decisions were made by “either the state or certain groups in power.” By contrast, Dvorsky said,

transhumanism is absolutely opposed to any of those ideas. In fact, it’s very much a hands-off type of a philosophy. If anything, it’s bottom-up, where we give the benefit of the doubt to individuals who are informed individuals, in conjunction with their doctors, their fertility clinics, and so on, who will make the decisions that are right for themselves. So everything from their reproductive rights, their morphological rights, and their cognitive rights as well.

But as Ari rightly noted on the show, not all transhumanist proposals pleasantly envision free, autonomous individuals pursuing the good as they see it. Julian Savulescu, for example, recently proposed that we should compel people to take behavior-altering drugs to make them more “moral” (as our colleague Brendan Foht mentioned here last month). And just because Dvorsky and some of his confreres think that the transhumanist future will be “hands-off” and “bottom-up” doesn’t mean that it actually will be. Who’s to say that we won’t see dictatorships of (or backed up by) Unfriendly AI? And even if somehow the transhumanist future were accomplished without obvious coercion, that doesn’t mean (as we have pointed out many times here on Futurisms) that “individuals who are informed individuals” would be free to abjure the enhancements that society is pressuring them to accept.

All in all, a fine television performance by Ari; anyone interested in hearing more such intelligent criticism of transhumanism should poke around here on Futurisms and read some of the articles we’ve linked to the right.

* To be clear, Hanson doesn’t consider himself a transhumanist, and during the program he said that he thinks “it’s somewhat premature to either advocate for or oppose these changes, because we don’t actually know very much about the context in which they’ll appear.” But since he is a vocal proponent of cryonics and he believes that many of the things that transhumanists embrace are at least plausible and in some cases desirable, I think it’s not unfair to put him on the transhumanist side of these debates.

UPDATE: See Ari’s follow-up on his exchange with Robin Hanson about sex selection.

The Master Stumpeth

[A few more posts about last weekend’s H+ Summit at Harvard.]

The last and keynote speaker of the 2010 H+ Summit was, of course, the big daddy of transhumanism, Ray Kurzweil (bio, on-the-fly transcript).

Kurzweil, batting clean-up

In introducing him, the organizers noted that he flew into town that morning from Colorado, where he was filming his movie, and that he would be zipping out from the conference right after his talk to catch a flight to Los Angeles. This little detail is pretty emblematic of the conference in general: whereas Kurzweil hovered around last year’s Singularity Summit and descended intermittently to comment upon it like the head priest issuing edicts to his votaries, here he attended none of the conference and just stopped by to deliver his stump speech and head back out.

Given that this is the main event, I should probably try to outline it in detail, but just as with his talk at SingSum, there was neither a core message to this talk nor anything remotely new about it. He hits all of his standard talking points. And I don’t just mean the same themes, but the very same details he lays out in The Singularity Is Near: the same graphs about Moore’s Law and about the exponential progress of technology in general and of various technologies in particular. The main reason for his being here, it seems, is his celebrity. Though he does have the shiniest slides of anyone here; his presentation is polished, if not new or focused.

In keeping with Kurzweil’s own unfocused approach, here are a few random notes about the talk and follow-up Q&A:

  • — I wasn’t the only one underwhelmed by Ray the K’s presentation. George Dvorsky, a conference presenter, tweeted about how it was all boilerplate. Tweeter Samuel H. Kenyon complained about the warm reception: “Seriously people, why does Kurzweil deserve a standing ovation but the other presenters don’t? Idol worshiping is not my bag.” The best tweet was a tweak, joking about Kurzweil’s obsession with exponential curves: “Why is this talk now not 5 minutes long and 1000 times as interesting as it was 5 years ago?”
  • — Here’s Kurzweil on human DNA: “We’re walking around with software — and this is not a metaphor, it’s very literal — that we’ve had for thousands or millions of years.” My jaw was on the floor. Literally, not metaphorically.
  • — Kurzweil is working on a book about reverse-engineering the brain, called How the Mind Works and How to Build One (see the Singularity Hub’s recent article on this). Someone alert Steven Pinker that he’s been one-upped. Also, this literal/metaphorical biological software business doesn’t bode well for the metaphysical clarity of this book.
  • — He makes an important admission, which is that there is no scientific test we can conceive of to determine whether an entity is conscious — and this means in particular that the Turing Test does not definitively demonstrate consciousness. His conclusion is that consciousness may continue to elude our philosophical understanding, and we should just set those questions aside and focus on what we can practically do.
  • — He claims that we are “not going to be transcending our humanity, we’re going to be transcending our biology.” Uh oh, they’re going to need to add a few items to the agenda for the next staff meeting:
    (1) Time to change the name to “transbiologism”? And “H+” to “B+”?
    (2) Figure out how in the world humanity is separate from its biology.
    (3) Come up with a plan to deal with some very put-out materialists.
  • — As part of the great transhumanist benevolence outreach, Kurzweil makes the bold claim that “old people are people too.” Of course, what this really means — aside from “if you at all question the wisdom of extreme longevity, then you hate old people” — is “we should turn our revulsion for getting old into pity for the elderly.” Somehow I don’t think respecting the dignity of the elderly as we do the young and able-bodied is really what he’s getting at here.

And that’s it for the last presentation of the 2010 H+ Summit. Stay tuned for a couple of wrap-up posts.

Day 2 at H+ Summit: George Dvorsky gets serious

The 2010 H+ Summit is back underway here at Harvard, running even later than yesterday. After the first couple of talks, the conference launches into a more philosophical block, which promises a break in the doldrums of most of these talks so far. First up in this block is George Dvorsky (bio, slides, on-the-fly transcript), who rightly notes that ethical considerations have largely gone unmentioned so far at this conference. And how. He also notes in a tweet that “The notion that ethicists are not needed at a conference on human enhancement is laughable.” Hear, hear.
Dvorsky’s presentation is primarily concerned with machine consciousness, and ensuring the rights of new sentient computational lifeforms. He’s not talking about robots, he says, like the ones we have today that are not sentient but are anthropomorphized to evoke our responses as if they were. (Again, see Caitrin Nicol in The New Atlantis on this subject.) Dvorsky posits that these robots have no moral worth. For example, he says, you may have seen this video before — footage of a robot that looks a bit like a dog and is subjected to some abuse:
Even though many people want to feel sorry for the robot when it gets kicked, Dvorsky says, they shouldn’t, because it has no moral worth. Only things with subjective awareness have moral worth. I’d agree that moral worth doesn’t inhere in such a robot. But as for subjective awareness as the benchmark, what about babies and the comatose, even the temporarily comatose? Do they have any moral worth? Also, it is not a simple matter to say that we shouldn’t feel sorry for the robot even if it doesn’t have moral worth. Isn’t it worth considering the effects on ourselves when we override our instincts and intuitions for empathy toward what seem to be other beings, however aptly directed those feelings may be? Is protecting the rights of others entirely a matter of our rational faculties?
Dvorsky continues by describing problems raised by advancing the moral rights of machines. One, he says, is human exceptionalism. (And here the notion of human dignity gets its first brief mention at the conference.) Dvorsky derides human exceptionalism as mere “substrate chauvinism” — the idea that you must be made of biological matter to have rights.
He proposes that conscious machines be granted the same rights as human beings. Among these rights, he says, should be the right not to be shut down, and to own and control their own source code. But how does this fit in with the idea of “substrate chauvinism”? I thought the idea was that substrate doesn’t matter. If it does matter — to the extent that these beings have special sorts of rights like owning their own source code that not only don’t apply but have no meaning for humans — doesn’t this mean that there is some moral difference for conscious machines that must be accounted for rather than scoffed off with the label “substrate chauvinism”?
George Dvorsky has a lot of work to do in resolving the incoherences in his approach to these questions. But he deserves credit for trying, and for offering the first serious, thoughtful talk at this conference. The organizers should have given far more emphasis and time to presenters like him. Who knows how many of the gaps in Dvorsky’s argument might have been filled if he had been given more than the ten-minute slot that they’re giving everybody else here with a project to plug.