Singularity Summit videos

Videos of the talks from the 2009 Singularity Summit, which we covered extensively on this blog, are now available here. A few videos are still missing, but most of them are up.

The best videos (IMHO, as the kids say) are:
  • David Chalmers on principles of simulation and the Singularity (video / post)
  • Peter Thiel making the economic case for the Singularity (video / post)
  • And the discussion with Stephen Wolfram on the Singularity at the cosmic scale (video / post)
Also worthwhile, revealing, or at least entertaining:
  • Brad Templeton’s talk was one of the most entertaining, ambitious, and plausible; the audience question segment was also particularly good (video / post)
  • Juergen Schmidhuber’s talk on digitizing creativity was lively and engaging, if silly (video / post)
  • The segment of Michael Nielsen’s talk where he describes the principles of quantum computing (video / post)

The Crisis of Everyday Life

Over at The Speculist, Phil Bowermaster has fired a volley across our bow. His post contains a few misrepresentations of The New Atlantis and our contributors. However, we think our body of work speaks for itself, and so rather than focusing on Mr. Bowermaster’s sarcastic remarks, I’d like to comment on the larger substantive point in his post. In covering a talk at the Singularity Summit last weekend, I wrote the following:

[David] Rose says the FDA is regulating health, but he says “everyone in this room is going to hell in a handbasket, not because of one or two genetic diseases,” but because we’re getting uniformly worse through aging. And that, he says, is what they’re trying to stop. Scattered but voracious applause and cheering. It’s that same phenomenon again — this weird rally attitude of yeah, you tell ’em! Who is it that they think they’re sticking it to? Or what?

Bowermaster responds, “Gosh, I can’t imagine,” and contends that my question arises from the fact that “the New Atlantis gang … ha[s] a difficult time even imagining that the positions they routinely take on issues — being manifestly and self-evidently correct — could be seriously opposed by anyone, much less in a vocal and enthusiastic way.” He adds that my question appeared to be one of “genuine puzzlement.”
In the haste of blogging in real time, I may have failed to make clear that my question wasn’t expressing “genuine puzzlement,” but was rhetorical. But now, with the leisure to spell out my concerns more fully, I’d like to expand on the point I was trying to make — and thereby to address Mr. Bowermaster’s post.
The combative rhetoric of transhumanists
I posed my question — Who is it that they think they’re sticking it to? — not just in response to the specific scene I had just described, but because of the pervasive rally-like attitude at the conference. That sense of sticking it to an unnamed opponent was part of the way many presenters spoke. Their statements — however technical, mundane, or uncontroversial — were often phrased as jabs instead of simple declarations. They spoke as if in defiance — but of adversaries who were not named, not present, and may not even have existed. (The worst example of this was in the stage appearances by Eliezer Yudkowsky, as I noted here and here. Official videos of the conference are not yet available, but the point will quickly become evident in any video of his talks you can find online.)
This combative tendency demands examination because it is so typical of transhumanist rhetoric in general. To take just one egregious example, consider this excerpt from a piece in H+ Magazine entitled “The Meaning of Life Lies in Its Suckiness.” This piece is more sarcastic and vulgar than most transhumanist writings, but its combativeness and resentment are fairly representative:

[Bill] McKibben will put on his tombstone: “I’m dead. Nyah-nyah-nyah. Have a nice eternal enhanced life, transhumanist suckers.” Ray Kurzeill [sic] will be sitting there with his nanotechnologically enhanced penis and wikipedia brain feeling like a chump. Whose life has meaning now, bitches? That’s right, the dead guy.

The combativeness of transhumanist rhetoric might be more justifiable if it emerged chiefly in arguments with critics dubious of the transhumanist project to remake humanity (or to “save the world,” or whatever the preferred rendering). But their combativeness extends far beyond direct responses to their critics. It is rather a fundamental aspect of their stance toward the world.
Take, for instance, the discussion I was blogging about in the first place. A member of the audience asked whether the FDA should revisit its definition of health; the speaker’s rally-like attitude (and the audience’s corresponding response) could not have been directed at anybody in particular, for the FDA has nothing to do with what either the questioner or the speaker was talking about. Both the question and the answer were detached from reality, but the speaker acted as if the FDA were really shafting the American people, and he nursed the audience’s sense of grievance at their perceived loss.
The fault, dear Brutus…
Against whom, then, is their grievance directed? Or — as I suggested in my initial post — against what is it directed? The ultimate target of the unhappy conferencegoers’ ire was not the FDA. Nor does the H+ Magazine author I quoted above have much of a case against Bill McKibben. Rather, the grievance of the transhumanists is against human nature and all of its limitations. As my co-blogger Charles T. Rubin wrote of prominent transhumanists Hans Moravec and Ray Kurzweil, they “share a deep resentment of the human body: both the ills of fragile and failing flesh, and the limitations inherent to bodily life, including the inability to fulfill our own bodily desires.”
Despite tremendous advances in our health, longevity, and prosperity, man’s given nature keeps us in bondage — and the sense of urgency in the effort to slip loose those bonds paradoxically grows as we comprehend ever greater means of doing so.
Transhumanism’s combative stance derives from this sense of constant urgency — what Yuval Levin has dubbed “the crisis of everyday life.” The main target of the combativeness, then, is man’s limited nature; the transhumanists are warring against what they themselves are. Any anger directed at critics like Bill McKibben or the FDA is rather incidental.
The transhumanists’ stance might become clearer — or at least more honest — if they acknowledged that their resentment is directed more at their own human nature than at any particular humans. But to do so might imperil their position. For they might realize — if the history of which they are exemplary is any guide — that as their power grows, their resentment at the remaining limits will only deepen, and will increase their hunger for ever more power to chase those limits away.
If their power did allow them to vanquish the last of their limitations — if “man’s estate,” to borrow Francis Bacon’s phrase, were fully relieved — to what purposes would these posthumans then turn their power? What purpose would they find in their existence when the central reason they have now for living was at last fulfilled? Through what struggle would they flourish when their struggle against struggle itself was complete?

The Revolution Will Be PowerPointed

A panel discussion during the 2009 Singularity Summit in New York City.

The 2009 Singularity Summit wrapped up in New York City yesterday. The whole thing was something of a blur — two days of back-to-back talks, milling about with conferencegoers, and frenzied posting.

As you can see here, the attendees were predominantly male, and almost exclusively nerds of various flavors: long-haired, disheveled programmers; smoothly dressed, New-Age types looking for transcendence but not immune to the need to constantly check their iPhones; jargon-slinging, bespectacled academics; and gel-haired, polo-shirt-wearing, young social entrepreneurs. (Pop Sci shows a similar sampling.) Basically, the conference felt like being back in my college computer science department.
Everyone I met was quite inquisitive and friendly. There was an excitement in the air, a sense of being in the presence of great people working together towards a great cause (about which, more in a moment).
The content of the conference itself, however, was rather underwhelming. Most of the talks were highly technical but too short and delivered too rapidly to convey much substance in a way that would last. Only a few of the speakers gave presentations both insightful and clear enough to be truly informative or persuasive. (For my money, the best talks were those by David Chalmers and Peter Thiel, and the discussion with Stephen Wolfram.)
The conference also lacked an overarching message. Certainly a diversity of opinion and interests in such a conference is inevitable, even good. But the problem was that the presenters treated it like a scientific or technical conference (indeed, some of the presentations seemed to have been written for technical conferences, with only a coda tacked on to justify their relevance to this one) when in fact the Singularity, transhumanism, and the related subjects that attracted the audience this weekend are not, strictly speaking, scientific subjects.
To put it another way, while its means may be technical and scientific, the ends of Singularitarianism, as disparate and even incoherent as they may be, are rather like those of a spiritual movement. I kept waiting for the presenters to make grand statements about the moral imperatives of the movement and about the awe-inspiring new things we will do and be. There were a few, but those larger ideas were mostly taken for granted. I thought, in particular, that we might get some of these first principles from Anna Salamon, who gave the opening and closing talks, or from Ray Kurzweil, who presides as the de facto spiritual leader (and head coach) of the movement.
But for a movement that aspires to such revolutionary things, the summit was in fact rather conventional: dry talks, PowerPoint slides, and lectures in rapid succession. (I should note that the organizers kept the whole thing impeccably on schedule, except for allowing Kurzweil to go well over his time at the end of the first day.) It seemed that many of the attendees were most excited during the breaks between presentations. They huddled around the superstar presenters. I heard more than a few conferencegoers ask each other, “Have you seen Ray? Where is he? I want to talk to him.” Many were excited just to be in the presence of fellow-travelers (since, as some of them told me, many of the attendees only knew of the Singularitarian movement through the Internet).
And this was where the organizers oddly seemed both to understand why people were really there and to fail to structure the event to reflect that. The proceedings rang of celebrity worship. The M.C. revved up the excitement before the big-name speakers. The final panel discussion was, unfortunately, about nothing substantive, just a sort of “behind the scenes with the boys of the Singularity,” an interview focusing on personalities instead of ideas. And Kurzweil didn’t deign to give a coherent presentation. On the first day, he literally came up on stage with a pad of paper and offered his ad hoc thoughts and pronouncements on the previous speakers. On the second day, he gave what one Twitterer described as his “stump speech” — a laundry list of responses to critics, mostly taken verbatim from his book on the Singularity. His talks seemed to serve mainly to assure the crowd that the coach was still in control of the game and there was no need to worry (as another blogger has suggested).
But my impression was that there wasn’t nearly enough discussion and interaction to really suit most conferencegoers (myself included). And I heard attendees again and again expressing their wish to interact more with the presenters, and many expressing frustration at not having been able to ask questions.
I don’t really fault the organizers for this. Putting together a large conference is a demanding task, and this one was impressively smooth in its operation. Perhaps on some level it made sense to stick to the tried-and-true format of a professional, academic, or scientific conference. But that’s the problem: this is not a business, it is not an academic discipline, and it is not a science. It is a movement, one with goals it seeks to accomplish. I have the sense that the attendees were interested less in simply hearing facts — many of which are better conveyed in print and online anyway — than in discussing what it is they are all engaged in. Perhaps in the future, these conferences might be run more like seminars instead of lectures, or might find other ways of incorporating give-and-take conversations.
Many of the conferencegoers want humanity to become more virtual, with our frail bodies supplanted and our minds uploaded. To apply that logic, perhaps future conferences will move wholly online to avoid the logistical constraints of meeting in the physical world. But for this year, at least, the attendees seemed largely to take satisfaction in physicality: in encountering their leaders, in being in the presence of others who agree with them, and just in chatting over coffee with the fellow members of their movement.

Scenes from the Singularity Summit

Here are a few images from this past weekend’s Singularity Summit, now that it has drawn to a close.

Here’s Aubrey de Grey autographing a book for a conferencegoer:

And de Grey wasn’t the only fellow sporting such ample facial hair; here’s an attendee:

These beard photos might make you wonder about the demographics of the conference. Judge for yourself:

Not that there were no women’s faces to be seen. The presentation by Juergen Schmidhuber gave us a big one:

There was only one woman presenter at the conference, Anna Salamon — although she got to speak both first and last. Here she is chatting with a conferencegoer.

And next, a shot of a woman attendee: Ilana Pregen, asking the question that Brad Templeton so brusquely dismissed.

Stay tuned for more conference wrap-up today.

“How much it matters to know what matters”

Anna Salamon, the first speaker at the 2009 Singularity Summit, is also the last: “How much it matters to know what matters: A back-of-the-envelope calculation.” (Abstract and bio here.)

Salamon starts off by highlighting the apparently stupid reasons people do what they do — habit, culture, etc. — and says they could achieve their goals much more efficiently with a little strategic thinking. Humans tend to act from roles, she says, not goals. For example, people spend four years in medical school because they find the role of doctor important, rather than doing basic comparative research on salaries. Apparently roles cannot be goals. (Hmm, I wonder why Salamon does things like speak at conferences? Purely because it was the course of action that maximized her finances?)
[Anna Salamon at the 2009 Singularity Summit]
Salamon continues to lament the way people don’t think strategically when making decisions. She’s extolling the virtues of writing down estimates and using those to make goals. This is a strangely long wind-up, going on and on about why making back-of-the-envelope calculations is good. (Does Salamon think she invented utilitarianism?)
Okay, now she’s finally going for it: Her back-of-the-envelope calculations of the aggregate value and risk from A.I research. The risk from A.I., she says, is 7 percent. I guess she means a 7 percent chance of the world ending. The number of lives affected: about 7 billion. She breezes through more calculations, and manages to come up with some dollar amount of increased value through life. (Such estimates always have a touch of the absurd about them, no matter the context; here they seem especially silly.)
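For concreteness, here is a minimal sketch of the shape of her calculation. The 7 percent risk and roughly 7 billion lives are from the talk; the dollar value per life is a placeholder of my own, since I didn’t catch the figure she used:

```python
# Back-of-the-envelope expected-loss calculation in the style of the talk.
# Only P_RISK and LIVES come from Salamon's slides; VALUE_PER_LIFE is my
# own illustrative guess, not her number.
P_RISK = 0.07           # stated chance of AI-driven catastrophe
LIVES = 7e9             # roughly everyone alive
VALUE_PER_LIFE = 2e6    # dollars; purely a placeholder assumption

expected_loss = P_RISK * LIVES * VALUE_PER_LIFE
print(f"Expected loss: ${expected_loss:,.0f}")  # about $980 trillion here
```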
She breezes through the rest of the talk, too. Her conclusion is that we should think “damn hard” about the benefits and risks of the Singularity. And we should fund A.I. research and the Singularity Institute. A very underwhelming end to the summit, and quite an anticlimax after the previous panel.
And that’s it for the conference. I’ll have a final wrap-up later tonight (or possibly tomorrow), and will be going back and inserting a few more pictures into some of the earlier posts. Check back soon, and stay tuned, as the coverage we’ve been doing marks just the beginning of our discussions here on “Futurisms.”

On persuasion and saving the world

The penultimate item on the agenda of the 2009 Singularity Summit is a panel discussion, on no particular topic, involving Aubrey de Grey, Eliezer Yudkowsky, and Peter Thiel. The moderator is Michael Vassar of the Singularity Institute. And it is in that order, from left to right, that the four men appear in this picture:
From left: Aubrey de Grey, Eliezer Yudkowsky, Peter Thiel, and Michael Vassar.
Vassar starts with a question about when each of the panelists realized they wanted to change the world. Thiel says he knew when he was young he wanted to be an entrepreneur, and once he found out about the Singularity, it was just natural to get on board with it and “save the world.”
Yudkowsky says, “Once I realized there was a problem, it never occurred to me not to save the world,” with a shrug and arms in the air. (Very scattered laughter and applause. The audience seems uncomfortable with him. I am, anyway. As I noted earlier, everything the guy says seems to drip with condescension, even in this room filled with people overwhelmingly on his side. He keeps having to invent straw men to put down as he talks.)
De Grey says he knows exactly when he realized he wanted to make a difference. It was when he was young and wanted to be a great pianist, but then realized that he’d spend all this time practicing — and then what? He’d just be another pianist, and there are tons of those. So he decided he wanted to change the world. Then later he discovered no one was looking at stopping aging, and he was horrified, so he decided to do that.
The moderator asks what each man would be working on if not the Singularity. De Grey says other existential risks besides aging. Yudkowsky says studying human rationality. (If only he would. A Twitterer seems to share my sentiments.) But he says it’s not about doing what you’re good at or want to do, but what you need to do. Thiel would be studying competition. Competition can be extremely good, he says, but can go way too far, and crush people. He says it was better for him as a youth that computers became better than humans at chess, because he realized he shouldn’t be stressing himself so much over being a super-achieving chess player.
They get into talking about achievement a bit more later, and Thiel says he thinks it’s really important for people to have ways to persevere that aren’t necessarily about public success.
De Grey highlights the importance of “embarrassing people” to make them realize how wrong they are. We’re all aware of some of the things people say in defense of aging, he says. Thiel says his own personal bias is that that’s not a good approach, because there are so many different ways of looking at things, people have so many different cultural and value systems, and there may be deep-seated reasons they believe what they do. He says he likes to try hard to explain his points to people.
The rest of the discussion is not especially noteworthy. A bit of celebrity worship and ego stroking. Peter Thiel easily takes the cake for charm on this stage.

Rationalism, risk, and the purpose of politics

[Continuing coverage of the 2009 Singularity Summit in New York City.]
[Eliezer Yudkowsky at the 2009 Singularity Summit in New York City]
Eliezer Yudkowsky, a founder of the Singularity Institute (organizer of this conference), is up next with his talk, “Cognitive Biases and Giant Risks.” (Abstract and bio.)
He starts off by talking about how stupid people are. Or, more specifically, how irrational they are. Yudkowsky runs through lots of common logical fallacies. He highlights the “Conjunction Fallacy,” where people find a story more plausible when it includes more details, even though a story actually becomes less probable as it accumulates details. I find this to be a ridiculous example. Plausible does not mean probable; people are just more willing to believe something happened when they are told that there are reasons it happened, because they understand that effects have causes. That’s very rational. (The Wikipedia entry, linked above, offers an explanation quite different from Yudkowsky’s, and one that makes a lot more sense.)
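To be fair, the probabilistic point underneath the fallacy is sound even if his framing isn’t: for any two events, the conjunction can never be more probable than either event alone. A quick simulation, with base rates I made up purely for illustration:

```python
# Demonstrates P(A and B) <= P(A) by Monte Carlo over a toy population.
# The base rates below are invented for illustration only.
import random

random.seed(0)
N = 100_000
count_a = 0
count_a_and_b = 0
for _ in range(N):
    bank_teller = random.random() < 0.05  # made-up base rate for A
    activist = random.random() < 0.30     # made-up base rate for B
    count_a += bank_teller
    count_a_and_b += bank_teller and activist

print(f"P(teller)              ~ {count_a / N:.4f}")        # ~0.05
print(f"P(teller and activist) ~ {count_a_and_b / N:.4f}")  # necessarily smaller
```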
Yudkowsky is running through more and more of these examples. (Putting aside the content of his talk for a moment, he comes across as unnecessarily condescending. Something I’ve seen a bit of here — the “yeah, take that!” attitude — but he’s got it much more than anyone else.)
He’s bringing it back now to risk analysis. People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.
This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).
[One of Yudkowsky’s slides.]
Yudkowsky concludes by asking, why are we as a nation spending millions on football when we’re spending so little on all different sorts of existential threats? We are, he concludes, crazy.
That seems at first to be an important point: We don’t plan on a large scale nearly as well or as rationally as we might. But just off the top of my head, Yudkowsky’s approach raises three problems. First, we do not all agree on what the existential threats are; deciding that is what politics and persuasion are for. There is no set of problems that everyone thinks we should spend money on, and scientists and technocrats cannot answer these questions for us, since they inherently involve values beyond the reach of mere rationality. Second, Yudkowsky’s depiction of humans, and of human society, as irrational and stupid is far too simplistic. And third, what’s so wrong with spending money on football? If we spent all our money on forestalling existential threats, we would lose sight of life itself, and of what we live for.
Thus ends his talk. The moderator notes that video of all the talks will be available online after the conference; we’ll post links when they’re up.

Methuselah speaks

[Continuing coverage of the 2009 Singularity Summit in New York City.]
[Aubrey de Grey]
The conference’s last batch of talks is now underway, leading off with one of the Singularity movement’s most colorful characters, Aubrey de Grey, whose talk is titled “The Singularity and the Methuselarity: Similarities and Differences.” (Abstract and bio.) De Grey has a stuffy British accent, long hair, and a beard down to his mid-chest. (I imagine this is meant to point to longevity in some way or another, though how precisely is difficult to discern. Is he showcasing how long he’s been alive? Or maybe trying to get us thinking about longevity by looking older than his forty-six years?)

De Grey is running through the standard gamut of life-extension medical technology. Gerontology, he says, is becoming an increasingly difficult and pointless pursuit as it attempts to treat the inevitable damage of old age. But if we reverse the damage, he says, we might be able to postpone aging at a rate approaching the passage of time itself.
He goes through more math than is really necessary for us to get the concept that we can increase the rate at which we’re slowing aging. He mentions the concept of the Longevity Escape Velocity (LEV), which is the rate at which rejuvenation therapies must improve in order to stay one step ahead of aging. De Grey offers a somewhat awkward neologism: the point at which we reach LEV, he says, is the “Methuselarity.” This is when we’re not quite immortal but we’re battling aging fast enough to be effectively immortal. (I have in mind an image of a cartoon character sprinting across a river and laying down the planks of a bridge in front of him as he goes.)
De Grey claims that we double our therapy rate every forty-two years, and that, if kept up, this is more than good enough to reach LEV. Also, he notes, LEV decreases as our rejuvenation powers get better and better. He’s building a case here for maintenance technologies, like the massive cocktails of supplements and drugs that Kurzweil takes in hopes of slowing his aging.*
There are some interesting implications of his calculations. One of them, he notes, is that once we increase average longevity past the current maximum (about 120 years), the hardest part is over (since LEV will steadily decrease). This means that, he says, the first thousand-year-old will probably be not much more than twenty years older than the first 150-year-old. And the first million-year-old will probably only be a couple years older than the first thousand-year-old.
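The arithmetic behind those claims is easier to see in a toy model. The sketch below is my own illustration of the escape-velocity intuition, not de Grey’s actual calculation: if each year of therapeutic progress buys back more than a year of remaining life expectancy, lifespan in the model is unbounded; if it buys back less, you merely die later.

```python
# Toy model of "longevity escape velocity" (LEV). My own illustration,
# not de Grey's math: therapy_gain_per_year is the years of remaining
# life expectancy restored per calendar year of therapeutic progress.
def years_lived_from(age, therapy_gain_per_year, base_expectancy=80, cap=1000):
    remaining = base_expectancy - age
    years = 0
    while remaining > 0 and years < cap:
        years += 1
        remaining -= 1                       # one year of aging...
        remaining += therapy_gain_per_year   # ...partially or fully repaired
    return years

print(years_lived_from(40, 0.5))  # below LEV: 80 more years, then death
print(years_lived_from(40, 1.1))  # at/above LEV: hits the 1000-year cap
```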
De Grey concludes by pointing out a tension between his project and the goals of some of the others in the room: He claims that after the Methuselarity, there will be no need to be uploaded. “Squishy stuff will be fine.” He notes, however, that this may significantly increase our risk aversion.
A questioner asks about his personal stake in the Singularity. De Grey says he’s not selfish because all of this travelling takes a toll on his health and longevity, and his work benefits others much more than himself (presumably, in an aggregate utilitarian sense that their combined increase in longevity outweighs his).
De Grey really breezed through that talk. The audience and the Twittersphere seemed to love it, though.
[One of de Grey’s slides.]
[* As originally written, this post stated that Aubrey de Grey is on a diet-supplement regime similar to the one Ray Kurzweil is on. Upon examination, we have no reason to think that is true; in fact, this interview seems to suggest that it is not. We have amended the text and apologize for the confusion. -ed.]

Investing in the Singularity?

[Continuing coverage of the 2009 Singularity Summit in New York City.]
The last session before the final break of the conference is a panel on venture capital, moderated by CNBC’s Robert Pisani, with panelists Peter Thiel, David Rose, and Mark Gorenberg.
Thiel mentions that many companies take a very long time to become profitable. He says that the first five or six investors in FedEx lost money, but it was the seventh who made a lot. So, he says, he likes to invest in companies that expect to lose money for a long time. They tend to be undervalued.

[From left: Peter Thiel, Mark Gorenberg, David S. Rose, and moderator Bob Pisani]
The moderator asks how venture capitalists deal with the Singularity in making their decisions. One of the panelists responds that they’re all bullish about technology, echoing Thiel: if technology does not advance, they’re all screwed. But it sounds like he’s effectively saying that they keep the Singularity in mind without letting it really affect their investing. He doesn’t look farther out than ten years. Thiel says he does think that there are some impacts — among other things, it’s a good time to invest in biotech. (“Yes!” says the woman next to me, in a duh voice.)
A questioner asks about why none of the panelists have mentioned investing in A.I. The guy has a very annoyed tone, as he did when he asked a question in Thiel’s talk. Thiel doesn’t seem enthused:
Peter Thiel
But another panelist says yes, good, let’s invest more in high-tech companies! Rapturous applause.

Peter Thiel on the Singularity and economic growth

[Continuing coverage of the 2009 Singularity Summit in New York City.]
Peter Thiel is a billionaire, known for cofounding PayPal and for his early involvement in Facebook. He also may be the largest benefactor of the Singularity Summit and longevity-related research. His talk today is on “Macroeconomics and Singularity.” (Abstract and bio.)
Thiel begins by outlining common concerns about the Singularity, and then asks the members of this friendly audience to raise their hands to indicate which they are worried about:

1. Robots kill humans (Skynet scenario). Maybe 10% raise their hands.

2. Runaway biotech scenario. 30% raise hands.

3. The “gray goo scenario.” 5% raise hands.

4. War in the Middle East, augmented by new technology. 20% raise hands.

5. Totalitarian state using technology to oppress people. 15% raise hands.

6. Global warming. 10% raise hands. (Interesting divergence again between transhumanism and environmentalism.)

7. Singularity takes too long to happen. 30% raise hands — and there is much laughter and applause.

Thiel says that, although it is rarely talked about, perhaps the most dangerous scenario is that the Singularity takes too long to happen. He notes that several decades ago, people expected American real wages to skyrocket and the amount of time working to decrease. Americans were supposed to be rich and bored. (Indeed, Thiel doesn’t mention it, but the very first issue of The Public Interest, back in 1965, included essays that worried about this precise concern, under the heading “The Great Automation Question.”) But it didn’t happen — real wages have stayed the same since 1973 and Americans work many more hours per year than they used to.
Thiel says we should understand the recent economic problems not as a housing crisis or credit crisis but rather as a technology crisis. All forms of credit involve claims on the future. Credit works, he says, if you have a background of growth — if everything grows every year, you won’t have a credit crisis. But a credit crisis means that claims for the future can’t be matched.
He says that if we want to keep society stable, we have to keep growing, or else we can’t support all of the projected growth that we’ve currently leveraged. Global stability, he says, depends on a “Good Singularity.”
In essence, we have to keep growing because we’ve already bet on the promise that we’ll grow. (I tried this argument in a poker game once for why a pair of threes should trump a flush — I already allocated my winnings for this game to pay next month’s rent! — but it didn’t take.)
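Still, the kernel of his argument can be put in simple arithmetic. A minimal sketch, with interest and growth rates chosen by me for illustration (Thiel gave no numbers): credit extended today is serviceable tomorrow only if income grows fast enough to keep up with the claims compounding against it.

```python
# Toy solvency check: do claims on the future get matched by growth?
# All rates here are illustrative assumptions, not Thiel's figures.
def still_solvent(years, debt, income, interest=0.05, growth=0.03):
    for _ in range(years):
        debt *= 1 + interest    # claims on the future compound...
        income *= 1 + growth    # ...while income grows (or doesn't)
    return income >= debt * interest  # can income at least service the debt?

print(still_solvent(30, debt=100, income=10, growth=0.06))  # growth wins: True
print(still_solvent(30, debt=100, income=10, growth=0.00))  # stagnation: False
```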
Thiel’s talk is over halfway into his forty-minute slot. He is an engaging speaker with a fascinating thesis. The questioners are lining up quickly — far more lined up than for any other speaker so far, including Kurzweil.
In response to the first question about the current recession, Thiel predicts there will be no more bubbles in the next twenty years; either it will boom continuously or stay bust, but people are too aware now, and the cycle pattern has been broken. The next questioner asks about regulation and government involvement — should all this innovation happen in the private sector, or should the government fund it? Thiel says that the government isn’t anywhere near focused enough on science and technology right now, and he doesn’t think it has any role to play in innovation.
Peter Thiel
Another questioner asks about Francis Fukuyama’s book, Our Posthuman Future, in which he argues that once we create superhumans, there will be a superhuman/human divide. (Fukuyama has also called transhumanism one of the greatest threats to the welfare of humanity.) Thiel says it’s implausible — technology filters down, just like cell phones. He says that it’s a non-argument and that Fukuyama is hysterical, to rapturous applause from the audience.
After standing in line, holding my laptop with one hand and blogging with the other, I take the stand and ask Thiel about the limits of his projection: if we’re constantly leveraging against the future, what happens when growth reaches its limits? Will we hit some sort of catastrophic collapse? He says that we may reach some point in the future where we have, basically, a repeat of what we had over the last two years, when we can’t meet growth and we have another collapse. So are there no limits to growth, I ask? He says if we hit other road bumps we’ll have to just deal with them then. I try again, but the audience becomes restless and Thiel essentially repeats his point, so I go sit down.
What I should have asked was: Why is it so crucial to speed up innovation if catastrophic collapse is seemingly inevitable, whether it happens now or later?