Tav’s Mistake

Neal Stephenson’s Seveneves is a typical Neal Stephenson novel: expansive and nearly constantly geeking out over something. If a character in one of Stephenson’s SF novels is about to get into a spacesuit, you know that’ll take five pages because Stephenson will want to tell you about every single element of the suit’s construction. If a spacecraft needs to rendezvous with a comet, and must get from one orbital plane to another, Stephenson will need to explain every decision and the math underlying it, even if that takes fifty pages — or more. If you like that kind of thing, Seveneves will be the kind of thing you like.

I don’t want to write a review of the novel here, beyond what I’ve just said; instead, I want to call attention to one passage. Setting some of the context for it is going to take a moment, though, so bear with me. (If you want more details, here’s a good review.)

The novel begins with this sentence: “The moon blew up without warning and for no apparent reason.” After the moon breaks into fragments, and the fragments start bumping into each other and breaking into ever smaller fragments, scientists on earth figure out that at a certain point those fragments will become a vast cloud (the White Sky) and then, a day or two later, will fall in flames to earth — so many, and with such devastating force, that the whole earth will become uninhabitable: all living things will die. This event gets named the Hard Rain, and it will continue for millennia. Humanity has only two years to prepare for this event: this involves sending a few people from all the world’s nations up to the International Space Station, which is frantically being expanded to house them. Also sent up is a kind of library of genetic material, in the hope that the diversity of the human race can be replicated at some point in the distant future.

The residents of the ISS become the reality-TV stars for those on earth doomed to die: every Facebook post and tweet scrutinized, every conversation (even the most private) recorded and played back endlessly. Only a handful of these people survive, and as the Hard Rain continues on a devastated earth, their descendants very slowly rebuild civilization — focusing all of their intellectual resources on the vast problems of engineering with which they’re faced as a consequence of the deeply unnatural condition of living in space. This means that, thousands of years after the Hard Rain begins, as they are living in an environment of astonishing technological complexity, they don’t have much in the way of social media.

In the decades before Zero [the day the moon broke apart], the Old Earthers had focused their intelligence on the small and the soft, not the big and the hard, and built a civilization that was puny and crumbling where physical infrastructure was concerned, but astonishingly sophisticated when it came to networked communications and software. The density with which they’d been able to pack transistors onto chips still had not been matched by any fabrication plant now in existence. Their devices could hold more data than anything you could buy today. Their ability to communicate through all sorts of wireless schemes was only now being matched — and that only in densely populated, affluent places like the Great Chain.

But in the intervening centuries, those early textual and visual and aural records of the survivors had been recovered and turned into The Epic — the space-dwelling humans’ equivalent of the Mahabharata, a kind of constant background to the culture, something known to everyone. And when the expanding human culture divided into two distinct groups, the Red and the Blue, the second of those groups became especially attentive to one of those pioneers, a journalist named Tavistock Prowse. “Blue, for its part, had made a conscious decision not to repeat what was known as Tav’s Mistake.”

Fair or not, Tavistock Prowse would forever be saddled with blame for having allowed his use of high-frequency social media tools to get the better of his higher faculties. The actions that he had taken at the beginning of the White Sky, when he had fired off a scathing blog post about the loss of the Human Genetic Archive, and his highly critical and alarmist coverage of the Ymir expedition, had been analyzed to death by subsequent historians. Tav had not realized, or perhaps hadn’t considered the implications of the fact, that while writing those blog posts he was being watched and recorded from three different camera angles. This had later made it possible for historians to graph his blink rate, track the wanderings of his eyes around the screen of his laptop, look over his shoulder at the windows that had been open on his screen while he was blogging, and draw up pie charts showing how he had divided his time between playing games, texting friends, browsing Spacebook, watching pornography, eating, drinking, and actually writing his blog. The statistics tended not to paint a very flattering picture. The fact that the blog posts in question had (according to further such analyses) played a seminal role in the Break, and the departure of the Swarm, only focused more obloquy upon the poor man.

But — and this is key to Stephenson’s shrewd point — Tav is a pretty average guy, in the context of the social-media world all of us inhabit:

Anyone who bothered to learn the history of the developed world in the years just before Zero understood perfectly well that Tavistock Prowse had been squarely in the middle of the normal range, as far as his social media habits and attention span had been concerned. But nevertheless, Blues called it Tav’s Mistake. They didn’t want to make it again. Any efforts made by modern consumer-goods manufacturers to produce the kinds of devices and apps that had disordered the brain of Tav were met with the same instinctive pushback as Victorian clergy might have directed against the inventor of a masturbation machine.

So the priorities of space-dwelling humanity are established first by sheer necessity: when you’re trying to create and maintain the technologies necessary to keep people alive in space, there’s no time for working on social apps. But it’s in light of that experience that the Spacers grow incredulous at a society that lets its infrastructure deteriorate and its medical research go underfunded in order to devote its resources of energy, attention, technological innovation, and money to Snapchat, YikYak, and Tinder.

Stephenson has been talking about this for a while now. He calls it “Innovation Starvation”:

My life span encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. In the summer of 2011, at the age of fifty-one — not even old — I watched on a flatscreen as the last space shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where’s my donut-shaped space station? Where’s my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled.

Still, I worry that our inability to match the achievements of the 1960s space program might be symptomatic of a general failure of our society to get big things done. My parents and grandparents witnessed the creation of the automobile, the airplane, nuclear energy, and the computer, to name only a few. Scientists and engineers who came of age during the first half of the twentieth century could look forward to building things that would solve age-old problems, transform the landscape, build the economy, and provide jobs for the burgeoning middle class that was the basis for our stable democracy.

Now? Not so much.

I think Stephenson is talking about something very, very important here. And I want to suggest that the decision to focus on “the small and the soft” instead of “the big and the hard” creates a self-reinforcing momentum. So I’ll end here by quoting something I wrote about this a few months ago:

Self-soothing by Device. I suspect that few will think that addiction to distractive devices could even possibly be related to a cultural lack of ambition, but I genuinely think it’s significant. Truly difficult scientific and technological challenges are almost always surmounted by obsessive people — people who are grabbed by a question that won’t let them go. Such an experience is not comfortable, not pleasant; but it is essential to the perseverance without which no Big Question is ever answered. To judge by the autobiographical accounts of scientific and technological geniuses, there is a real sense in which those Questions force themselves on the people who stand a chance of answering them. But if it is always trivially easy to set the question aside — thanks to a device that you carry with you everywhere you go — can the Question make itself sufficiently present to you that answering it becomes something essential to your well-being? I doubt it.

How to Solve the Future

Google has set up a new program called Solve for X. In the clear and concise words of the site, Solve for X

is a place to hear and discuss radical technology ideas for solving global problems. Radical in the sense that the solutions could help billions of people. Radical in the sense that the audaciousness of the proposals makes them sound like science fiction. And radical in the sense that there is some real technology breakthrough on the horizon to give us all hope that these ideas could really be brought to life.

The site has already posted a number of videos that are forays into the “moonshot” thinking the program hopes to encourage, including one typically intelligent and provocative talk by author Neal Stephenson.

Those of us who follow the world of transhumanism may be a bit surprised to find that anyone thinks there is a lack of audacious and radical thinking about the human future in the world today. Stephenson is a bit more cautious in his talk, arguing instead that at the moment there seems to be a lack of effort to do big things, unfavorably contrasting the period from around 1968 to the present with the extraordinary transformations of human thinking and abilities that took place between 1900 (the dawn of aviation) and the Moon landing.

(It’s not quite clear why Stephenson picks 1968 as the dividing year, instead of the year of the first Moon landing (1969), or the last (1972). Perhaps it makes sense if you consider that the point at which it was clear we were going to beat the Russians to the Moon was the point at which enthusiasm for efforts beyond that largely evaporated among the people who held the purse strings — meaning American lawmakers as well as the public.)

At any rate, Stephenson attributes at least some of that lack of effort to a paucity of imagination. He thus calls for deliberate efforts by science fiction writers to cooperate with technically minded people in writing what could be inspiring visions of the future for the rising generation.

There is a good deal that might be said about his argument, and perhaps I will write more about it in later posts. For the moment, I would just like to note that, even accepting his premise about the paucity of big thinking and big effort today, Stephenson’s prescription for remedying it is odd, considering his own accomplishments.
It’s not as if the nanotechnology world of his brilliant novel The Diamond Age: Or, a Young Lady’s Illustrated Primer is an uninspiring dead letter. The same of course goes for many of the futuristic promises of classic science fiction, but in Diamond Age, Stephenson presented his science fiction world with an unusual moral realism that one might have thought would make it all the more inspiring to all but the most simplistically inclined. Perhaps it is modesty that prevented him from putting forward his own existing work as a model.

Yet by ignoring what he achieved in Diamond Age, Stephenson also overlooks another way of looking at the problem he sets up in the achievement gap between 1900–1968 and 1968–now. For the book is premised in part on the belief that history exhibits pendulum swings. Should we really be surprised if a time of revolution is followed by a period of reaction and/or consolidation?

Believers in the Singularity would, of course, be surprised if this were the case. But they are attempting to suggest the existence of a technological determinism that Stephenson wisely avoided in Diamond Age. He was swimming against the tide, though; it is striking just how much of the science fiction of the first two-thirds of the twentieth century was driven by a sense that the future would be molded by some kind of necessity, often catastrophic. Overpopulation, for example, would force huge urban conglomerations on us, or would be the driver for space colonization. Or the increasing violence of modern warfare would be the occasion for rebuilding the world physically or politically or both.

Perhaps we are living in a time of (relative) pause because the realization is dawning that we are not in the grip of historical forces beyond our control. It would take some time to absorb that sobering possibility.
It is not too early to attend to the lesson drawn so well in Diamond Age: that at some point the question of what should be done becomes more important than the question of what can be done.

learning Greek

Sure, people think it’s a good idea to learn Greek. But of course they would when you put the question that way. It’s a good idea to learn all sorts of things. The problems come when you try to determine relative goods. Is learning ancient Greek more valuable than learning calculus?

For the last few years I’ve made a conscious decision to work on retrieving some of my lost math and science knowledge, primarily in order to facilitate my understanding and use of computers. But this has been a fairly rough road, in part because of many years of exercising my mind in other ways, but also because the computer science world is generally not friendly to beginners/newbies/noobs. Even in some of the polite and friendly responses to my comment on this Snarkmarket post you can sense the attitude: “I just don’t have the time or energy to be introductory.”

Maybe I’ve just been unlucky, but with a few notable exceptions, that’s how it has gone for me as I’ve tried to learn more about computing: variants on “RTFM.” I think this response to learners happens when people think that their own field is the wave of the future, and like the idea that they’re among the few who realize it — as Neal Stephenson once put it, they’re the relatively few Morlocks running the world for the many Eloi — and don’t especially need any more company in the engine room. Whereas classicists like Mary Beard are advocates for their fields because they are pained by their marginality. Maybe I should be pursuing Greek instead of Ruby on Rails. . . .

But wait! Just as I wrote and queued up this post, I came across an interesting new endeavor: Digital Humanities Questions and Answers. Now this might restore one’s hopefulness!

normal science vs. chaos

When the wonderful literary critic Tony Tanner died twelve years ago, Colin McCabe wrote an obituary containing these lines:

The degree he undertook at Cambridge was largely the product of a union of I.A. Richards’s methods of practical criticism and F.R. Leavis’s historical moralism. Both for very different reasons situated English literature as the central discipline for a modern university: a discipline focused on close reading of the canon – the body of English literature from Chaucer to Eliot which recorded Arnold’s “best that had been thought and said”.

To read English at Cambridge in the late Fifties was to have the last opportunity to read the whole canon of English literature. The texts had been agreed for 30 years, the secondary literature was still modest and while history, sociology and anthropology could make contributions to the “central discipline of the modern university”, the questions posed by both theory and popular culture had yet to be articulated.

It’s hard not to hear this as a nostalgic narrative, though the nostalgia isn’t explicit. McCabe seems to sigh as he remembers the days when academic literary study knew — just knew — what it was doing.

Contrast this to the view expressed by Valentine Coverly, the devoted mathematician in Tom Stoppard’s Arcadia, as he tries to explain how chaos theory and fractal geometry are reshaping our understanding of the world:

The future is disorder. A door like this has cracked open five or six times since we got up on our hind legs. It’s the best possible time to be alive, when almost everything you thought you knew is wrong.

(Val’s preference for a time when “everything you thought you knew is wrong” may be shared even by people who don’t think that we’re living in such a time. Neal Stephenson’s fascination with the late seventeenth and early eighteenth centuries, in his Baroque Cycle, is clearly a function of his sense that that was really a time when existing intellectual worlds had been turned upside down and nobody knew what was coming next.)

An interesting characterological divide: between those who prefer the periods of clear intellectual orientation — more or less what Thomas Kuhn called “normal science” — and those who like living in the midst of chaotic upheaval.

The Mongoliad

Now here’s a really interesting idea — from Neal Stephenson and others — for a book . . . or rather an app . . . or a service . . . sort of a wiki or a game . . . oh heck, just read it:

The Mongoliad stands out as a possible way forward for post-print publishing. PULP makes this book into something that’s truly the product of our collective imaginations. When you’re reading a chapter of the book, you always have the option to pull up an interactive discussion window and leave a note or enter a discussion about the book. You can write your own additional storyline. Or add to the pedia to explain more about the historical setting. You can also rate every aspect of the book, rating any page on a scale of one to five stars.

The Mongoliad isn’t just a story; it’s a platform for collaborative worldbuilding. The question is, how do you prevent such an endeavor from degenerating into chaos? “We have the concept of canonicity – if we like it we’ll tag it as canon,” Bornstein says. “We’ll have ways of reflecting people’s community standing in our forums. So some people will be able to help curate the canon, and we’ll be the ultimate arbiters.” The book will become a thicket of fanfic, but there will be a clear, canonical path marked through it by the creators of the story.

So how will you buy The Mongoliad? It won’t be like getting a traditional ebook, which is usually wrapped in some kind of format or digital restriction management software designed to prevent people from sharing it. Subutai is devoted to selling this book without DRM. You’ll get it in an app store, and you’ll pay what Bornstein calls “a relatively low price” for it as a six-month service, where you get new content every week. At the end of those six months you can renew for “a lower price.” Bornstein hinted that the book will eventually contain “a few games too.”

I have to say, this sounds really interesting — except I don’t like the idea of having the experience locked in to the iPhone/Pod/Pad world. That wouldn’t be the venue I’d choose anyway.

(Thanks to Adam Keiper for the link.)

all aboard the Axiom!

It’s been interesting — and somewhat disconcerting — to see the techno-ideology John Gruber has been selling since Apple announced the iPad. See this post, or this one, or the beginning of this one. Basically, Gruber is endorsing the Eloi-Morlock theory of computing experience according to which . . . well, why try to improve on, or even compete with, perfection? Let’s go to the source here, Neal Stephenson’s still-amazingly-brilliant essay from a decade ago, In the Beginning Was the Command Line. Take it away, Neal:

Contemporary culture is a two-tiered system, like the Morlocks and the Eloi in H.G. Wells’s The Time Machine, except that it’s been turned upside down. In The Time Machine the Eloi were an effete upper class, supported by lots of subterranean Morlocks who kept the technological wheels turning. But in our world it’s the other way round. The Morlocks are in the minority, and they are running the show, because they understand how everything works. The much more numerous Eloi learn everything they know from being steeped from birth in electronic media directed and controlled by book-reading Morlocks. So many ignorant people could be dangerous if they got pointed in the wrong direction, and so we’ve evolved a popular culture that is (a) almost unbelievably infectious and (b) neuters every person who gets infected by it, by rendering them unwilling to make judgments and incapable of taking stands.

Morlocks, who have the energy and intelligence to comprehend details, go out and master complex subjects and produce Disney-like Sensorial Interfaces so that Eloi can get the gist without having to strain their minds or endure boredom. Those Morlocks will go to India and tediously explore a hundred ruins, then come home and build sanitary bug-free versions: highlight films, as it were. This costs a lot, because Morlocks insist on good coffee and first-class airline tickets, but that’s no problem because Eloi like to be dazzled and will gladly pay for it all.

Now I realize that most of this probably sounds snide and bitter to the point of absurdity: your basic snotty intellectual throwing a tantrum about those unlettered philistines. As if I were a self-styled Moses, coming down from the mountain all alone, carrying the stone tablets bearing the Ten Commandments carved in immutable stone–the original command-line interface–and blowing his stack at the weak, unenlightened Hebrews worshipping images.
Not only that, but it sounds like I’m pumping some sort of conspiracy theory.

But that is not where I’m going with this. The situation I describe, here, could be bad, but doesn’t have to be bad and isn’t necessarily bad now:

It simply is the case that we are way too busy, nowadays, to comprehend everything in detail. And it’s better to comprehend it dimly, through an interface, than not at all. . . . My own family–the people I know best–is divided about evenly between people who will probably read this essay and people who almost certainly won’t, and I can’t say for sure that one group is necessarily warmer, happier, or better-adjusted than the other.

I don’t know when to stop quoting this, so go read the whole thing. It’s great. My point is simply that those who complain about the increasingly closed architecture of the iPad, and the decline of the personal computer and its replacement by digital “appliances,” are resisting the dividing of the world into Eloi and Morlocks. Gruber is embracing it.

My own view is that I was an Eloi for most of my life but have spent the last few years trying to learn how to be a Morlock, at least a minor-league Morlock — and that has been, I think, time well spent. I am concerned that too many people will simply accept their Eloi status, will give up on trying to understand the technologies that are shaping their minds and experiences, and will end up in a condition more or less like that of the people on the Axiom in Wall•E. It’s rather ironic, to say the least, that the company doing the most to push us in this direction is the other one driven to its highest standards of excellence by Steve Jobs.