Tav’s Mistake

Neal Stephenson’s Seveneves is a typical Neal Stephenson novel: expansive and nearly constantly geeking out over something. If a character in one of Stephenson’s SF novels is about to get into a spacesuit, you know that’ll take five pages because Stephenson will want to tell you about every single element of the suit’s construction. If a spacecraft needs to rendezvous with a comet, and must get from one orbital plane to another, Stephenson will need to explain every decision and the math underlying it, even if that takes fifty pages — or more. If you like that kind of thing, Seveneves will be the kind of thing you like.

I don’t want to write a review of the novel here, beyond what I’ve just said; instead, I want to call attention to one passage. Setting some of the context for it is going to take a moment, though, so bear with me. (If you want more details, here’s a good review.)

The novel begins with this sentence: “The moon blew up without warning and for no apparent reason.” After the moon breaks into fragments, and the fragments start bumping into each other and breaking into ever smaller fragments, scientists on earth figure out that at a certain point those fragments will become a vast cloud (the White Sky) and then, a day or two later, will fall in flames to earth — so many, and with such devastating force, that the whole earth will become uninhabitable: all living things will die. This event gets named the Hard Rain, and it will continue for millennia. Humanity has only two years to prepare for this event: this involves sending a few people from all the world’s nations up to the International Space Station, which is frantically being expanded to house them. Also sent up is a kind of library of genetic material, in the hope that the diversity of the human race can be replicated at some point in the distant future.

The residents of the ISS become the reality-TV stars for those on earth doomed to die: every Facebook post and tweet scrutinized, every conversation (even the most private) recorded and played back endlessly. Only a handful of these people survive, and as the Hard Rain continues on a devastated earth, their descendants very slowly rebuild civilization — focusing all of their intellectual resources on the vast problems of engineering with which they’re faced as a consequence of the deeply unnatural condition of living in space. This means that, thousands of years after the Hard Rain begins, as they are living in an environment of astonishing technological complexity, they don’t have much in the way of social media.

In the decades before Zero [the day the moon broke apart], the Old Earthers had focused their intelligence on the small and the soft, not the big and the hard, and built a civilization that was puny and crumbling where physical infrastructure was concerned, but astonishingly sophisticated when it came to networked communications and software. The density with which they’d been able to pack transistors onto chips still had not been matched by any fabrication plant now in existence. Their devices could hold more data than anything you could buy today. Their ability to communicate through all sorts of wireless schemes was only now being matched — and that only in densely populated, affluent places like the Great Chain.

But in the intervening centuries, those early textual and visual and aural records of the survivors had been recovered and turned into The Epic — the space-dwelling humans’ equivalent of the Mahabharata, a kind of constant background to the culture, something known to everyone. And when the expanding human culture divides into two distinct groups, the Red and the Blue, the second of those groups becomes especially attentive to one of those pioneers, a journalist named Tavistock Prowse. “Blue, for its part, had made a conscious decision not to repeat what was known as Tav’s Mistake.”

Fair or not, Tavistock Prowse would forever be saddled with blame for having allowed his use of high-frequency social media tools to get the better of his higher faculties. The actions that he had taken at the beginning of the White Sky, when he had fired off a scathing blog post about the loss of the Human Genetic Archive, and his highly critical and alarmist coverage of the Ymir expedition, had been analyzed to death by subsequent historians. Tav had not realized, or perhaps hadn’t considered the implications of the fact, that while writing those blog posts he was being watched and recorded from three different camera angles. This had later made it possible for historians to graph his blink rate, track the wanderings of his eyes around the screen of his laptop, look over his shoulder at the windows that had been open on his screen while he was blogging, and draw up pie charts showing how he had divided his time between playing games, texting friends, browsing Spacebook, watching pornography, eating, drinking, and actually writing his blog. The statistics tended not to paint a very flattering picture. The fact that the blog posts in question had (according to further such analyses) played a seminal role in the Break, and the departure of the Swarm, only focused more obloquy upon the poor man.

But — and this is key to Stephenson’s shrewd point — Tav is a pretty average guy, in the context of the social-media world all of us inhabit:

Anyone who bothered to learn the history of the developed world in the years just before Zero understood perfectly well that Tavistock Prowse had been squarely in the middle of the normal range, as far as his social media habits and attention span had been concerned. But nevertheless, Blues called it Tav’s Mistake. They didn’t want to make it again. Any efforts made by modern consumer-goods manufacturers to produce the kinds of devices and apps that had disordered the brain of Tav were met with the same instinctive pushback as Victorian clergy might have directed against the inventor of a masturbation machine.

So the priorities of space-dwelling humanity are established first by sheer necessity: when you’re trying to create and maintain the technologies necessary to keep people alive in space, there’s no time for working on social apps. But it’s in light of that experience that the Spacers grow incredulous at a society that lets its infrastructure deteriorate and its medical research go underfunded in order to devote its resources of energy, attention, technological innovation, and money to Snapchat, YikYak, and Tinder.

Stephenson has been talking about this for a while now. He calls it “Innovation Starvation”:

My life span encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. In the summer of 2011, at the age of fifty-one — not even old — I watched on a flatscreen as the last space shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where’s my donut-shaped space station? Where’s my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled.

Still, I worry that our inability to match the achievements of the 1960s space program might be symptomatic of a general failure of our society to get big things done. My parents and grandparents witnessed the creation of the automobile, the airplane, nuclear energy, and the computer, to name only a few. Scientists and engineers who came of age during the first half of the twentieth century could look forward to building things that would solve age-old problems, transform the landscape, build the economy, and provide jobs for the burgeoning middle class that was the basis for our stable democracy.

Now? Not so much.

I think Stephenson is talking about something very, very important here. And I want to suggest that the decision to focus on “the small and the soft” instead of “the big and the hard” creates a self-reinforcing momentum. So I’ll end here by quoting something I wrote about this a few months ago:

Self-soothing by Device. I suspect that few will think that addiction to distractive devices could even possibly be related to a cultural lack of ambition, but I genuinely think it’s significant. Truly difficult scientific and technological challenges are almost always surmounted by obsessive people — people who are grabbed by a question that won’t let them go. Such an experience is not comfortable, not pleasant; but it is essential to the perseverance without which no Big Question is ever answered. To judge by the autobiographical accounts of scientific and technological geniuses, there is a real sense in which those Questions force themselves on the people who stand a chance of answering them. But if it is always trivially easy to set the question aside — thanks to a device that you carry with you everywhere you go — can the Question make itself sufficiently present to you that answering it becomes something essential to your well-being? I doubt it.

adventurousness and its enemies, part 2

It’s not just in writing that the social can militate against innovation: it happens in teaching too. Some administrators want teachers to be willing to tweak their assignments, their syllabi, and their use of class time on a weekly, or even daily, basis, in response to student feedback — and then simultaneously insist that they want teachers to be imaginative and innovative.

But these imperatives are inconsistent with one another, because students tend to be quite conservative in such matters; and the more academically successful they are, the more they will demand the familiar and become agitated by anything unfamiliar and therefore unpredictable. It is possible for a good teacher to manage this agitation, but it’s not easy, and it requires you to have the courage of your convictions.
You get this courage, I think, by being willing to persist in choices that make students uncomfortable. Now, some student discomfort results from pedagogical errors, but some of it is quite salutary; the problem is that you can’t usually tell the one from the other until the semester is over — and sometimes not even then. I have made more than my share of boneheaded mistakes in my teaching, but often, over the years, I have had students tell me, “I hated reading that book, but now that I look back on it I’m really glad that you made us read it.” Or, “That assignment terrified me because I had never done anything like it, but it turned out to be one of the best things I ever wrote.” But if I had been forced to confront, and respond to, and alter my syllabus in light of, in-term opposition to my assignments, I don’t know how many of them I would have persisted in. It would have been difficult, that’s for sure.
The belief that constant feedback in the midst of an intellectual project is always, or even usually, good neglects one of the central truths of the life of the mind: that the owl of Minerva flies only, or at least usually, at night.

adventurousness and its enemies

Yesterday I wrote that insofar as writing becomes social, it will become less, not more, adventurous. Here’s why: imagine that James Joyce drafts the first episode of Ulysses and posts it online. What sort of feedback will he receive, especially from people who had read his earlier work? Nothing very commendatory, I assure you. By the time he posts the notoriously impenetrable third episode, with its full immersion in the philosophical meditations of a neurotic hyperintellectual near-Jesuit atheist artist-manqué, the few readers who haven’t jumped ship already will surely be drawing out, and employing, their long knives. Then how will they handle the introduction of Leopold Bloom, and all the attention given to the inner life of this seemingly unremarkable and coarse-minded man? And, much later, the nightmare-fantasia in Nighttown? It doesn’t bear thinking of.

Would Joyce be able to resist the immense pressures from readers to give them something they recognize? Of course he would; he’s James Joyce. He doesn’t give a rip about their incomprehension. (Which is why he wouldn’t post drafts online in the first place, but never mind.) But how many other writers could maintain their commitment to experimentation and innovation amidst a cacophony of voices demanding the familiar? — which is, after all, what the great majority of voices always demand.

the relative value of innovation

Steven Johnson’s Where Good Ideas Come From is primarily about innovation — about the circumstances that favor innovation. Thus, for instance, his praise of cities, because cities enable people who are interested in something to have regular encounters with other people who are interested in the same thing. Proximity means stimulation, friction. Iron sharpens iron, as the Bible says.

All very true, and Johnson makes his case well. But as I read and enjoyed the book, I sometimes found myself asking questions that Johnson doesn’t raise. This is not a criticism of his book — given his subject, he had no obligation to raise these questions — but just an indication of what can happen when you take a step back from a book’s core assumptions. So:

1) Almost all of the innovations Johnson describes are scientific and technological. How many of these are “good” not in the sense of being new and powerful, but in the sense of contributing to general human flourishing? That is, what percentage of genuine innovations would we be better off without?

2) A related question: Can a society be overly innovative? Is it possible to produce more new ideas, discoveries, and technologies than we can healthily incorporate?

3) Under what circumstances does a given society need strategies of conservation and preservation more than it needs innovation?

4) Do the habits of mind (personal and social) that promote innovation consort harmoniously with those that promote conservation and preservation? Can a person, or a society, reconcile these two impulses, or will one dominate at the expense of the other?

Just wondering.

Then, Voyager

Voyager (which I mentioned in a previous post) was one of the coolest companies around in the Nineties; I was a devoted customer. I bought Voyager Expanded Books: The Hitchhiker’s Guide to the Galaxy, John McPhee’s Annals of the Former World (though it may not have had that title then). Books on floppy disk! Annotatable! Variable text sizing! — really, they were amazingly similar to Kindle books, except on my Mac. If I remember rightly, If Monks Had Macs was on floppy too, though at some point Voyager’s products shifted to CD-ROM. I believe the first CD-ROM I ever bought was Voyager’s edition of Art Spiegelman’s Maus: looking through its collection of period documents, commentary by Spiegelman, and taped interviews with his father, I felt that I had entered some brave new world. But trying to read the book on screen was annoying as hell (screens weren’t very large in those days). I bought a “tour of the Louvre,” some kind of “animals of the world” disc featuring a tiny movie with narration by James Earl Jones, and a collection of simply animated folk songs of the world. Only the last captured the attention of my son, then a toddler: he would sit on my lap for an hour watching and listening to the Kookaburra song and “Shalom Aleichem” and some haunting Swedish song that I can’t quite recall now. Good times, good times. Voyager was state of the art then — plus, most of their stuff was written in my beloved HyperCard — and I probably thought that they had identified the future of multimedia communications. What I didn’t know, and probably what Voyager didn’t know either, was that this nascent entity called the World Wide Web was about to change everything. It’s interesting, in light of subsequent history, to note that the one Voyager product line that has survived and thrived is the one that might have seemed least innovative at the time: the Criterion Collection of classic films.

why do I bother?

. . . writing a post about Google’s plans to build their own OS, when I could have just waited for Fake Steve:

Point four: You also may not have noticed, but nobody uses Chrome. I mean think about it. Do you know anyone who uses Chrome? Really? And you know why nobody uses Chrome? Because Chrome is shit. Just utter, utter shit. I mean they’ve got all these big brains at Google and you’d think they could make a decent f***ing browser. Jesus, the morons at Mozilla can do it. But not Google. Nope. They gave it their big best effort and what did they come up with? Chrome. It’s a joke. I mean, literally, we laugh about it, except when Eric is around. But as soon as he leaves the room we all go “Chrome!” and just burst out laughing. Our guys on the Safari team even had special toilet paper made up with a Chrome logo on every sheet. That’s how bad it is. Trying to make an OS out of Chrome is like saying you’re going to turn a Pontiac Aztek into a stretch limousine. I suppose it could be done, but why?

Google’s OS future

There are already a great many blog posts on Google’s announcement of its operating-system-in-progress; probably the most interesting one I’ve seen so far is from John Timmer at Ars Technica. Sample:

From a technological perspective, there appear to be some interesting aspects to rethinking the operating system. For one, by having an extremely narrow focus—bringing up a networking stack and browser as quickly as possible—Chrome OS has the ability to cut down on the hassles related to restarting and hibernating computers. And, since aside from the browser all of the key applications will reside online, security and other software updates won’t happen on the computer itself, which should also improve the user experience. . . . More cryptically, Google also says that the users it views as its target market “don’t want to spend hours configuring their computers to work with every new piece of hardware.” That problem has plagued all OS makers, and none of them have solved it to the satisfaction of all users. It’s possible that Google thinks it can do so, but given its general attitude (everyone should be happy with Web apps), it’s equally possible that the company has decided that people simply don’t need much in the way of peripherals.

And then near the end:

Will all of this work? Apple spent a couple of years trying to convince developers that they should be happy with Web apps, but it's clear that the arrival of native applications has been a significant driver of the iPhone's popularity. Palm appears to be trying something closer to Google's vision with the Pre, but Palm is also offering a native SDK, and it's too early to tell how well its reliance on online services will work out for users. At this stage, it's not even clear if the netbook market will have staying power once the economy picks back up.

We’ve seen already the convenience of web apps — access to the same data from anywhere you have an internet connection, and “pushed” upgrades that “just happen” — and we’ve seen some of the problems: catastrophic data loss (e.g. the ma.gnolia disaster), privacy concerns, lack of offline access, the limited feature sets of web apps in comparison to their desktop counterparts. Google’s approach to these problems seems to be to reassure us about the first, hope that we ignore the second, fix the third, and hope that convenience trumps the fourth. My guess is that ultimately they will succeed in all these endeavors, at least for a great many consumers.

free as in threatening

Very likely most of you who are interested have already seen this stuff, but Chris Anderson’s new book, Free: The Future of a Radical Price, has been getting some thoughtful attention, most notably from Malcolm Gladwell — but this review from Drake Bennett at the Boston Globe is interesting too. My short take on all this — I’m not sure whether I’ll find time to produce a longer one — is that Anderson’s critics seem to win on points, but then, I haven’t read the book yet. I probably will, though, since it’s going to be, um, free. For a while anyway. But just one comment for now: Anderson’s response to Gladwell is titled “Dear Malcolm: Why so threatened?” and, you know, I hate that line. It’s one of the more common and more annoying forms of what C. S. Lewis called Bulverism. “You don’t like my book, but since my book is obviously excellent” — see Alain de Botton — “you must be lying or malicious or suffering from some psychological shortcoming. Let’s see — I’d rather not think you are lying or malicious, so let me assume . . . yes! — let me assume that you are threatened by my unassailable arguments. It is weakness on your part, not malice, that makes you say these obviously false things.”

O’Reilly and the Wave

Tim O’Reilly has a post up today about Google Wave, the new project-in-development by Jens and Lars Rasmussen, the primary creators of Google Maps. According to O’Reilly, Lars describes the project in this way: “We set out to answer the question: What would email look like if we set out to invent it today?” O’Reilly continues,

In answering the question, Jens, Lars, and team re-imagined email and instant-messaging in a connected world, a world in which messages no longer need to be sent from one place to another, but could become a conversation in the cloud. Effectively, a message (a wave) is a shared communications space with elements drawn from email, instant messaging, social networking, and even wikis.

It’s obvious that O’Reilly is pretty chuffed about Google Wave. He thinks it’s great that in Wave “conversations become shared documents.” “I love the way Wave doesn’t just build on what went before but starts over. In demonstrating the power of the shared, real-time information space, Jens and Lars show a keen understanding of how the cloud changes applications.” Okay. I guess Wave could be pretty interesting, though to me it doesn’t seem as game-changing and world-changing as O’Reilly and the Rasmussens claim. But we’ll see how it works out. My larger concern is this: O’Reilly is among the leaders of a group of technophiles and technocrats whose one concern with every information technology is: How can this be more social? The primary purpose of Wave seems to be to make communications networks more extensive, to create more and more and more nodes. But there are other things communications technologies can do besides generating more points of intersection. I tend to think that among email, IM, Facebook, Twitter, FriendFeed, shared bookmarks on Delicious, shared RSS feeds on Google Reader, and [insert your favorite social technology here] we already have enough nodes. We already have enough shared information. Instead of asking how our existing information technologies can do more and more of what they already do well, why don’t we ask what they’re not doing well — or at all?