starting from zero!

Alphabet’s Sidewalk Labs project is “reimagining cities to improve quality of life.” But what might “quality of life” actually mean? As Emily Badger notes in a recent essay about the tech visionaries of the urban, 

It’s … unclear what you’d optimize an entire city for. Technologists describe noble aspirations like “human flourishing” or “quality of life.” But noble goals come into conflict within cities. You could optimize for affordable housing, but then you may create a more crowded city than many residents want. You could design a city so that every home receives sunlight (an idea the Chinese tried). But that might mean the city isn’t dense enough to support diverse restaurants and mass transit. 

It’s also not clear from her essay whether the Sidewalk Labs people are genuinely thinking about these issues. Badger quotes the CEO, Dan Doctoroff: 

“The smart city movement as a whole has been disappointing in part because it is hard to get stuff done in a traditional urban environment,” Mr. Doctoroff said. “On the other hand, if you’re completely disrespectful of the urbanist tradition, I don’t think it’s particularly replicable. And it’s probably pretty naïve.”

What counts as being “disrespectful of the urbanist tradition”? (Also, is there only one urbanist tradition?) What is the “it” that isn’t particularly replicable? I find myself wishing that Badger had pressed for clarification here. Because it sounds like this is going to be a typical Google strategy: find a sandbox — in this case in Toronto, “800 acres of underused waterfront that could be reimagined as a neighborhood, if not a full metropolis, with driverless cars, prefabricated construction and underground channels for robot deliveries and trash collection” — and set the ship’s course straight for Utopia. 
In other words: Silicon Valley’s reincarnation of the Bauhaus. From Tom Wolfe’s not-always-fair-but-always-funny From Bauhaus to Our House:

The young architects and artists who came to the Bauhaus to live and study and learn from the Silver Prince [Walter Gropius] talked about “starting from zero.” One heard the phrase all the time: “starting from zero.” Gropius gave his backing to any experiment they cared to make, so long as it was in the name of a clean and pure future. Even new religions such as Mazdaznan. Even health-food regimens. During one stretch at Weimar the Bauhaus diet consisted entirely of a mush of fresh vegetables. It was so bland and fibrous they had to keep adding garlic in order to create any taste at all. Gropius’ wife at the time was Alma Mahler, formerly Mrs. Gustav Mahler, the first and foremost of that marvelous twentieth-century species, the Art Widow. The historians tell us, she remarked years later, that the hallmarks of the Bauhaus style were glass corners, flat roofs, honest materials, and expressed structure. But she, Alma Mahler Gropius Werfel — she had since added the poet Franz Werfel to the skein — could assure you that the most unforgettable characteristic of the Bauhaus style was “garlic on the breath.” Nevertheless! — how pure, how clean, how glorious it was to be … starting from zero!
Marcel Breuer, Ludwig Mies van der Rohe, László Moholy-Nagy, Herbert Bayer, Henry van de Velde — all were teachers at the Bauhaus at one time or another, along with painters like Klee and Josef Albers. Albers taught the famous Bauhaus Vorkurs, or introductory course. Albers would walk into the room and deposit a pile of newspapers on the table and tell the students he would return in one hour. They were to turn the pieces of newspaper into works of art in the interim. When he returned, he would find Gothic castles made of newspaper, yachts made of newspaper, airplanes, busts, birds, train terminals, amazing things. But there would always be some student, a photographer or a glassblower, who would simply have taken a piece of newspaper and folded it once and propped it up like a tent and let it go at that. Albers would pick up the cathedral and the airplane and say: “These were meant to be made of stone or metal — not newspaper.” Then he would pick up the photographer’s absentminded tent and say: “But this! — this makes use of the soul of paper. Paper can fold without breaking. Paper has tensile strength, and a vast area can be supported by these two fine edges. This! — is a work of art in paper.” And every cortex in the room would spin out. So simple! So beautiful … It was as if light had been let into one’s dim brain for the first time. My God! — starting from zero!

But those guys failed because they didn’t know what to do when they got to zero. We’re cool, though, because we have A/B testing now.

Should Computers Replace Physicians?

In 2012, at the Health Innovation Summit in San Francisco, Vinod Khosla, Sun Microsystems co-founder and venture capitalist, declared: “Health care is like witchcraft and just based on tradition.” Biased and fallible physicians, he continued, don’t use enough science or data — and thus machines will someday rightly replace 80 percent of doctors. Earlier that same year, Khosla had penned an article for TechCrunch in which he had made a similar point. With the capacity to store and analyze every single biological detail, computers would soon outperform human doctors. He writes, “there are three thousand or more metabolic pathways, I was once told, in the human body and they impact each other in very complex ways. These tasks are perfect for a computer to model as ‘systems biology’ researchers are trying to do.” In Khosla’s vision of the future, by around 2022 he expects he will “be able to ask Siri’s great great grandchild (Version 9.0?) for an opinion far more accurate than the one I get today from the average physician.” In May 2014, Khosla reiterated his assertion that computers will replace most doctors. “Humans are not good when 500 variables affect a disease. We can handle three to five to seven, maybe,” he said. “We are guided too much by opinions, not by statistical science.”

The dream of replacing doctors with advanced artificial intelligence is unsurprising, as talk of robots replacing human workers in various fields — from eldercare to taxi driving — has become common. But is Vinod Khosla right about medicine? Will we soon walk into clinics and be seen by robot diagnosticians who will cull our health information, evaluate our symptoms, and prescribe a treatment? Whether or not the technology will exist is difficult to predict, but we are certainly on our way there. The IBM supercomputer Watson is already being used in some hospitals to help diagnose cancer and recommend treatment, which it does by sifting through millions of patient records and producing treatment options based on previous outcomes. Analysts at Memorial Sloan Kettering Cancer Center are training Watson “to extract and interpret physician notes, lab results, and clinical research.” All this is awe-inspiring. Let us generously assume, then, for a moment, that the technology for Khosla’s future will be available and that all knowledge about and treatment options for medical problems will be readily analyzable by a computer within the next decade or so. If this is the future, why shouldn’t physicians be replaced?

There are several errors in Khosla’s way of thinking about this issue. First of all, modern health care is not “like witchcraft.” Academic physicians, for example, use evidence-based medicine whenever it is available. And when it isn’t, they try to reason through a problem using what biologists know about disease presentation, physiology, and pharmacology.

Moreover, Khosla mischaracterizes the doctor-patient interaction. For Khosla, a visit to the doctor involves “friendly banter” and questions about symptoms. The doctor then assesses these symptoms, “hunts around … for clues as to their source, provides the diagnosis, writes a prescription, and sends you off.” In Khosla’s estimation the entire visit “should take no more than 15 minutes and usually takes probably less than that.” But the kind of visit Khosla writes about is an urgent care visit wherein quick and minor issues are addressed: strep throat or a small laceration requiring a stitch or two. Yes, these visits can take fifteen minutes, but so much of medicine does not involve these brief interactions. Consider the diabetic patient who has poorly controlled blood sugars, putting her at risk for stroke, heart attack, peripheral nerve destruction, and kidney failure, but who hasn’t been taking her medications. Or consider a patient addicted to cigarettes or on the verge of alcoholism. Consider the patient with Parkinson’s disease who wonders how this new diagnosis will affect his life. And what about the worried parents who want antibiotics for their child even though their child has a viral infection and not a bacterial infection? I can go on and on with scenarios like these, which occur daily, if not hourly, in nearly every medical specialty. In fact, fifteen-minute visits are the exception in the kind of medicine most physicians need to practice. One cannot convince an alcoholic to give up alcohol, get a diabetic patient to take her medications, or teach a Spanish-speaking patient to take his pills correctly in fifteen minutes. In addition, all this is impossible without “friendly banter.”

As Dr. Danielle Ofri, an associate professor of medicine at the New York University School of Medicine, wrote in a New York Times blog post, compliance with blood pressure or diabetes medications is extremely difficult, involving multiple factors:

Besides obtaining five prescriptions and getting to the pharmacy to fill them (and that’s assuming no hassles with the insurance company, and that the patient actually has insurance), the patient would also be expected to cut down on salt and fat at each meal, exercise three or four times per week, make it to doctors’ appointments, get blood tests before each appointment, check blood sugar, get flu shots — on top of remembering to take the morning pills and then the evening pills each and every day.

Added up, that’s more than 3,000 behaviors to attend to, each year, to be truly adherent to all of the doctor’s recommendations.

Because of the difficulties involved in getting a patient to comply with a complex treatment plan, Dr. John Steiner argues in an article in the Annals of Internal Medicine that in order to be effective we must address individual, social, and environmental factors:

Counseling with a trusted clinician needs to be complemented by outreach interventions and removal of structural and organizational barriers. …[F]ront-line clinicians, interdisciplinary teams, organizational leaders, and policymakers will need to coordinate efforts in ways that exemplify the underlying principles of health care reform.

Therefore, the interaction between physician and patient cannot be wrapped up in fifteen minutes. No, the relationship involves, at minimum, a negotiation between what the doctor thinks is right and what the patient is capable of and wants. To use the example of the diabetic patient, perhaps the first step is to get the patient to give up soda for water, which will help lower blood sugars, or to start walking instead of driving, or to take the stairs instead of the elevator. We make small suggestions and patients make small compromises in order to change for the better — a negotiation that helps patients improve in a way that is admittedly slow, but necessarily slow. This requires the kind of give-and-take that we naturally have in relationships with other people, but not with computers.

This kind of interaction also necessitates trust — trust regarding illicit drugs, alcohol, tobacco, and sexual activity, all of which can contribute to or cause certain medical problems. And a computer may ask the questions but cannot earn a patient’s confidence. After all, these kinds of secrets can only be exchanged between two human beings. David Eagleman, a neuroscientist at the Baylor College of Medicine, writes in his book Incognito that when we reveal a secret, we almost always feel that “the receiver of the secrets has to be human.” He wonders why, for example, “telling a wall, a lizard or a goat your secrets is much less satisfying.” As patients, we long for that human reception and understanding that a physician can provide and use to our advantage in coming up with a diagnosis.

Khosla neglects other elements of medical care, too. Implicit in his comments is the idea that the patient is a consumer and the doctor a salesman. In this setting, the patient buys health in the same way that he or she buys corn on the cob. One doesn’t need friendly banter or a packet of paperwork to get the best corn, only a short visit to the grocery store.

And yet, issues of health are far more serious than buying produce. Let’s take the example of a mother who brings her child in for ADHD medication, a scenario I’ve seen multiple times. “My child has ADHD,” she says. “He needs Ritalin to help his symptoms.” In a consumer-provider scenario, the doctor gives the mother Ritalin. This is what she wants; she is paying for the visit; the customer is king. But someone must explain to the mother what ADHD is and whether her child actually has this disorder. There must be a conversation about the diagnosis, the medication, and its side effects, because the consequences of these are lifelong. Machines would have to be more than just clerks. In many instances, they would have to convince the parent that, perhaps, her child does not have ADHD; that she should hold off on medications and schedule a follow-up to see how the child is doing. Because the exchange of goods in medicine is so unique, consequential, and rife with emotion, it is not just a consumer-cashier relationship. Thus computers, no matter how efficient, are ill-suited to this task.

Khosla also misunderstands certain treatments, which are directly based on human interactions. Take psychiatry, for example. We know that cognitive behavioral therapy and medication combined are the best treatment for a disease like depression. And cognitive behavioral therapy has at its core the relationship between the psychiatrist or therapist and the patient, who together work through a depressed patient’s illness during therapy sessions. In cognitive behavioral therapy, private aspects of life are discussed and comfort is offered — human expressions and emotions are critical for this mode of treatment.

To be sure, Khosla is right about quite a lot. Yes, technology ought to make certain aspects of the patient visit more efficient. Our vital signs may one day easily be taken with the help of our mobile phones, as he suggests, which would save time checking in to a clinic and could help give physicians constant and accurate measurements of blood pressure in hypertensive patients or EKG recordings in patients with heart disease. Technology of this sort could also indicate when an emergency is happening or how a patient ought to alter medication doses.

Furthermore, Khosla correctly identifies some of the limitations of human physicians: “We cannot expect our doctor to be able to remember everything from medical school twenty years ago or memorize the whole Physicians Desk Reference (PDR) and to know everything from the latest research, and so on and so forth.” True, the amount of information accumulated by modern medical research is beyond the capability of any human being to know, and doctors do make mistakes because they forget or are not up on the latest research. In a 2002 study in the Journal of Neurology, Neurosurgery and Psychiatry, investigators found that 15 percent of patients with a diagnosis of Parkinson’s disease do not necessarily fulfill criteria for the disease and 20 percent of patients with Parkinson’s disease who have already seen medical providers have not been diagnosed. These are large percentages that have profound implications for people’s lives. And this is exactly why physicians must use technologies like Watson to do a better job, not necessarily abdicate the job altogether. Most of us already carry smartphones or tablets on rounds, to look up disease processes or confirm our choice of antibiotic.

Lastly, Khosla wisely points out that physician bias can negatively affect a patient’s treatment. As he writes, “a physician’s bias makes all these personal decisions for patients in a majority of the cases without the patient (or sometimes even the physician) realizing what ‘preferences’ are being incorporated into their recommendations. The situation gets worse the less educated or economically less well-off the patient is, such as in developing countries, in my estimation.” Undoubtedly, this dilemma is real. I have spent many of my posts on this blog writing about the issue of remaining unbiased or level-headed in the face of difficult patient interactions. A study published in Obesity in 2013 found that physicians “demonstrated less emotional rapport with overweight and obese patients … than for normal weight patients,” which may “weaken the patient-physician relationship, diminish patients’ adherence to recommendations, and decrease the effectiveness of behavior change counseling.” And as Tara Parker-Pope remarks in the New York Times, “studies show that patients are far more likely to follow a doctor’s advice and to have a better health outcome when they believe their doctor empathizes with their plight.” If bias exists in lieu of empathy, it makes sense that patients have worse outcomes. What makes doctors most valuable, their humanity, can have negative consequences.

But people can learn from studies, alter their behavior, and remain human. Computers or robots can learn from studies and alter their behavior, but they will always be robots. They will never earn the trust of the chronically ill drug addict. They will never be able to negotiate with the most difficult patients who demand specific treatments but may not be entirely sure why. An ideal system would not be one built solely on fallible human doctors but one in which new tools significantly augment human physicians’ skill and knowledge. A measured combination of these will put all the information at a doctor’s fingertips while keeping the art of medicine alive.

from coal to pixels

This is the Widows Creek power plant on the Tennessee River in Alabama, soon to become a Google data center. Or Google will use the site, anyway — I’m not sure about the future of the buildings. Big chunks of riverfront land are highly desirable to any company that processes a lot of data, because the water can be circulated through the center to help cool the machines that we overheat with photos and videos.

But there are enormous coal plants throughout America that can’t be so readily repurposed, and the creativity devoted to remaking them is quite remarkable: here’s an MIT Technology Review post on the subject.

art as industrial lubricant

Holy cow, does Nick Carr pin this one to the wall. Google says, “At any moment in your day, Google Play Music has whatever you need music for — from working, to working out, to working it on the dance floor — and gives you curated radio stations to make whatever you’re doing better. Our team of music experts, including the folks who created Songza, crafts each station song by song so you don’t have to.”

Nick replies:

This is the democratization of the Muzak philosophy. Music becomes an input, a factor of production. Listening to music is not itself an “activity” — music isn’t an end in itself — but rather an enhancer of other activities, each of which must be clearly demarcated….  

Once you accept that music is an input, a factor of production, you’ll naturally seek to minimize the cost and effort required to acquire the input. And since music is “context” rather than “core,” to borrow Geoff Moore’s famous categorization of business inputs, simple economics would dictate that you outsource the supply of music rather than invest personal resources — time, money, attention, passion — in supplying it yourself. You should, as Google suggests, look to a “team of music experts” to “craft” your musical inputs, “song by song,” so “you don’t have to.” To choose one’s own songs, or even to develop the personal taste in music required to choose one’s own songs, would be wasted labor, a distraction from the series of essential jobs that give structure and value to your days. 

Art is an industrial lubricant that, by reducing the friction from activities, makes for more productive lives.

If music be the lube of work, play on — and we’ll be Getting Things Done.

the Baconians of Mountain View

One small cause of satisfaction for me in the past few years has been the decline of the use of the word “postmodern” as a kind of all-purpose descriptor of anything the speaker thinks of as recent and different. The vagueness of the term has always bothered me, but even more the lack of historical awareness embedded in most uses of it. I have regularly told my students that if they pointed to a recent statement that they thought of as postmodern I could almost certainly find a close analogue of it from a sixteenth-century writer (often enough Montaigne). To a great but unacknowledged degree, we are still living in the fallout from debates, especially debates about knowledge, that arose more than four hundred years ago.

One example will do for now. In what became a famous case in the design world, five years ago Doug Bowman left Google and explained why:

When I joined Google as its first visual designer, the company was already seven years old. Seven years is a long time to run a company without a classically trained designer. Google had plenty of designers on staff then, but most of them had backgrounds in CS or HCI. And none of them were in high-up, respected leadership positions. Without a person at (or near) the helm who thoroughly understands the principles and elements of Design, a company eventually runs out of reasons for design decisions. With every new design decision, critics cry foul. Without conviction, doubt creeps in. Instincts fail. “Is this the right move?” When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data. Data in your favor? Ok, launch it. Data shows negative effects? Back to the drawing board. And that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions.

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

What Bowman thought of as a bug — “data [was] paralyzing the company and preventing it from making any daring design decisions” — the leadership at Google surely thought of as a feature. What’s the value of “daring design decisions”? We’re trying to get clicks here, and we can find out how to achieve that.
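
To see how completely the procedure can be mechanized, here is a minimal sketch of the “just look at the data” decision rule Bowman describes. It is a hypothetical illustration only, not Google’s actual tooling, and the variant names and numbers below are invented. Notice that nothing in it requires, or even permits, a designer’s judgment:

```python
# Hypothetical sketch of the data-driven design decision described above:
# show each candidate variant (say, one of the 41 shades of blue) to a
# sample of users, then ship whichever shade drew the most clicks.

def pick_winner(results):
    """results maps variant name -> (clicks, impressions).
    Returns the variant with the highest observed click-through rate."""
    return max(results, key=lambda name: results[name][0] / results[name][1])

# Invented numbers for three of the candidate shades:
results = {
    "blue_#2200CC": (3210, 100000),
    "blue_#2233CC": (3340, 100000),
    "blue_#2266CC": (3295, 100000),
}
print(pick_winner(results))  # -> blue_#2233CC
```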

With that story in mind, let’s turn to Michael Oakeshott’s great essay “Rationalism in Politics” and his account therein of Francis Bacon’s great project for setting the quest for knowledge on a secure footing:

The Novum Organum begins with a diagnosis of the intellectual situation. What is lacking is a clear perception of the nature of certainty and an adequate means of achieving it. ‘There remains,’ says Bacon, ‘but one course for the recovery of a sound and healthy condition — namely, that the entire work of understanding be commenced afresh, and the mind itself be from the very outset not left to take its own course, but guided at every step’. What is required is a ‘sure plan’, a new ‘way’ of understanding, an ‘art’ or ‘method’ of inquiry, an ‘instrument’ which (like the mechanical aids men use to increase the effectiveness of their natural strength) shall supplement the weakness of the natural reason: in short, what is required is a formulated technique of inquiry….

The art of research which Bacon recommends has three main characteristics. First, it is a set of rules; it is a true technique in that it can be formulated as a precise set of directions which can be learned by heart. Secondly, it is a set of rules whose application is purely mechanical; it is a true technique because it does not require for its use any knowledge or intelligence not given in the technique itself. Bacon is explicit on this point. The business of interpreting nature is ‘to be done as if by machinery’, ‘the strength and excellence of the wit (of the inquirer) has little to do with the matter’, the new method ‘places all wits and understandings nearly on a level’. Thirdly, it is a set of rules of universal application; it is a true technique in that it is an instrument of inquiry indifferent to the subject-matter of the inquiry.

It is hard to imagine a more precise and accurate description of the thinking of the Baconians of Mountain View. They didn’t want Bowman’s taste or experience. He might have been the most gifted designer in the world, but so what? “The strength and excellence of the wit (of the inquirer) has little to do with the matter.” Instead, decisions are “to be done as if by machinery” — no, strike that, they are to be done precisely by machinery and only by machinery. Moreover, there is no difference in technique between a design decision and any other kind of decision: the method of letting the data rule “is an instrument of inquiry indifferent to the subject-matter of the inquiry.”

Oakeshott’s essay provides a capsule history of the rise of Rationalism as a universal method of inquiry and action. It focuses largely on Bacon and Descartes as the creators of the Rationalist frame of mind and on their (less imaginative and creative) successors. It turns out that an understanding of seventeenth-century European thought is an indispensable aid to understanding the technocracy of the twenty-first-century world.

The Taco-larity is Near

Folks, prepare yourselves for the yummy, inevitable, yummy taco-pocalypse. So said the news last week, anyway, which saw an exponential growth in taco-related headlines. Three items:

1. A new startup called TacoCopter has launched in the San Francisco area. It beats robotic swords into ploughshares, turning unmanned drones into airborne taco-delivery vehicles. Tacos are choppered in to your precise coordinates, having been ordered — yes — from your smartphone.

2. Google’s self-driving car is turning from project into practical reality. Google last week released a video of its car being used by a man with near-total vision loss to get around. His destinations? The dry cleaner and Taco Bell.

3. But beware: tacos may not always be used for good. In response to the arrest of four police officers in East Haven, Connecticut on charges of harassment and intimidation of Latino businesspeople, the mayor of the town was asked by a local reporter what he was going to do for the Latino community. His response: “I might have tacos when I go home; I’m not quite sure yet.” Watch the comment, followed by four minutes of exquisitely awkward backpedaling and attempts to celebrate all colors of the rainbow. It puts Michael Scott to shame.

Okay, so the last of those isn’t really about the future. Also, it turns out the taco-copter was a hoax. Well, phoo. Scientific progress goes boink.

The Revolution Will Be Advertisement

More on augmented reality, from Jeff Bercovici at Forbes:

So far, Google has only scratched the surface of the advertising potential here. That makes sense: How many times in your life are you actually going to point your phone at an ad?
Google glasses could change all that. Now the user doesn’t have to point his phone at an ad to activate the AR [augmented reality] layer — he only has to look at it. Combine that with location data and all the other rich targeting information Google has at its disposal and you’re talking about potentially the most valuable advertising medium ever invented.
Imagine it: You’re walking home from work. You put on your Google Glasses to check your email and notice that the sushi place across the street, where you frequently go for takeout, is highlighted. In the window is a glowing icon that lets you know there’s a discount available. A tiny tilt of your head brings up the offer: 40% off any purchase plus free edamame. With a bit more tilting and nodding, you place your order. By the time you cross the street, it’s ready for you. Would you like to pay via Google Wallet?
You nod.
In unrelated news, Ben Goertzel thinks that corporations “are directly and clearly opposed to the Singularity.”

The Future Gets In Your Eyes

The Interwebs is all atwitter over yesterday’s report in the New York Times that by the end of the year, Google will be selling “a pair of Google-made glasses that will be able to stream information to the wearer’s eyeballs in real time.” And just imagine if all those a-tweets could be beamed right into your eyeballs!

Speaking of the future, Nevada has become the first state to legalize and regulate self-driving cars. These are two fronts in the same technological push to fundamentally (or impactfully, as transhumanists say) transform the way we are present in the physical world — a push we can already see underway with the rise of GPS and location-awareness technologies.

For interested readers, I meditated on this transformation in an essay for The New Atlantis last year called “GPS and the End of the Road.” Here’s a relevant snippet:

The highest-end smartphones come enabled not only with GPS, but with video cameras, and with sensors that enable the phone to know where it is pointing. Combining these abilities, augmented-reality applications allow you to hold up your smartphone to, say, an unfamiliar city street, of which it will show you a live video feed, with hovering information boxes over points of interest showing you customer reviews, historical data, photographs, coupons, advertisements, and the like. One such augmented-reality app is called Layar because it allows you to see reality “layered” over, either with fanciful images or with helpful bubbles of information telling you what to see and why. Proposals are in the works to display such information on glasses or contact lenses, eliminating even the burden of holding up one’s arm.

The great and simple promise of these technologies is to deliver to us the goods of finding things in the world in the most efficient way possible. After Brad Templeton: their feature is to find the most interesting things in the world, and to explain why they are interesting, while eliminating the apparent bug that most of the things we encounter seem pretty boring. Moreover, location awareness and augmented reality, paired with GPS navigation, transmit us to these interesting places with the minimum possible requirement of effort and attention paid to the boring places that intervene. We can get where we’re going, and see what we want to see, without having to look….

Augmented reality image via Google via NYT

How to Solve the Future

Google has set up a new program called Solve for X. In the clear and concise words of the site, Solve for X

is a place to hear and discuss radical technology ideas for solving global problems. Radical in the sense that the solutions could help billions of people. Radical in the sense that the audaciousness of the proposals makes them sound like science fiction. And radical in the sense that there is some real technology breakthrough on the horizon to give us all hope that these ideas could really be brought to life.

The site already has posted a number of videos that are forays into the “moonshot” thinking the program hopes to encourage, including one typically intelligent and provocative talk by author Neal Stephenson.

Those of us who follow the world of transhumanism may be a bit surprised to find that anyone thinks there is a lack of audacious and radical thinking about the human future in the world today. Stephenson is a bit more cautious in his talk, arguing instead that at the moment there seems to be a lack of effort to do big things, contrasting unfavorably the period from around 1968 to the present with the extraordinary transformations of human thinking and abilities that took place between 1900 (the dawn of aviation) and the Moon landing.

(It’s not quite clear why Stephenson picks 1968 as the dividing year, instead of the year of the first moon landing (1969), or the last (1972). Perhaps it makes sense if you consider that the point at which it was clear we were going to beat the Russians to the Moon was the point at which enthusiasm for efforts beyond that largely evaporated among the people who held the purse strings — meaning American lawmakers as well as the public.)

At any rate, Stephenson attributes at least some of that lack of effort to a paucity of imagination. He thus calls for deliberate efforts by science fiction writers to cooperate with technically minded people in writing what could be inspiring visions of the future for the rising generation.

There is a good deal that might be said about his argument, and perhaps I will write more about it in later posts.

For the moment, I would just like to note that, even accepting his premise about the paucity of big thinking and big effort today, Stephenson’s prescription for remedying it is odd, considering his own accomplishments. It’s not as if the nanotechnology world of his brilliant novel The Diamond Age: Or, a Young Lady’s Illustrated Primer is an uninspiring dead letter.

The same of course goes for many of the futuristic promises of classic science fiction, but in Diamond Age, Stephenson presented his science fiction world with an unusual moral realism that one might have thought would make it all the more inspiring to all but the most simplistically inclined. Perhaps it is modesty that prevented him from putting forward his own existing work as a model.

Yet by ignoring what he achieved in Diamond Age, Stephenson also overlooks another way of looking at the problem he sets up in the achievement gap between 1900–1968 and 1968–now. For the book is premised in part on the belief that history exhibits pendulum swings. Should we really be surprised if a time of revolution is followed by a period of reaction and/or consolidation?

Believers in the Singularity would, of course, be surprised if this were the case. But they are attempting to suggest the existence of a technological determinism that Stephenson wisely avoided in Diamond Age. But he was swimming against the tide; it is striking just how much of the science fiction of the first two-thirds of the twentieth century was driven by a sense that the future would be molded by some kind of necessity, often catastrophic.

For example, overpopulation would force huge urban conglomerations on us, or would be the driver for space colonization. Or the increasing violence of modern warfare would be the occasion for rebuilding the world physically or politically or both.

Perhaps we are living in a time of (relative) pause because the realization is dawning that we are not in the grip of historical forces beyond our control. It would take some time to absorb that sobering possibility. It is not too early to attend to the lesson drawn so well in Diamond Age: that at some point the question of what should be done becomes more important than the question of what can be done.

on the plusses and minuses of a social backbone

I don’t get this article by Edd Dumbill. He wants to argue that “The launch of Google+ is the beginning of a fundamental change on the web. A change that will tear down silos, empower users and create opportunities to take software and collaboration to new levels.” He tries to support that bold claim by arguing that Google+ is a big step towards “interoperability”:

Currently, we have all [our social] groups siloed. Because we have many different contexts and levels of intimacy with people in these groups, we’re inclined to use different systems to interact with them. Facebook for gaming, friends and family. LinkedIn for customers, recruiters, sales prospects. Twitter for friends and celebrities. And so on into specialist communities: Instagram and Flickr, Yammer or Salesforce Chatter for co-workers.

The situation is reminiscent of electronic mail before it became standardized. Differing semi-interoperable systems, many as walled gardens. Business plans predicated on somehow “owning” the social graph. The social software scene is filled with systems that assume a closed world, making them more easily managed as businesses, but ultimately providing for an uncomfortable interface with the reality of user need.

An interoperable email system created widespread benefit, and permitted many ecosystems to emerge on top of it, both formal and ad-hoc. Email reduced distance and time between people, enabling rapid iteration of ideas, collaboration and community formation. For example, it’s hard to imagine the open source revolution without email.

Dumbill seems not to have noticed that the various services he mentions, from Facebook to Twitter to Instagram, are already built around an “interoperable system”: it’s called the World Wide Web. Those aren’t incompatible platforms; they are merely services you have to sign up for — just like Google.

Ah, but, “Though Google+ is the work of one company, there are good reasons to herald it as the start of a commodity social layer for the Internet. Google decided to make Google+ be part of the web and not a walled garden.” Well, yes and no. You can see Google+ posts online, if the poster chooses to make them public, but you can’t participate in the conversation without signing up for the service. In other words: just like Facebook, Twitter, Flickr, and so on.

In the end, it seems to me that Dumbill is merely saying that if all of us decide to share all our information with just one service, we’ll have a fantastic “social backbone” for our online lives. And that may be true. Now, can we stop to ask whether there may be any costs to that decision?