Google has set up a new program called Solve for X. In the clear and concise words of the site, Solve for X "is a place to hear and discuss radical technology ideas for solving global problems. Radical in the sense that the solutions could help billions of people. Radical in the sense that the audaciousness of the proposals makes them sound like science fiction. And radical in the sense that there is some real technology breakthrough on the horizon to give us all hope that these ideas could really be brought to life."
The site has already posted a number of videos that are forays into the "moonshot" thinking the program hopes to encourage, including one typically intelligent and provocative talk by author Neal Stephenson.

Those of us who follow the world of transhumanism may be a bit surprised to find that anyone thinks there is a lack of audacious and radical thinking about the human future in the world today. Stephenson is a bit more cautious in his talk, arguing instead that at the moment there seems to be a lack of effort to do big things, contrasting the period from around 1968 to the present unfavorably with the extraordinary transformations of human thinking and abilities that took place between 1900 (the dawn of aviation) and the Moon landing.

(It is not quite clear why Stephenson picks 1968 as the dividing year, rather than the year of the first Moon landing, 1969, or of the last, 1972. Perhaps it makes sense if you consider that the point at which it was clear we were going to beat the Russians to the Moon was the point at which enthusiasm for efforts beyond that largely evaporated among the people who held the purse strings, meaning American lawmakers as well as the public.)

At any rate, Stephenson attributes at least some of that lack of effort to a paucity of imagination. He thus calls for deliberate efforts by science fiction writers to cooperate with technically minded people in writing what could be inspiring visions of the future for the rising generation.

There is a good deal that might be said about his argument, and perhaps I will write more about it in later posts. For the moment, I would just like to note that, even accepting his premise about the paucity of big thinking and big effort today, Stephenson's prescription for remedying it is odd, considering his own accomplishments.
It is not as if the nanotechnology world of his brilliant novel The Diamond Age: Or, a Young Lady's Illustrated Primer is an uninspiring dead letter. The same of course goes for many of the futuristic promises of classic science fiction, but in Diamond Age, Stephenson presented his science fiction world with an unusual moral realism that one might have thought would make it all the more inspiring to all but the most simplistically inclined. Perhaps it is modesty that prevented him from putting forward his own existing work as a model.

Yet by ignoring what he achieved in Diamond Age, Stephenson also overlooks another way of looking at the problem he sets up in the achievement gap between 1900–1968 and 1968–now. For the book is premised in part on the belief that history exhibits pendulum swings. Should we really be surprised if a time of revolution is followed by a period of reaction and consolidation?

Believers in the Singularity would, of course, be surprised if this were the case. But they are attempting to suggest the existence of a technological determinism that Stephenson wisely avoided in Diamond Age. In avoiding it he was swimming against the tide; it is striking just how much of the science fiction of the first two-thirds of the twentieth century was driven by a sense that the future would be molded by some kind of necessity, often catastrophic. For example, overpopulation would force huge urban conglomerations on us, or would be the driver for space colonization. Or the increasing violence of modern warfare would be the occasion for rebuilding the world physically or politically or both.

Perhaps we are living in a time of (relative) pause because the realization is dawning that we are not in the grip of historical forces beyond our control. It would take some time to absorb that sobering possibility.
It is not too early to attend to the lesson drawn so well in Diamond Age: that at some point the question of what should be done becomes more important than the question of what can be done.
Astronaut Tracy Caldwell Dyson looks through a window of the International Space Station. Image courtesy NASA.
When I was growing up, this image was science fiction. Even now it is not at all clear what kind of future the accomplishments it represents will have. But here is an illustration of an extension of human ability and experience which represents a fuller realization of our inherent human potential. That is surely part of what gives the photograph its beauty. It would be a shame if such a promising start were allowed to go nowhere.
Robotic space exploration is better than no space exploration at all, and the Mars rovers have proven to be particularly remarkable machines. Those who made and manage them deserve to be proud. The latest news is that the Opportunity rover has, after seven years, traveled a total of a little over twenty miles, some fifty times its design distance.
That’s quite an impressive accomplishment, but it does help to suggest why manned exploration is likely to have real advantages over robotic vehicles (in the present case, a vehicle that is in fact manned at a distance) for some time to come. Let’s imagine that Opportunity, rather than a bunch of Englishmen, had arrived at Jamestown in 1607 and set out to explore the continent. At the rate of twenty miles every seven years, and assuming a good deal of counterfactual geography (i.e. the ability simply to travel as the crow flies) it would be approaching somewhere in the vicinity of Norman, Oklahoma about now.
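As a sanity check on the arithmetic in the thought experiment above, here is a minimal sketch. It assumes (my assumptions, not stated in the original) that the post dates to 2011, seven years after Opportunity's January 2004 landing, and that the rover keeps a steady pace of twenty miles every seven years:

```python
# Back-of-envelope check of the Opportunity-at-Jamestown thought experiment.
# Assumptions: a constant pace of 20 miles per 7 years, a "landing" at
# Jamestown in 1607, and a present day of 2011.

MILES_PER_YEAR = 20 / 7          # roughly 2.86 miles per year
years_elapsed = 2011 - 1607      # 404 years of as-the-crow-flies travel

distance = MILES_PER_YEAR * years_elapsed
print(f"{distance:.0f} miles")   # prints "1154 miles"

# For comparison, the straight-line distance from Jamestown, Virginia to
# Norman, Oklahoma is roughly 1,100 to 1,200 miles, so the rover would
# indeed be "approaching somewhere in the vicinity of Norman, Oklahoma."
```

The figure of about 1,150 miles matches the straight-line distance between the two places, which is what makes Norman a fitting endpoint for the comparison.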
It’s not just that humans can move faster and cover more ground while on the ground; someday we might cede that advantage to robots. Rather, the human advantage is to be found in the urgency of discovery and the call of the wild, in risk-taking and on-the-scene ingenuity. Such things drive us to press beyond the frontier of the moment. The next few years are unlikely to be kind to man in space, but we’ll know we have a serious manned space program when the astronauts check in with Mission Control whenever they damn please.
According to NASA, a humanoid robot will be sent into space for the first time later this year. Perhaps fittingly, it will go up on STS-133, the last manned space mission that NASA itself currently plans to launch. Wired magazine writes:
James Hughes, who studies emerging technologies at Trinity University [College], suggested that humanoid robots may provide a nice middle ground between hardcore human spaceflight evangelists and those who would rather see robotic missions. Most space watchers feel that the human programs are what drives interest and funding in exploration, while scientific investigation will be driven by robots.
“A humanoid robot splits the difference. You get some of the advantages of both and hopefully it will be a nice compromise between the two,” said Hughes. “But it may not satisfy either side.”
The article also points out that humanoid robots might fill some useful functional gaps. That part is reasonable, if true. But to say that humanoid robots split the difference between the cases for manned and robotic space travel is rather like looking at the debate over whether to travel to Mars and saying we can split the difference by going halfway there and back. (Don't insult my intelligence, Kirk.)
Wired Science has a story by Brandon Keim featuring the work of University of Chicago geoscientist Patrick McGuire. McGuire is working on "wearable AI systems and digital eyes that see what human eyes can't." So equipped, "space explorers of the future could be not just astronauts, but 'cyborg astrobiologists.'" That phrase — "cyborg astrobiologist" — comes from the title McGuire and his team gave to the paper reporting their early results. In their paper, they describe developing a "real-time computer-vision system" that has helped them successfully to identify "lichens as novel within a series of images acquired in semi-arid desert environments." Their system also quickly learned to distinguish between familiar and novel colored samples.

According to Keim, McGuire admits there is a long way to go before we get to the cyborg astrobiologist stage — a point that seems to have been missed by the folks at Wired Science, who gave Keim's piece the headline "AI Spacesuits Turn Astronauts Into Cyborg Biologists" (note the present tense). But it is true that the meaning of "cyborg" is contested ground. If Michael Chorost, in his fine book Rebuilt (which I reviewed here), can decide that he is a cyborg because he has a cochlear implant, then perhaps those merely testing McGuire's system are cyborgs, too.

But my point now isn't to be one of those sticklers who try to argue with Humpty Dumpty that it is better if words don't mean whatever we individually want them to mean. Rather, I'm wondering why McGuire should have used this phrase, "cyborg astrobiologists," in this recent paper and a number of earlier ones. The word "cyborg" was originally used to describe something similar to what McGuire is attempting, as Adam Keiper has noted:
In 1960, at the height of interest in cybernetics, the word cyborg—short for “cybernetic organism”—was coined by researcher Manfred E. Clynes in a paper he co-wrote for the journal Astronautics. The paper was a theoretical consideration of various ways in which fragile human bodies could be technologically adapted and improved to better withstand the rigors of space exploration. (Clynes’s co-author said the word cyborg “sounds like a town in Denmark.”)
But McGuire doesn't seem to be aware of the word's original connection to space exploration — he doesn't acknowledge it anywhere, as far as I can tell — and instead he seems to be using the word "cyborg" in its more recent and sensationalistic science-fiction sense of part man, part machine. So why use that word? The simple answer, I suppose, is that academics are far from immune to the lure of attention-getting titles for their work. But it is still noteworthy that for McGuire and his audience, "cyborg" is apparently something to strive for, not a monstrous hybrid like most iconic cyborgs (think Darth Vader, the Borg, or the Terminators). Deliberately or not, McGuire is engaged in a revaluation of values.

One wonders whether in a transhumanist future there will be any "monsters" at all; perhaps that word will share the fate of other terms of distinction that have become outmoded or politically incorrect. "Monster," after all, implies some norm or standard, and transhumanism is in revolt against norms and standards. Or perhaps unenhanced human beings will become the monsters, the literal embodiment of all that right-thinking intelligence rebels against, dead-end abortions of mere nature. Their obstinate persistence would be fearful if they themselves were not so pitiful. We came from that?