Tim Wu suggests an experiment:
A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.
The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing — difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.
Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (the term was coined later by John von Neumann), he might conclude that the human race had reached a “singularity” — a point where it had gained an intelligence beyond the understanding of the 1914 mind.
The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, as we are about anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.
No matter which side you take in this argument, you should take note of its terms: that “intelligence” is a matter of (a) calculation and (b) information retrieval. The only point at which the experiment even verges on some alternative model of intelligence is when Wu mentions a question about God’s omnipotence and omnibenevolence. Presumably the woman would do a Google search and read from the first page that turns up.
But what if the visitor from 1914 asks for clarification? Or wonders whether the arguments have been presented fairly? Or notes that there are more relevant passages in Aquinas that the woman has not mentioned? The conversation could come to a sudden and grinding stop, the illusion of intelligence — or rather, of factual knowledge — instantly dispelled.
Or suppose that the visitor says that the question always reminds him of the Hallelujah Chorus and its invocation of Revelation 19:6 — “Alleluia: for the Lord God omnipotent reigneth” — but that that passage rings hollow and bitter in his ears since his son was killed in the first months of what Europe was already calling the Great War. What would the woman say then? If she had a computer instead of a smartphone she could perhaps see if Eliza is installed — or she could just set aside the technology and respond as an empathetic human being. Which a machine could not do.
Similarly, what if the visitor had simply asked “What is your favorite flavor of ice cream?” Presumably then the woman would just answer his question honestly — which would prove nothing about anything. Then we would just have a person talking to another person, which we already know that we can do. “But how does that help you assess intelligence?” cries the exasperated experimenter. What’s the point of having visitors from 1914 if they’re not going to stick to the script?
These so-called “thought experiments” about intelligence deserve the scare-quotes I have just put around the phrase because they require us to suspend almost all of our intelligence: to ask questions according to a narrowly limited script of possibilities, to avoid follow-ups, to think only in terms of what is calculable or searchable in databases. They can tell us nothing at all about intelligence. They are pointless and useless.
I don't think it is useless. I agree that it is not so much about intelligence as information. But the central point, that the visitor would be astounded by what we can do with an internet connection and a cell phone, is still true. Heck, I am still astounded by what I can do with a cell phone and an internet connection.
Comments are back on? Cool!
Unlike Adam, I am not at all gobsmacked by the ease of information retrieval. Big deal, if one can't contextualize or synthesize or otherwise process with intelligence. It goes beyond the pointlessness of a modified Turing Test cum thought experiment, though. We mistake the tools of education for education itself and become tools ourselves, as in "what a tool!" Preposterously easy to observe just walking down the street and seeing all the screenheads.
Reading your writing about technology on this blog has been such a pleasure. Thank you.
I think such thought experiments are very useful, and you've pointed out why. Like trolley problems in moral philosophy, the thought experiment fails to deliver what it promises, but it does an excellent job revealing biases and precisely pinpointing areas of confusion.
I love how Borges makes the point in "On Exactitude in Science."
There are two more fundamental flaws I see in Wu's reasoning, which perhaps is to be expected of a modern man who conceives of intelligence as mere information retrieval rather than insight. First, the average denizen of the 21st c. wouldn't even _think_ of looking up that information, let alone be able to offer "complex theoretical answers," because that requires qualities of curiosity, humility, awareness of one’s own limitations, broad reading, and depth of analysis that require intellectual training. Even the ability to find the best sources for said information or arguments is itself a skill that the majority of people don't bother to hone, and it is a skill that one must bring to the computer screen, not one that can be derived from it.
Second, the 21st c. woman _herself_ is not more intelligent. Take away her access to the internet and what will be left? What is the state of her mind? Wu’s claim is akin to saying we are faster than our ancestors because we have cars—but if you take away our automobiles, the average modern probably has less speed and stamina than our relatives even a few generations back who were generally more physically active in their quotidian lives. I would be willing to bet that in terms of sheer understanding of nature and all the creatures in it, and the ability to describe it all in mesmerizing detail, a 21st c. tech-savvy American would be quickly outshone by an ancient Greek rhapsode who had the entirety of Homer, Aeschylus, Aristotle, etc., sizzling in his synapses, not to mention a life of lived experience among his fellow creatures. Sure, he wouldn't be able to describe creatures we now know only through the power of microscopes or the ability to visit the ocean floor, but he could tell us things about sheep and sun and trees, what is safe to eat, what leaves treat what ailments; he could navigate by the stars, tell time by the sun, follow the trail of an animal, feed and house and clothe himself and his family. How common are such skills among us now?
A lovely and smart post, Alan. I'm reminded not just of our 2006 New Atlantis essay "The Trouble with the Turing Test," but also of something our New Atlantis colleague Steve Talbott wrote back in 2002 in his "Netfuture" newsletter:
"Imagine the potentials of our future if we cultivated an ever higher art of conversation with even a fraction of the energy and social investment we now commit to coaxing new programmed tricks from our computers! … We will, so the story goes, first invest our machines with very simple emotions and intentions, and then we will progressively deepen and refine our investment, ultimately fathering even a sense of right and wrong in our robotic offspring. And yet, what seems to excite so many people about this story is the machine's increasing sophistication, not the fact that, if the story were true, then we ourselves as creators would have had to master the essence of feeling, will, and moral responsibility. Of course, there's good reason for not attending very seriously to this latter implication, since such mastery is not much in evidence. This raises the obvious question: what delusions are we suffering when we imagine ourselves creating from scratch the very capacities that, in our own case, we have scarcely yet begun to develop consciously or harness to our own purposes?"
Thanks for enabling comments!
Another potentially interesting angle of inquiry is *where* this purported increase in intelligence — or, more accurately, knowledge — comes from.
Simply put, it is generated by other people, who have elected to share it. Someone else has written that Wikipedia page, crunched those numbers, written that app, snapped those photos, transliterated those foreign alphabets.
So our "intelligence" hasn't been displaced into "technology" in some mystical way; it's been displaced into a large number of other human beings. That's not really anything new or different from wandering down to the Alexandrian marketplace and asking Euclid what he thinks about circles, or listening as your dad shows you how to herd goats.
What is a little bit different — but in degree rather than kind — is in how thoroughly our knowledge has also been displaced into the *past*. To the degree that we do have far more knowledge, it's simply because we have more past knowledge to draw from. We've got Euclid, and Descartes, and Leibniz, and Penrose, and what's been built into our Mathematica software. Standing on the shoulders of giants, and all that. We're not really wise; we're just so *old* collectively.
It's pretty cool to think about, but for some reason we have this tendency — perhaps it's a classic religious impulse to idolatry — to identify the power as residing intrinsically in our tools, rather than coming to us from people in the past, and only *through* our shiny new tools. It seems to me that Medieval and early-modern writers were more conscious of our debt to the past than we are, which I'll bet protected them from some of the popular cognitive errors of our time.