testing intelligence — or testing nothing?

Tim Wu suggests an experiment:

A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.

The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing — difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.

Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (it was coined later by John von Neumann), he might conclude that the human race had reached a “singularity” — a point where it had gained an intelligence beyond the understanding of the 1914 mind.

The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, as we are about anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.

No matter which side you take in this argument, you should take note of its terms: that “intelligence” is a matter of (a) calculation and (b) information retrieval. The only point at which the experiment even verges on some alternative model of intelligence is when Wu mentions a question about God’s omnipotence and omnibenevolence. Presumably the woman would do a Google search and read from the first page that turns up.

But what if the visitor from 1914 asks for clarification? Or wonders whether the arguments have been presented fairly? Or notes that there are more relevant passages in Aquinas that the woman has not mentioned? The conversation could come to a sudden and grinding stop, the illusion of intelligence — or rather, of factual knowledge — instantly dispelled.

Or suppose that the visitor says that the question always reminds him of the Hallelujah Chorus and its invocation of Revelation 19:6 — “Alleluia: for the Lord God omnipotent reigneth” — but that the passage rings hollow and bitter in his ears since his son was killed in the first months of what Europe was already calling the Great War. What would the woman say then? If she had a computer instead of a smartphone, she could perhaps see if Eliza is installed — or she could just set aside the technology and respond as an empathetic human being. Which a machine could not do.
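(A side note for readers who have never met Eliza: Joseph Weizenbaum’s 1966 program simulated a Rogerian therapist by scanning the user’s input for keywords and reflecting the matched words back inside canned templates. The sketch below, in Python, is my own toy illustration of that trick; the rules and phrasings are invented for the example, not taken from Weizenbaum’s code.)

```python
import re

# A toy, ELIZA-style responder (an illustration, not Weizenbaum's program):
# scan the input for a keyword pattern, then echo the captured words back
# inside a canned template. No understanding is involved at any point.
RULES = [
    (re.compile(r"\bmy (son|daughter|father|mother)\b", re.IGNORECASE),
     "Tell me more about your {0}."),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(g.lower() for g in match.groups()))
    return "Please go on."  # the default when nothing matches

print(respond("That passage rings hollow since my son was killed."))
# prints: Tell me more about your son.
```

The visitor’s grief would earn him “Tell me more about your son,” which is exactly the point: keyword substitution, however fluent, is retrieval rather than response.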

Similarly, what if the visitor had simply asked “What is your favorite flavor of ice cream?” Presumably the woman would just answer his question honestly — which would prove nothing about anything. Then we would have a person talking to another person, which we already know we can do. “But how does that help you assess intelligence?” cries the exasperated experimenter. What’s the point of having visitors from 1914 if they’re not going to stick to the script?

These so-called “thought experiments” about intelligence deserve the scare-quotes I have just put around the phrase because they require us to suspend almost all of our intelligence: to ask questions according to a narrowly limited script of possibilities, to avoid follow-ups, to think only in terms of what is calculable or searchable in databases. They can tell us nothing at all about intelligence, and so they are pointless.

opting out of the monopolies

At the Technology Liberation Front, Adam Thierer has been reviewing, in installments, Tim Wu’s new book The Master Switch, and has received interesting pushback from Wu. One point of debate has been about the definition of “monopoly”: Wu wants an expansive one, according to which a company can have plenty of competition, and consumers multiple alternatives, and yet that company can still be said to have a monopoly. (Thierer responds here.)

I think Wu’s definition is problematic and not, ultimately, sustainable, but I see and sympathize with his major point. I can have alternatives to a particular service/product/company, and yet find it almost impossible to escape it because of what I’ve already invested in it. When I read stories like this, or talk to friends who work for small presses, I tell myself that I should never deal with Amazon again — and yet I do, in part because buying stuff from Amazon is so frictionless, but also because I have a significant number of Kindle books now, and all those annotations that I can access on the website. . . . I don’t want to lose all that. I can feel my principles slipping away, just as they did when I tried to escape the clutches of Google.

Amazon is not, technically speaking, a monopoly, and neither is Google. But they have monopoly-like power over me — at least for now. And I need to figure out just how problematic that is, and whether I should opt out of their services, and (if so) how to opt out of them, and what to replace them with. . . . Man, modern life is complicated. These are going to be some of the major moral issues of the coming decades: ones revolving around how to deal with services that have a monopolistic role in a given person’s life. Philip K. Dick saw it all coming. . . .

generativity

Jonathan Zittrain has a new book called The Future of the Internet — and How to Stop It. Zittrain’s belief is that we are headed towards a security nightmare: that without major changes in the architecture of the internet, a lot of people are going to lose a lot of money through compromises of their online identities. And if that happens, Zittrain believes, there will be a kind of retreat from personal computers to “information appliances,” more specialized machines that do some of the cool stuff we’ve become accustomed to enjoying but that are locked down in ways that make us less economically and personally vulnerable. (Zittrain thinks that the iPhone, with its closed system and centrally controlled App Store, is the biggest step so far in this direction.) And if that happens, if we start to close doors that have been open since the internet got here, we’ll lose a great deal of creative dialogue and invention — we’ll lose what Zittrain calls “generativity.”

There’s a terrific review by Tim Wu of Zittrain’s book in the new issue of The New Republic. TNR tends to put some content online temporarily and then later remove it (except for subscribers), so I don’t know how long a link will work, but for now here it is. Wu believes that the development of the internet has a strong parallel nearly a century ago in the development of radio, and the story he tells is fascinating. I’ll leave you with a taste of it:

While it sounds surprising, there were probably more broadcast radio stations in the 1920s than there are now (excluding satellite). A guide to the nation’s stations in 1922 declined to provide listings for New York City, because “a list of all that can be heard with a radio receiver anywhere within three hundred miles of Greater New York would fill a book. At any hour of the day or night, with any type of apparatus, adjusted to receive waves of any length, the listener will hear something of interest.” And early radio, like the early Internet, was aggressively non-commercial. At a radio conference held by the Commerce Department in 1922, all agreed that “direct advertising in radio broadcasting service be absolutely prohibited.” Herbert Hoover, speaking at that conference, declared that “It is inconceivable that we should allow so great a possibility for service, for news, for entertainment, for education, and for vital commercial purposes to be drowned in advertising chatter.”

The point is that both radio and film were, in their early days, much like the Internet is today: new, unreliable, and full of content that was not ready for prime time. These were easy industries to get into, like dot-coms in the 1990s or Web 2.0 in the 2000s. To get into film in the 1910s required little more than converting a store into a movie theater, which is how William Fox (20th Century Fox), Adolph Zukor (Paramount), and Carl Laemmle (Universal) got their start. They were low-budget entrepreneurs, the Larry Page (Google) and Pierre Omidyar (eBay) of their day.

I do not mean to glorify the age of silent film or local radio. I have watched plenty of silent films, and there is much to be said for sound. I want only to insist that where the Internet is now, we have been before. What Zittrain calls a “generative” media was not invented by the Internet’s founders. And that is why understanding what happened next may be our best guide to the Internet’s future.