fleshers and intelligences

I’m not a great fan of Kevin Kelly’s brand of futurism, but this is a fine essay by him on the problems that arise when thinking about artificial intelligence begins with what the Marxists used to call “false reification”: the belief that intelligence is a bounded and unified concept that functions like a thing. Or, to put Kelly’s point a different way, it is an error to think that human beings exhibit a “general purpose intelligence” and therefore an error to expect that artificial intelligences will do the same.

To this reifying orthodoxy in AI research Kelly opposes five affirmations of his own:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

Expanding on that first point, Kelly writes,

Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.

Think, to take just one example, of the acuity with which dogs observe and respond to a wide range of human behavior: they attend to tone of voice, facial expression, gesture, even subtle forms of body language, in ways that animals invariably ranked higher on what Kelly calls the “mythical ladder” of intelligence (chimpanzees, for instance) are wholly incapable of. But dogs couldn’t begin to use tools the way that many birds, especially corvids, can. So what’s more intelligent, a dog or a crow or a chimp? It’s not really a meaningful question. Crows and dogs and chimps are equally well adapted to their ecological niches, but in very different ways that call forth very different cognitive abilities.
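
Kelly’s first point can be put in slightly more formal terms: if intelligence is a vector of distinct capacities rather than a single number, then “smarter than” defines at best a partial order, and many pairs of minds are simply incomparable. Here is a minimal sketch in Python — the capacities and scores are entirely invented for illustration, not real measurements:

    # Treat each animal's intelligence as a profile of distinct capacities.
    # (All dimensions and scores below are made up, purely for illustration.)
    profiles = {
        "dog":   {"social_reading": 9, "tool_use": 2, "spatial_memory": 5},
        "crow":  {"social_reading": 4, "tool_use": 9, "spatial_memory": 8},
        "chimp": {"social_reading": 7, "tool_use": 7, "spatial_memory": 6},
    }

    def dominates(a, b):
        """True if a scores at least as high as b on every dimension
        and strictly higher on at least one (the Pareto order)."""
        return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

    for x, y in [("dog", "crow"), ("dog", "chimp"), ("crow", "chimp")]:
        if dominates(profiles[x], profiles[y]):
            print(f"{x} dominates {y}")
        elif dominates(profiles[y], profiles[x]):
            print(f"{y} dominates {x}")
        else:
            print(f"{x} and {y} are incomparable")

With these made-up profiles every pairwise comparison comes out “incomparable”: each animal beats the others on some dimension and loses on another. One can always force a ranking by averaging the dimensions, but the resulting order depends entirely on the weights one arbitrarily chooses — which is just Kelly’s point about the “mythical ladder.”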

If Kelly is right in his argument, then AI research is going to be hamstrung by its commitment to g, or “general intelligence,” and will only be able to produce really interesting and surprising intelligences when it abandons the idea, as Stephen Jay Gould puts it in his flawed but still-valuable The Mismeasure of Man, that “intelligence can be meaningfully abstracted as a single number capable of ranking all people [including digital beings!] on a linear scale of intrinsic and unalterable mental worth.”

“Mental worth” is a key phrase here, because a commitment to g has been historically associated with explicit scales of personal value and commitment to social policies based on those scales. (There is of course no logical link between the two commitments.) Thus the argument frequently made by eugenicists a century ago that those who score below a certain level on IQ tests — tests purporting to measure g — should be forcibly sterilized. Or Peter Singer’s view that he and his wife would be morally justified in aborting a Down syndrome child simply because such a child would probably grow up to be a person “with whom I could expect to have conversations about only a limited range of topics,” which “would greatly reduce my joy in raising my child and watching him or her develop.” A moment’s reflection should be sufficient to dismantle the notion that there is a strong correlation between, on the one hand, intellectual agility and verbal fluency and, on the other, moral excellence; which should also undermine Singer’s belief that a child who is deficient in his imagined general intelligence is ipso facto a person he couldn’t “treat as an equal.” But Singer never gets to that moment of reflection because his rigid and falsely reified model of intellectual ability, and of the relations between intellectual ability and personal value, disables his critical faculties.

If what Gould in another context called the belief that intelligence is “an immutable thing in the head” which allows “grading human beings on a single scale of general capacity” is both erroneous and pernicious, it is somewhat disturbing to see that belief not only continuing to flourish in some communities of discourse but also being extended into the realm of artificial intelligence. If digital machines are deemed superior to human beings in g, and if superiority in g equals greater intrinsic worth… well, the long-term prospects for what Greg Egan calls “fleshers” aren’t great. Unless you’re one of the fleshers who controls the machines. For now.

P.S. I should add that I know that people who are good at certain cognitive tasks tend to be good at other cognitive tasks, and also that, as Freddie DeBoer points out here, IQ tests — that is, tests of general intelligence — have predictive power in a range of social contexts, but I don’t think any of that undermines the points I’m making above. Happy to be corrected where necessary, of course.

Kevin Kelly’s New Theology

Kevin Kelly’s theology is a contemporary version of the one George Bernard Shaw articulated a hundred years ago. In “The New Theology: A Sermon,” Shaw wrote,

In a sense there is no God as yet achieved, but there is that force at work making God, struggling through us to become an actual organized existence, enjoying what to many of us is the greatest conceivable ecstasy, the ecstasy of a brain, an intelligence, actually conscious of the whole, and with executive force capable of guiding it to a perfectly benevolent and harmonious end. That is what we are working to. When you are asked, “Where is God? Who is God?” stand up and say, “I am God and here is God, not as yet completed, but still advancing towards completion, just in so much as I am working for the purpose of the universe, working for the good of the whole of society and the whole world, instead of merely looking after my personal ends.” In that way we get rid of the old contradiction, we begin to perceive that the evil of the world is a thing that will finally be evolved out of the world, that it was not brought into the world by malice and cruelty, but by an entirely benevolent designer that had not as yet discovered how to carry out its benevolent intention. In that way I think we may turn towards the future with greater hope.

We might compare this rhetoric to that of Kelly’s new essay in Wired, which begins with a classic Borg Complex move: “We’re expanding the data sphere to sci-fi levels and there’s no stopping it. Too many of the benefits we covet derive from it.” But if resistance is futile, that’s no cause for worry, because resistance would be foolish.

It is no coincidence that the glories of progress in the past 300 years parallel the emergence of the private self and challenges to the authority of society. Civilization is a mechanism to nudge us out of old habits. There would be no modernity without a triumphant self.

So while a world of total surveillance seems inevitable, we don’t know if such a mode will nurture a strong sense of self, which is the engine of innovation and creativity — and thus all future progress. How would an individual maintain the boundaries of self when their every thought, utterance, and action is captured, archived, analyzed, and eventually anticipated by others?

The self forged by previous centuries will no longer suffice. We are now remaking the self with technology. We’ve broadened our circle of empathy, from clan to race, race to species, and soon beyond that. We’ve extended our bodies and minds with tools and hardware. We are now expanding our self by inhabiting virtual spaces, linking up to billions of other minds, and trillions of other mechanical intelligences. We are wider than we were, and as we offload our memories to infinite machines, deeper in some ways.

There’s no point asking Kelly for details. (“The self forged by previous centuries will no longer suffice” for what? Have we really “broadened our circle of empathy”? What are we “wider” and “deeper” than, exactly? And what does that mean?) This is not an argument. It is, like Shaw’s “New Theology,” a sermon, directed primarily towards those who already believe and secondarily to sympathetic waverers, the ones with a tiny shred of conscience troubling them about the universal surveillance state whose arrival Kelly awaits so breathlessly. Those who would resist need not be addressed because they’re on their way to — let’s see, what’s that phrase? — ah yes: the “dustbin of history.”

Now, someone might protest at this point that I am not being fair to Kelly. After all, he does say that a one-way surveillance state, in which ordinary people are seen but do not see, would be “hell”; and he even says “A massively surveilled world is not a world I would design (or even desire), but massive surveillance is coming either way because that is the bias of digital technology and we might as well surveil well and civilly.”

Let’s pause for a moment to note the reappearance of the Borg here, and Kelly’s habitual offloading of responsibility from human beings to our tools: for Woody Allen, “the heart wants what it wants,” but for Kelly technology wants what it wants, and such sovereign beings always get their way.

But more important, notice here that Kelly thinks it’s a simple choice to decide on two-way surveillance: we “might as well.” He admits that the omnipotent surveillance state would be hell, but he obviously doesn’t think that hell has even the remotest chance of happening. Why is he so confident? Because he shares Shaw’s belief in an evolutionary religion in which all that is true and good and holy emerges in history as the result of an inevitably beneficent process. Why should we worry about possible future constrictions of selfhood when the track record of “modernity” is, says Kelly, so utterly spotless, with its “glories of progress” and its “triumphant self”? I mean, it’s not as though modernity had a dark side or anything. All the arrows point skyward. So: why worry?

The only difference between Shaw and Kelly in this respect is that for Shaw the emerging paradisal “ecstasy of a brain” is a human brain; for Kelly it’s digital. Kelly has just identified digital technology as the means by which Shaw’s evolutionary progressivist Utopia will be realized.

But what else is new? The rich, powerful, and well-connected always think that they and people like them (a) will end up on the right side of history and (b) will be insulated from harm — which is after all what really counts. Kelly begins his essay thus: “I once worked with Steven Spielberg on the development of Minority Report” — a lovely opener, since it simultaneously allows Kelly to boast about his connections in the film world and to dismiss Philip K. Dick’s dystopian vision as needlessly fretful. When the pre-cog system comes, it won’t be able to hurt anyone who really matters. So let’s just cue up Donald Fagen one more time and get down to the business of learning to desire whatever it is that technology wants. The one remaining spiritual discipline in Kelly’s theology is learning to love Big Brother.

History, 9/11 Relics, and “Technological Superstition”

Isn’t it strange how this castle changes as soon as one imagines that Hamlet lived here? As scientists we believe that a castle consists only of stones, and admire the way the architect put them together.

—Niels Bohr, to Werner Heisenberg, at Kronborg Castle

Kevin Kelly recently declared that most of the value we place on historical artifacts is a matter of mere “technological superstition.” Beginning with artifacts from the September 11 attack sites, and continuing to Ernest Hemingway’s typewriter, home-run baseballs, and the pen used to sign the Declaration of Independence, Kelly claims that we preserve, collect, and pay great sums for these objects because we believe they are akin to religious relics that confer supernatural or magical powers.

(flickr/aturkus)

Now, I could see Kelly’s point if people were preserving 9/11 rubble because they thought that tossing it over one’s shoulder would ward off evil spirits, or were buying Hemingway’s typewriter because they thought rubbing one’s temples upon it would help one get a story into McSweeney’s. But as far as I know, no one believes, or is saying, any such thing. In fact, Kelly’s own argument suggests something rather different.

The main elements of Kelly’s argument seem to be:

  1. The supposed “specialness” of an artifact does not reside in the artifact itself and cannot be measured by scientific instrumentation; it is thus superstitious.
  2. An artifact’s supposed “specialness originates in the same way as an ancient relic — because someone says so.” This is why people who value artifacts are so interested in provenance — documentation or evidence to establish that the artifact actually has the historical connection it is supposed to.
  3. There are only two legitimate, non-superstitious reasons to value particular historical objects: age and rarity. (Kelly makes parts of this last point in the comments section beneath his original post.)

Hemingway’s typewriter and binoculars, at his home in Ketchum, Idaho (US Plan B)

A variety of immediate problems arise. The idea that an artifact’s uniqueness cannot be measured empirically is simply not true in the examples Kelly has provided. His prime example is Hemingway’s typewriter, which is supposedly physically identical to every other typewriter of the same model. Except it isn’t. Hemingway owned it, so, for example, it presumably has bits of his skin cells and hair lodged in it. It is chemically unique: a forensic scientist needing to obtain Hemingway’s DNA might examine this typewriter, but would not examine any other instance of that model.

Kelly’s point (2) is trying desperately to eat its own tail — more on that in a moment. And on point (3), age is not a property that resides in an object (even if evidence of it sometimes does) and rarity most certainly does not reside in an object. If a home-run baseball becomes sufficiently old, or other baseballs of the same model are destroyed so that it becomes rare, why can we now value it? Nothing residing in the ball itself has changed.

Putting these problems aside for now, it seems that Kelly wants us to value objects only inasmuch as they yield information, in particular scientific information. Scientific theories are interested in universals and types, not particulars and instances. A lab rat is useful because we can manipulate it and perform tests upon it to verify or falsify theories. But the particular rat has no scientific value beyond its membership in a class. This is because science is especially interested in studying repeatable events — events whose existence is, paradoxically, not bound to a particular time or place. It would be superstitious to scientifically value any particular rat, because the future will always yield more rats.

The problem is that the reason people value historical artifacts is quite different from the reason they value objects that are useful for forming and validating scientific theories. In both cases, the central task (if not the ultimate goal) involves learning empirical facts about the world. But where scientific facts are repeatable, available for verification by anyone anywhere, a historical event happens only once, and then is gone. (The two qualities that Kelly concedes might make an artifact legitimately valuable — age and rarity — are in fact only valuable in a historical sense; their value seems scientific simply because it can be quantified.)

This is the rub of history: we can’t go back and see it again for ourselves, because it already happened. So we tell stories, and we remember. But we worry that we will forget; and we worry that the next generation will not believe us — or that they will believe, but not feel, because it didn’t exist for them as it did for us. Perhaps we worry that, after enough time, even things that happened to us, and people we knew, will begin to seem less real — because even for us they don’t exist now as they once did.

World Trade Center rubble (via Daily Mail)

And so we demand tangible, physical evidence that history actually happened. Ernest Hemingway is just a name on a book; the closest we can come to experiencing and verifying the real existence of the historical person is standing in his study, touching his typewriter. It becomes easy for those of us who were not living in New York or D.C. or Shanksville, and especially for the children too young to remember, to disbelieve the events of 9/11 on some level — to think it really was just a movie that played out on TV.

Left, the wedding ring worn by Bryan Jack, a passenger on the plane that crashed into the Pentagon. Right, his wife’s ring. (From a New York Times story on 9/11 relics.)

It is easier to believe and feel the weight of it when one sees the hole in the ground, or holds a piece of twisted metal.

Kelly notes in a comment that we may value a watch that belonged to our father or a necklace that belonged to our mother because it has some “intangible, spiritual, ineffable quality that would be absent in another unit.” But there is nothing ineffable about it: the watch belonged to our father, the necklace to our mother, while the others did not. These are hard, empirical facts — nothing superstitious or supernatural about them. And the objection that a historical fact does not reside in an object is backwards: the whole point is that it was the object that resided in history.

But the curious thing about artifacts is not just that they reside in events, but that they also reside outside of events, becoming altered by them but persisting beyond them. Artifacts are the precipitations of history. They form a bridge between the past and the present in a way that our own transience and finitude cannot. This is why we are interested in artifacts, and especially in their provenance: not because we value authority as proof of history, but just the opposite, so that we can step beyond taking other people’s word, and get as close as possible to personal knowledge of history — of events that happened and people that lived, but are forever gone.

The enduring is something which must be accounted for. One cannot simply shrug it off.

—Walker Percy

At Ground Zero in New York now stands the National September 11 Memorial, built around the footprints of the Twin Towers. If we are to take Kelly’s argument seriously, then the design, even the existence, of this memorial is a travesty, a voodoo incantation to nothing. Why does it preserve the footprints of the towers — the space around objects that do not exist, in which nothing now resides because they reside in nothing? Why, indeed, is the memorial located at Ground Zero — which is not especially old, and surely cannot, especially now that the memorial is built over it, yield much new empirical information? Why is it built where the events actually happened and not in some other part of Manhattan — or, for that matter, in Trenton or Boise or São Paulo? Why do we remember at all?

Beware what is afoot when someone comes crying that he has shined the brightest of lights on human affairs, and found that he cannot see in it something everyone else does. There is a good chance he has simply blinded himself.


The footprint of one of the World Trade Center buildings (Mary Altaffer/AP, via The New York Times)

a theological interlude

Theology is very important to me: it’s central to my life and to much of my work, though I don’t say much about it on this blog. However, I do have a comment about this quasi-theological conversation between Kevin Kelly and Nick Carr: I think I would want to disagree with KK at an earlier stage in the debate than where Nick picks it up.

I take KK’s core assertion to be this: Technology is a (the?) chief means by which God now intervenes in history to help people to realize their full potential. My problem with that assertion starts long before we get to the question of what technology does (or doesn’t do) to make our lives better (or worse). KK’s planted axiom, as the logicians used to say, is that common beliefs about what counts as “potential” and what counts as “fulfilling” that potential are perfectly adequate, and that God’s job in the universe is ancillary, i.e., to help us along a path that we already see pretty clearly.

I don’t believe any of that. I don’t think that, left to our own devices, people have a very good idea of what human flourishing, eudaimonia, really is; and I don’t think of God as a celestial helpmeet, an omnipotent enabler of our desires. My theology starts, more or less, with the message Dietrich Bonhoeffer articulated most succinctly: “When Christ calls a man, he bids him come and die.” And that means dying to our pre-existing understanding of what our potential is and what realizing it would mean.

Now, I believe that whatever dies in Christ will be reborn in him — but, as T. S. Eliot put it, will “become renewed, transfigured, in another pattern.” And from that vantage point everything will look different. As far as I can tell, in KK’s theology the life of Francis of Assisi was deficient in potential, in choices, was impoverished in a deep sense — and yet Francis believed that by embracing Lady Poverty, by casting aside his wealth and intentionally limiting his choices, he found riches he could not have found in any other way. This is, I hope, not to romanticize material poverty, or to say that we would all be better off if we lived in the Middle Ages. I disagree strongly with such nostalgia. But I think the example of Francis suggests that we cannot simply equate choices and riches in the material realm with human flourishing. The divine economy is far more complicated than that, and any serious theology of technology has to begin, I think, by acknowledging that point.

life skills for the Technium

I have sometimes, in these pages of pixels, expressed frustration with Kevin Kelly, but his post on “Techno Life Skills” is just fantastic. My favorite point:

You will be newbie forever. Get good at the beginner mode, learning new programs, asking dumb questions, making stupid mistakes, soliciting help, and helping others with what you learn (the best way to learn yourself).

I’m going to give this to all my students for the foreseeable future.

broken

Kevin Kelly:

My friend had a young daughter under 5 years old. Like many other families these days, they have no tv in their house, but do have lots of computers. With his daughter he was visiting another family who had a tv, which was on in another room. The daughter went up to the tv, hunting around it, and looked behind the tv. “Where’s the mouse?” she asked.

Another friend had a barely-speaking toddler take over his iPad. She could paint and handle complicated tasks on apps with ease and grace almost before she could walk. It is now sort of her iPad. One day he printed out a high resolution image on photo paper and left it on the coffee table. He noticed his toddler come up to it and try to unpinch the photo to make it larger, like you do on an iPad. She tried it a few times, without success, and looked over to him and said “broken.”

Another acquaintance told me this story. He has a son about 8 years old. They were talking about the old days, and the fact that when my friend was growing up they did not have computers. This fact was perplexing news to his son. His son asks, “But how did you get onto the internet before computers?”

Kelly says he draws two lessons from these stories: “if something is not interactive, with mouse or gestures, it is broken. And, the internet is not about computers or devices; it is something mythic, something much larger; it is about humanity.” I have no idea what that second sentence means, but as to the first one, I wonder: is Picasso’s “Guernica” broken? Is an old leather-bound copy of Shakespeare’s sonnets broken?

Cute and interesting stories, but what they mainly tell us is that children are amazing generalizers: they make a wide range of assumptions based on their experiential history. Some of those assumptions turn out to be true, some not so true.

Enough! or, Too much!

Kevin Kelly:

Today some 4.5 billion digital screens illuminate our lives. Words have migrated from wood pulp to pixels on computers, phones, laptops, game consoles, televisions, billboards and tablets. Letters are no longer fixed in black ink on paper, but flitter on a glass surface in a rainbow of colors as fast as our eyes can blink. Screens fill our pockets, briefcases, dashboards, living room walls and the sides of buildings. They sit in front of us when we work — regardless of what we do. We are now people of the screen. And of course, these newly ubiquitous screens have changed how we read and write.

I’ve said this before, ad nauseam no doubt, but: please. There is no such thing as “the screen.” A laptop screen is not a TV screen is not a movie screen is not an iPad screen is not a Kindle screen. They’re all different, and we experience them in significantly different ways. And “letters are no longer fixed in black ink on paper”? Really? All these books and magazines and newspapers and memoranda that I encounter every day are figments of my imagination?

Please, Kevin, stop it. Just stop it. Lose the oracular pronouncements and think about what you’re saying.

technology and homeschooling

I tend to get frustrated by Kevin Kelly’s technophilia, but this account of his experiences teaching his son at home (a) resonates with my own homeschooling adventures and (b) makes a ton of sense. I especially like this set of principles about technology that he and his wife tried to impart to their eighth-grader:

  • Every new technology will bite back. The more powerful its gifts, the more powerfully it can be abused. Look for its costs.
  • Technologies improve so fast you should postpone getting anything you need until the last second. Get comfortable with the fact that anything you buy is already obsolete.
  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • Be suspicious of any technology that requires walls. If you can fix it, modify it or hack it yourself, that is a good sign.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Every technology is biased by its embedded defaults: what does it assume?
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

The last two are particularly noteworthy, and wise also. And this is best of all: “Technology helped us learn, but it was not the medium of learning. It was summoned when needed. Technology is strange that way. Education, at least in the K-12 range, is more about child rearing than knowledge acquisition. And since child rearing is primarily about forming character, instilling values and cultivating habits, it may be the last area to be directly augmented by technology.”

heed the Marxist critique

Kevin Kelly has a book coming out soon called What Technology Wants. Kevin, meet Leo Marx:

We amplify the hazardous character of the concept by investing it with agency — by using the word technology as the subject of active verbs. Take, for example, a stock historical generalization such as: “the cotton-picking machine transformed the southern agricultural economy and set off the Great Migration of black farm workers to northern cities.” Here we tacitly invest a machine with the power to initiate change, as if it were capable of altering the course of events, of history itself. By treating these inanimate objects — machines — as causal agents, we divert attention from the human (especially socioeconomic and political) relations responsible for precipitating this social upheaval. Contemporary discourse, private and public, is filled with hackneyed vignettes of technologically activated social change — pithy accounts of “the direction technology is taking us” or “changing our lives.” . . .

To attribute specific events or social developments to the historical agency of so basic an aspect of human behavior makes little or no sense. Technology, as such, makes nothing happen. By now, however, the concept has been endowed with a thing-like autonomy and a seemingly magical power of historical agency. We have made it an all-purpose agent of change. As compared with other means of reaching our social goals, the technological has come to seem the most feasible, practical, and economically viable. It relieves the citizenry of onerous decision-making obligations and intensifies their gathering sense of political impotence. The popular belief in technology as a — if not the — primary force shaping the future is matched by our increasing reliance on instrumental standards of judgment, and a corresponding neglect of moral and political standards, in making judgments about the direction of society. To expose the hazards embodied in this pivotal concept is a vital responsibility of historians of technology.