things and creatures, conscience and personhood

Yesterday I read Jeff VanderMeer’s creepy, disturbing, uncanny, and somehow heart-warming new novel Borne, and it has prompted two sets of thoughts that may or may not be related to one another. But hey, this is a blog: incoherence is its birthright. So here goes.

1.

A few months ago I wrote a post in which I quoted this passage from a 1984 essay by Thomas Pynchon:

If our world survives, the next great challenge to watch out for will come — you heard it here first — when the curves of research and development in artificial intelligence, molecular biology and robotics all converge. Oboy. It will be amazing and unpredictable, and even the biggest of brass, let us devoutly hope, are going to be caught flat-footed. It is certainly something for all good Luddites to look forward to if, God willing, we should live so long.

If you look at the rest of the essay, you’ll see that Pynchon thinks certain technological developments could be embraced by Luddites because the point of Luddism is not to reject technology but to empower common people in ways that emancipate them from the dictates of the Capitalism of the One Percent.

But why think that future technologies will not be fully under the control of the “biggest of brass”? It is significant that Pynchon points to the convergence of “artificial intelligence, molecular biology and robotics” — which certainly sounds like he’s thinking of the creation of androids: humanoid robots, biologically rather than mechanically engineered. Is the hope, then, that such beings would become not just cognitively but morally independent of their makers?

Something like this is the scenario of Borne, though the intelligent being is not humanoid in either shape or consciousness. One of the best things about the book is how it portrays a possible, though necessarily limited, fellowship between humans and fundamentally alien (in the sense of otherness, not from-another-planet) sentient beings. And what enables that fellowship, in this case, is the fact that the utterly alien being is reared and taught from “infancy” by a human being — and therefore, it seems, could have become something rather, though not totally, different if a human being with other inclinations had done the rearing. The story thus revisits the old nature/nurture question in defamiliarizing and powerful ways.

The origins of the creature Borne are mysterious, though bits of the story are eventually revealed. He — the human who finds Borne chooses the pronoun — seems to have been engineered for extreme plasticity of form and function, a near-total adaptability that is enabled by what I will call, with necessary vagueness, powers of absorption. But a being so physiologically and cognitively flexible simply will not exhibit predictable behavior. And therefore one can imagine circumstances in which such a being could take a path rather different from the one chosen for him by his makers; and one can imagine that different path being directed by something like conscience. Perhaps this is where Luddites might place their hopes for the convergence of “artificial intelligence, molecular biology and robotics”: in the possibility that such a convergence will give rise to technology with a conscience.

2. 

Here is the first sentence of Adam Roberts’s novel Bête:

As I raised the bolt-gun to its head the cow said: ‘Won’t you at least Turing-test me, Graham?’

If becoming a cyborg is a kind of reaching down into the realm of the inanimate for resources to supplement the deficiencies inherent in being made of meat, what do we call this reaching up? — this cognitive enhancement of made objects and creatures until they become in certain troubling ways indistinguishable from us? Or do we think of the designing of intelligent machines, even spiritual machines, as a fundamentally different project than the cognitive enhancement of animals? In Borne these kinds of experiments — and others that involve the turning of humans into beasts — are collectively called “biotech.” I would prefer, as a general term, the one used in China Miéville’s fabulous novel Embassytown: “biorigging,” a term that connotes complex design, ingenuity, and a degree of making-it-up-as-we-go-along. Such biorigging encompasses every kind of genetic modification but also the combining, in a single organism or thing, of biological components with more conventionally technological ones, the animate and the inanimate. It strikes me that we need a more detailed anatomy of these processes — more splitting, less lumping.

In any case, what both VanderMeer’s Borne and Roberts’s Bête do is describe a future (far future in one case, near in the other) in which human beings live permanently in an uncanny valley, where the boundaries between the human and the nonhuman are never erased but never quite fixed either, so that anxiety over these matters is woven into the texture of everyday experience. Which sounds exhausting. And if VanderMeer is right, then the management of this anxiety will become focused not on the unanswerable questions of what is or is not human, but rather on a slightly but profoundly different question: What is a person?

fleshers and intelligences

I’m not a great fan of Kevin Kelly’s brand of futurism, but this is a great essay by him on the problems that arise when thinking about artificial intelligence begins with what the Marxists used to call “false reification”: the belief that intelligence is a bounded and unified concept that functions like a thing. Or, to put Kelly’s point a different way, it is an error to think that human beings exhibit a “general purpose intelligence” and therefore an error to expect that artificial intelligences will do the same.

To this reifying orthodoxy in AI efforts Kelly opposes five affirmations of his own:

  1. Intelligence is not a single dimension, so “smarter than humans” is a meaningless concept.
  2. Humans do not have general purpose minds, and neither will AIs.
  3. Emulation of human thinking in other media will be constrained by cost.
  4. Dimensions of intelligence are not infinite.
  5. Intelligences are only one factor in progress.

Expanding on that first point, Kelly writes,

Intelligence is not a single dimension. It is a complex of many types and modes of cognition, each one a continuum. Let’s take the very simple task of measuring animal intelligence. If intelligence were a single dimension we should be able to arrange the intelligences of a parrot, a dolphin, a horse, a squirrel, an octopus, a blue whale, a cat, and a gorilla in the correct ascending order in a line. We currently have no scientific evidence of such a line. One reason might be that there is no difference between animal intelligences, but we don’t see that either. Zoology is full of remarkable differences in how animals think. But maybe they all have the same relative “general intelligence?” It could be, but we have no measurement, no single metric for that intelligence. Instead we have many different metrics for many different types of cognition.

Think, to take just one example, of the acuity with which dogs observe and respond to a wide range of human behavior: they attend to tone of voice, facial expression, gesture, even subtle forms of body language, in ways that animals invariably ranked higher on what Kelly calls the “mythical ladder” of intelligence (chimpanzees, for instance) are wholly incapable of. But dogs couldn’t begin to use tools the way that many birds, especially corvids, can. So what’s more intelligent, a dog or a crow or a chimp? It’s not really a meaningful question. Crows and dogs and chimps are equally well adapted to their ecological niches, but in very different ways that call forth very different cognitive abilities.
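Kelly’s point about rankings can be made concrete with a toy model. What follows is a minimal sketch in Python, with entirely invented animals and scores rather than real measurements: if each creature is a profile across several cognitive dimensions, then the only uncontroversial sense of “smarter than” (at least as good on every dimension, strictly better on at least one) yields no ranking at all.

```python
# A toy model of Kelly's claim that intelligence is many dimensions, not one.
# The animals and the scores are invented for illustration, not measurements.

profiles = {
    "dog":   {"social_reading": 9, "tool_use": 2, "spatial_memory": 5},
    "crow":  {"social_reading": 4, "tool_use": 9, "spatial_memory": 8},
    "chimp": {"social_reading": 7, "tool_use": 7, "spatial_memory": 6},
}

def dominates(a, b):
    """True only if a is at least as good as b on every dimension and
    strictly better on at least one: the only uncontroversial sense in
    which one profile could be called 'smarter' than another."""
    return (all(a[d] >= b[d] for d in a) and
            any(a[d] > b[d] for d in a))

names = list(profiles)
rankable_pairs = [(x, y) for x in names for y in names
                  if x != y and dominates(profiles[x], profiles[y])]
print(rankable_pairs)  # [] -- no animal dominates another; no single ladder exists
```

With profiles like these, building a single ladder requires first deciding how to weight the dimensions, and that choice is exactly where the argument gets smuggled in.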

If Kelly is right in his argument, then AI research is going to be hamstrung by its commitment to g or “general intelligence,” and will only be able to produce really interesting and surprising intelligences when it abandons the idea, as Stephen Jay Gould puts it in his flawed but still-valuable The Mismeasure of Man, that “intelligence can be meaningfully abstracted as a single number capable of ranking all people [including digital beings!] on a linear scale of intrinsic and unalterable mental worth.”

“Mental worth” is a key phrase here, because a commitment to g has been historically associated with explicit scales of personal value and commitment to social policies based on those scales. (There is of course no logical link between the two commitments.) Thus the argument frequently made by eugenicists a century ago that those who score below a certain level on IQ tests — tests purporting to measure g — should be forcibly sterilized. Or Peter Singer’s view that he and his wife would be morally justified in aborting a Down syndrome child simply because such a child would probably grow up to be a person “with whom I could expect to have conversations about only a limited range of topics,” which “would greatly reduce my joy in raising my child and watching him or her develop.” A moment’s reflection should be sufficient to dismantle the notion that there is a strong correlation between, on the one hand, intellectual agility and verbal fluency and, on the other, moral excellence; which should also undermine Singer’s belief that a child who is deficient in his imagined general intelligence is ipso facto a person he couldn’t “treat as an equal.” But Singer never gets to that moment of reflection because his rigid and falsely reified model of intellectual ability, and the relations between intellectual ability and personal value, disables his critical faculties.

If the belief that intelligence is, as Gould put it in another context, “an immutable thing in the head” which allows “grading human beings on a single scale of general capacity” is both erroneous and pernicious, it is somewhat disturbing to see that belief not only continuing to flourish in some communities of discourse but also being extended into the realm of artificial intelligence. If digital machines are deemed superior to human beings in g, and if superiority in g equals greater intrinsic worth… well, the long-term prospects for what Greg Egan calls “fleshers” aren’t great. Unless you’re one of the fleshers who controls the machines. For now.

P.S. I should add that I know that people who are good at certain cognitive tasks tend to be good at other cognitive tasks, and also that, as Freddie deBoer points out here, IQ tests — that is, tests of general intelligence — have predictive power in a range of social contexts, but I don’t think any of that undermines the points I’m making above. Happy to be corrected where necessary, of course.

the masterful diptych of Coroger Zelaznorow

It’s been very interesting for me to re-read — for the first time in 40 years, so who am I kidding, let’s just say read — Roger Zelazny’s Lord of Light. It’s a wonderful book, and I am especially pleased that I got to it just after reading Cory Doctorow’s new novel Walkaway. Doctorow’s book has some good points, but I wasn’t a big fan — I felt it left too many important questions unasked — until I realized something: Lord of Light, though written fifty years ago, is actually the sequel to Walkaway. And if you think of the two books as a diptych, the first installment gets a lot more interesting. Let me explain, with many spoilers.

Roughly in the middle of Doctorow’s novel “walkaway” scientists — that is, scientists who have gone off the standard panoptic grid of our world, the “default” world, and headed out into the wilderness to live in anarchic community — figure out how to upload human consciousness to digital form and then reconstitute that consciousness. Which means, at least according to one way of thinking, the way of thinking that Doctorow allows to dominate the book, the end of the reign of death.

The chief conflict of the book, then, pits the scientists who want to share this power with everyone against the capitalist one-percenters of “default,” who want to keep it for themselves – partly because they think that scarcity creates value and they are the Lords of Value, but also because they control the 99% by making them afraid of bodily harm and death. (As David Graeber wrote a decade or so ago, our whole social order is upheld by the threat of violence against bodies, and Walkaway is essentially Graeberian political philosophy in novelized form.)

Eventually the good guys win out, and immortality-via-upload becomes widely available – but it turns out that those minds miss being embodied, and scientists somehow find a way to grow bodies and reanimate them by injecting them with consciousness. Or something. (There aren’t a lot of details.)

The story begins less than a century from now, and ends not too much later. So let’s fast-forward a few thousand years, and imagine that Earth has died or been destroyed but humanity has spread elsewhere in the galaxy. And on at least one of the planets our descendants colonize, control of immortality has been seized by a tiny few. It turns out that, for them, simply being immortal – or, if that’s the wrong word, simply having access to recurrent embodiment – isn’t enough. Welcome to the world of Lord of Light, or, as I prefer to think of it, Walkaway: The Sequel.

The attentive reader of both books will notice that one difference between the two is that in Lord of Light human minds are no longer uploaded to the cloud, stored on a networked server, but are simply transferred from one body to another. Our author, Coroger Zelaznorow, doesn’t explain this, but it’s easy to understand what must have happened in the intervening centuries. Already in Walkaway we see the disturbances that arise when more than one living instance of the same, or “same,” person is around; those disturbances surely would have been magnified as downloading became more widespread, not least among megalomaniacs who wanted to see themselves as widely distributed in the world as possible. Moreover, minds uploaded to networked servers would have found themselves subject to the experiments, relatively benign or deeply malicious, of hackers. In the end, it seems clear, the protocols of transfer were deemed safer, more reliable, and less subject to abuse than the protocols of uploading/downloading. Perhaps someday Zelaznorow will write a novel about this period of transition between the two ways of making us immortal: the Networked Way and the Way of Transference.

Now, it will immediately be seen that the Way of Transference introduces a complication: if your consciousness is not uploaded to a supposedly safe location, then if you are murdered or have a fatal accident you, as they say in Lord of Light, “die the real death.” But is this a bug or a feature? Might it not be that many who have lived a very long time, in multiple bodies, learn that death is indeed the mother of beauty? We might here compare Iain M. Banks’s Culture books, in which, as Banks himself explained, most people live a few hundred years and then accept death. Not all — some choose the Networked Way and get themselves downloaded immediately or after a period of sleep — but most. It seems likely that to the potential immortals of Lord of Light the possibility of the “real death” is another reason for preferring the Way of Transference.

Over time, the controllers of any given culture learn how technologies work, decide which potentialities are to be embraced and which resisted, tune their employment of those technologies to their larger purposes. In Walkaway the controllers, the capitalist one-percenters, want to keep immortality for themselves, but by the time of Lord of Light the strategy has become more complex.

If Walkaway offers its readers a straightforward and apparently simplistic victory for sharing — for acting on the assumption of abundance rather than scarcity — Lord of Light shrewdly and usefully complicates the situation by showing that even if sharing wins in one place and time it may not do so always and everywhere. The Lords of Karma, as they call themselves, have discovered the virtues of control that is not based on exclusive possession. They do not want to keep immortality only for themselves; they want to share it; but they want to exercise precise control over that sharing.

And it turns out that the ideal structure to enable what they want is that of traditional Hinduism. Within a social order aligned to the Hindu cosmos, they can be gods, each of whom “rules through [his or her] ruling passions,” as one of them says, achieving and enacting the apotheosis of that passion. And by controlling Transference, they can punish those they deem wicked by re-incarnating them in an inferior body, perhaps that of an animal — you can never be sure in this world that a dog is merely a dog — and reward those whom they deem virtuous by elevating their status, incarnation by incarnation, raising them up to become demigods and then, ultimately, gods. (Of course, employing the time-honored logic of colonial powers they say that they are merely withholding blessedness from those who are not yet ready for it.) And only those who have fully internalized the ethos of the Lords of Karma will be allowed to join that pantheon. The world is governed, then, by a self-perpetuating oligarchy which must occasionally refresh itself, if only because over the centuries some will inevitably “die the real death.”

And a world so ordered is one in which the Lords of Karma are gods not just because they are (probably) immortal and (certainly) immensely powerful, but also because they can compel worship. The capitalists of Walkaway manifest a craving for mere power that would be annoyingly simplistic if the book stood alone; but when we understand that it is the first book of a diptych then we see that it describes a fairly early stage in the history of oligarchy, and that later stages make progress by a kind of ressourcement. The unspoken motto of the Lords of Karma is: Ad fontes! And the fontes to which they return are those of religion. They receive worship, and they gratify the desire of many human beings to find something or someone to worship. And by reliably granting ascent to those who satisfy their demands, they create an orderly, coherent, and logical system — a system which constitutes a powerful myth, and, as Freddie deBoer recently commented in an essay which only superficially seems to have little in common with this one, “the human animal runs on myth.”

Only a great scoundrel would seek to disrupt so peacefully disciplined a world. Or a great saint. Or someone who is a bit of both.

When I read Walkaway I was disappointed by its limited exploration of the ethics of immortality, and the complete lack of interest in metaphysics. (There is no myth in Walkaway: the place of myth is taken by 3D printers.)  The book elides vital questions simply by treating the reconstituted minds as the very same characters whom we have come to know, as the other characters themselves do. There are bits of desultory conversation about the continuity of identity via digital representation, but the narrative simply doesn’t allow us to take seriously the possibility that such representations could be deceptive and that the characters for whom we have come to have affection have in fact “died the real death.” It is only when reading Lord of Light that we see how Zelaznorow calls into question the narrative assumptions of Walkaway.

Similarly, in Walkaway our characters mainly want to stay alive, to enjoy one another’s company, to feel useful — they don’t inquire any further into life’s possible meanings, its ultimate values, what Robert Pirsig (God bless his soul) called Quality. But all these lacunae turn out not to be oversights but rather a clever suspending of certain questions so that they can be explored more fully in the sequel. That Zelaznorow is a genius. But you can only see that if you read the sequel. Reading Walkaway alone might be an underwhelming experience.

ethical questions and frivolous consciences

Our Futurisms colleague Charlie Rubin had a smart, short piece over on the Huffington Post a couple weeks ago called “We Need To Do More Than Just Point to Ethical Questions About Artificial Intelligence.” Responding to the recent (and much ballyhooed) “open letter” about artificial intelligence published by the Future of Life Institute, Professor Rubin writes:

One might think that such vagueness is just the result of a desire to draft a letter that a large number of people might be willing to sign on to. Yet in fact, the combination of gesturing towards what are usually called “important ethical issues,” while steadfastly putting off serious discussion of them, is pretty typical in our technology debates. We do not live in a time that gives much real thought to ethics, despite the many challenges you might think would call for it. We are hamstrung by a certain pervasive moral relativism, a sense that when you get right down to it, our “values” are purely subjective and, as such, really beyond any kind of rational discourse. Like “religion,” they are better left un-discussed in polite company….

No one doubts that the world is changing and changing rapidly. Organizations that want to work towards making change happen for the better will need to do much more than point piously at “important ethical questions.”

This is an excellent point. I can’t count how many bioethics talks I have heard over the years that just raise questions without attempting to answer them. It seems like some folks in bioethics have made their whole careers out of such chin-scratching.

And not only is raising ethical questions easier than answering them, but (as Professor Rubin notes) it can also be a potent rhetorical tactic, serving as a substitute for real ethical debate. When an ethically dubious activity attracts attention from critics, people who support that activity sometimes allude to the need for a debate about ethics and policy, and then act as though calling for an ethical debate is itself an ethical debate. It’s a way of treating ethical problems as obstacles to progress that need to be gotten around rather than as legitimate reasons not to do the ethically dubious thing.

Professor Rubin’s sharp critique of the “questioning” pose reminds me of a line from Paul Ramsey, the great bioethicist:

We need to raise the ethical questions with a serious and not a frivolous conscience. A man of frivolous conscience announces that there are ethical quandaries ahead that we must urgently consider before the future catches up with us. By this he often means that we need to devise a new ethics that will provide the rationalization for doing in the future what men are bound to do because of new actions and interventions science will have made possible. In contrast, a man of serious conscience means to say in raising urgent ethical questions that there may be some things that men should never do. The good things that men do can be made complete only by the things they refuse to do. [from pages 122–123 of Ramsey’s 1970 book Fabricated Man]

How many of the signers of the Future of Life Institute open letter, I wonder, are men and women of frivolous conscience?

(Hat-tip to our colleague Brendan P. Foht, who brought the Ramsey passage to our attention in the office.)

on Adam Roberts’s Bête

This is a book about the difference between being a butcher and being a murderer, if there is a difference

This is a book about the fungibility of identity

This is a book about the persistence of identity

This is a book about the relationship between identity and body

This is a book about the unforeseen consequences of technology

This is a book about some of the ways in which slick talk about the “posthuman” is vacuous

This is a book about the uses and abuses of the concept of species

This is a book about the Smiths’ song “Meat is Murder”

This is a book about all the stories that have talking animals

This is a book about Wittgenstein’s claim that “If a lion could talk, we would not understand him”

This is a book about what happens to carnivores when they become reflective about being carnivores

This is a book about Oedipus and the riddle of the Sphinx

This is a book about what Animal Farm would be like if it weren’t a parable

This is a book about what we think is below us and what we think might be above us in the Great Chain of Being

This is a book about what the Turing test can’t do

This is a book about the problem of other minds

simplification where it doesn’t belong

This is not a topic to which I can do justice in a single post, or even, I expect, a series of posts, but let me make this a placeholder and a promise of more to come. I want to register a general and vigorous protest against thought-experiments of the Turing test and Chinese room variety. These two experiments are specific to debates about intelligence (natural or artificial) and consciousness (ditto), but may also be understood as subsets of a much larger category of what we might call veil-of-ignorance strategies. These strategies, in turn, imitate the algebraic simplification of expressions.

The common method here goes something like this: when faced with a tricky philosophical problem, it’s useful to strip away all the irrelevant contextual details so as to isolate the key issues involved, which then, so isolated, will be easier to analyze. The essential problem with this method is its assumption that we know in advance which elements of a complex problem are essential and which are extraneous. But we rarely know that; indeed, we can only know that if we have already made significant progress towards solving our problem. So in “simplifying” our choices by taking an enormous complex of knowledge — the broad range of knowledge that we bring to all of our everyday decisions — and placing almost all of it behind a veil of ignorance, we may well be creating a situation so artificially reductive that it tells us nothing at all about the subjects we’re inquiring into. Moreover, we are likely to be eliminating not just what we explicitly know but also the tacit knowledge whose vital importance to our cognitive experience Michael Polanyi has so eloquently emphasized.

By contrast to the veil-of-ignorance approach, consider its near-opposite, the approach to logic and argumentation developed by Stephen Toulmin in his The Uses of Argument. For Toulmin, the problem with most traditional approaches to logic is this very tendency to simplification I’ve been discussing — a simplification that can produce, paradoxically enough, its own unexpected complications and subtleties. Toulmin says that by the middle of the twentieth century formal philosophical logic had become unfortunately disconnected from what Aristotle had been interested in: “claims and conclusions of a kind that anyone might have occasion to make.” Toulmin comments that “it may be surprising to find how little progress has been made in our understanding of the answers in all the centuries since the birth, with Aristotle, of the science of logic.”

So Toulmin sets out to provide an account of how, in ordinary life as well as in philosophical discourse, arguments are actually made and actually received. Aristotle had in one sense set us off on the wrong foot by seeking to make logic a “formal science — an episteme.” This led in turn, and eventually, to attempts to make logic a matter of purely formal mathematical rigor. But to follow this model is to abstract arguments completely out of the lifeworld in which they take place, and leave us nothing to say about the everyday debates that shape our experience. Toulmin opts instead for a “jurisprudential analogy”: a claim that we evaluate arguments in the same complex, nuanced, and multivalent way that evidence is weighed in law. When we evaluate arguments in this way we don’t get to begin by ruling very much out of bounds: many different kinds of evidence remain in play, and we just have to figure out how we see them in relation to one another. Thus Toulmin re-thinks “the uses of argument” and what counts as responsible evaluation of the arguments that we regularly confront.

It seems to me that when we try to understand intelligence and consciousness we need to imitate Toulmin’s strategy, and that if we don’t we are likely to trivialize and reduce human beings, and the human lifeworld, in pernicious ways. It’s for this reason that I would like to call for an end to simplifying thought experiments. (Not that anyone will listen.)

So: more about all this in future posts, with reflections on Mark Halpern’s 2006 essay on “The Trouble with the Turing Test”.

testing intelligence — or testing nothing?

Tim Wu suggests an experiment:

A well-educated time traveller from 1914 enters a room divided in half by a curtain. A scientist tells him that his task is to ascertain the intelligence of whoever is on the other side of the curtain by asking whatever questions he pleases.

The traveller’s queries are answered by a voice with an accent that he does not recognize (twenty-first-century American English). The woman on the other side of the curtain has an extraordinary memory. She can, without much delay, recite any passage from the Bible or Shakespeare. Her arithmetic skills are astonishing — difficult problems are solved in seconds. She is also able to speak many foreign languages, though her pronunciation is odd. Most impressive, perhaps, is her ability to describe almost any part of the Earth in great detail, as though she is viewing it from the sky. She is also proficient at connecting seemingly random concepts, and when the traveller asks her a question like “How can God be both good and omnipotent?” she can provide complex theoretical answers.

Based on this modified Turing test, our time traveller would conclude that, in the past century, the human race achieved a new level of superintelligence. Using lingo unavailable in 1914 (it was coined later by John von Neumann), he might conclude that the human race had reached a “singularity” — a point where it had gained an intelligence beyond the understanding of the 1914 mind.

The woman behind the curtain is, of course, just one of us. That is to say, she is a regular human who has augmented her brain using two tools: her mobile phone and a connection to the Internet and, thus, to Web sites like Wikipedia, Google Maps, and Quora. To us, she is unremarkable, but to the man she is astonishing. With our machines, we are augmented humans and prosthetic gods, though we’re remarkably blasé about that fact, like anything we’re used to. Take away our tools, the argument goes, and we’re likely stupider than our friend from the early twentieth century, who has a longer attention span, may read and write Latin, and does arithmetic faster.

No matter which side you take in this argument, you should take note of its terms: that “intelligence” is a matter of (a) calculation and (b) information retrieval. The only point at which the experiment even verges on some alternative model of intelligence is when Wu mentions a question about God’s omnipotence and omnibenevolence. Presumably the woman would do a Google search and read from the first page that turns up.

But what if the visitor from 1914 asks for clarification? Or wonders whether the arguments have been presented fairly? Or notes that there are more relevant passages in Aquinas that the woman has not mentioned? The conversation could come to a sudden and grinding stop, the illusion of intelligence — or rather, of factual knowledge — instantly dispelled.

Or suppose that the visitor says that the question always reminds him of the Hallelujah Chorus and its invocation of Revelation 19:6 — “Alleluia: for the Lord God omnipotent reigneth” — but that that passage rings hollow and bitter in his ears since his son was killed in the first months of what Europe was already calling the Great War. What would the woman say then? If she had a computer instead of a smartphone she could perhaps see if Eliza is installed — or she could just set aside the technology and respond as an empathetic human being. Which a machine could not do.
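The mention of Eliza is worth pausing over, because the mechanism behind such programs is exactly why empathy is out of reach for them. Here is a minimal, hypothetical sketch of an Eliza-style responder in Python; the handful of rules is invented for illustration and is nothing like Weizenbaum’s actual script. It matches surface patterns and echoes fragments back, which is why a grieving remark gets a canned prompt rather than a human reply.

```python
import re

# A toy, Eliza-style responder. The pattern/response pairs below are invented
# for illustration; they are nothing like Weizenbaum's actual script. The
# point is only the mechanism: surface pattern matching, no understanding.
RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bmy (son|daughter|father|mother)\b", re.I),
     "Tell me more about your {0}."),
    (re.compile(r"\bGod\b", re.I), "What does religion mean to you?"),
]

def respond(utterance):
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # the fallback that carries most conversations

print(respond("That passage rings hollow since my son was killed in the war."))
# -> "Tell me more about your son." -- a canned prompt, not empathy
```

Whatever the visitor actually says about his son, the program can only reach for the nearest pattern or fall back to its stock line; nothing in it registers grief.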

Similarly, what if the visitor had simply asked “What is your favorite flavor of ice cream?” Presumably then the woman would just answer his question honestly — which would prove nothing about anything. Then we would just have a person talking to another person, which we already know that we can do. “But how does that help you assess intelligence?” cries the exasperated experimenter. What’s the point of having visitors from 1914 if they’re not going to stick to the script?

These so-called “thought experiments” about intelligence deserve the scare-quotes I have just put around the phrase because they require us to suspend almost all of our intelligence: to ask questions according to a narrowly limited script of possibilities, to avoid follow-ups, to think only in terms of what is calculable or searchable in databases. They can tell us nothing at all about intelligence. They are pointless and useless.

faith and (in) AI

Freddie deBoer:

Now people have a variety of ways to dismiss these issues. For example, there’s the notion of intelligence as an ‘emergent phenomenon.’ That is, we don’t really need to understand the computational system of the brain because intelligence/consciousness/whatever is an ‘emergent phenomenon’ that somehow arises from the process of thinking. I promise: anyone telling you something is an emergent property is trying to distract you. Calling intelligence an emergent property is a way of saying ‘I don’t really know what’s happening here, and I don’t really know where it’s happening, so I’m going to call it emergent.’ It’s a profoundly unscientific argument. Next is the claim that we only need to build very basic AI; once we have a rudimentary AI system, we can tell that system to improve itself, and presto! Singularity achieved! But this is asserted without a clear story of how it would actually work. Computers, for all of the ways in which they can iterate proscribed functions, still rely very heavily on the directives of human programmers. What would the programming look like to tell this rudimentary artificial intelligence to improve itself? If we knew that, we’d already have solved the first problem. And we have no idea how such a system would actually work, or how well. This notion often is expressed with a kind of religious faith that I find disturbing.

Freddie’s important point reminds me of a comment in Paul Bloom’s recent essay in the Atlantic on brain science: “Scientists have reached no consensus as to precisely how physical events give rise to conscious experience, but few doubt any longer that our minds and our brains are one and the same.” (By the way, I don’t know what Freddie’s precise views are on these questions of mind, brain, and consciousness, so he might not agree with where I’m taking this.) Bloom’s statement that cognitive scientists “have reached no consensus” on how consciousness arises rather understates things: it would be better to say that they have no idea whatsoever how this happens. But that’s just another way of saying that they don’t know that it does happen, that “our minds and our brains are one and the same.” It’s an article of faith.

The problems with this particular variety of faith are a significant theme in David Bentley Hart’s The Experience of God, as, for instance, in this passage:

J. J. C. Smart, an atheist philosopher of some real acuity, dismisses the problem of consciousness practically out of hand by suggesting that subjective awareness might be some kind of “proprioception” by which one part of the brain keeps an eye on other parts of the brain, rather as a device within a sophisticated robot might be programmed to monitor the robot’s own systems; and one can see, says Smart, how such a function would be evolutionarily advantageous. So the problem of how the brain can be intentionally directed toward the world is to be explained in terms of a smaller brain within the brain intentionally directed toward the brain’s perception of the world. I am not sure how this is supposed to help us understand anything about the mind, or how it does much more than inaugurate an infinite explanatory regress. Even if the mechanical metaphors were cogent (which they are not, for reasons mentioned both above and below), positing yet another material function atop the other material functions of sensation and perception still does nothing to explain how all those features of consciousness that seem to defy the physicalist narrative of reality are possible in the first place. If I should visit you at your home and discover that, rather than living in a house, you instead shelter under a large roof that simply hovers above the ground, apparently neither supported by nor suspended from anything else, and should ask you how this is possible, I should not feel at all satisfied if you were to answer, “It’s to keep the rain out”— not even if you were then helpfully to elaborate upon this by observing that keeping the rain out is evolutionarily advantageous.

I highly recommend Hart’s book on this topic (and on many others). You don’t have to be a religious believer to perceive that eliminative materialism is a theory with a great many problems.

on the maker ethos

Reading this lovely and rather moving profile of Douglas Hofstadter I was especially taken by this passage on why artificial intelligence research has largely ignored Hofstadter’s innovative work and thought:

“The features that [these systems] are ultimately looking at are just shadows—they’re not even shadows—of what it is that they represent,” Ferrucci says. “We constantly underestimate—we did in the ’50s about AI, and we’re still doing it—what is really going on in the human brain.” 

The question that Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don’t you come study it?

“I have mixed feelings about this,” Ferrucci told me when I put the question to him last year. “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it”—he called Hofstadter’s work inspiring—“but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don’t think the short path to that is theories of cognition.”

Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”

Here I think we see the limitations of what we might call the Maker Ethos in the STEM disciplines — the dominance of the T and the E over the S and the M — the preference, to put it in the starkest terms, for making over thinking.

An analogous development may be occurring in the digital humanities, as exemplified by Stephen Ramsay’s much-debated claim that “Personally, I think Digital Humanities is about building things. […] If you are not making anything, you are not…a digital humanist.” Now, I think Stephen Ramsay is a great model for digital humanities, and someone who has powerfully articulated a vision of “building as a way of knowing,” and a person who has worked hard to nuance and complicate that statement — but I think that frame of mind, when employed by someone less intelligent and generous than Ramsay, could be a recipe for a troubling anti-intellectualism — of the kind that has led to the complete marginalization of a thinker as lively and provocative and imaginative as Hofstadter.

All this to say: making is great. But so is thinking. And thinking is often both more difficult and, in the long run, more rewarding, for the thinker and for the rest of us.

“What Talking with Computers Teaches Us About What It Means to Be Alive”

I have to admit that the cover of this month’s Atlantic, proclaiming “Why Machines Will Never Beat the Human Mind,” left me rather uninterested in reading the article, as claims to have made such a proof almost never hold up. And, indeed, to the extent that the article implies that it has provided a case against artificial general intelligence (AGI), it really hasn’t (for my money, it’s an open question as to whether AGI is possible).

Nonetheless, Brian Christian’s article is easily the most insightful non-technical commentary on the Turing Test I’ve ever read, and one of the best pieces on artificial intelligence in general I’ve read. If he hasn’t disproved AGI, he has done much to show just what a difficult task it would be to achieve it — just how complicated and inscrutable is the subject that artificial intelligence researchers are attempting to duplicate, imitate, and best; and how the AI software we have today is not nearly as comparable to human intelligence as researchers like to claim.

Christian recounts his experience participating as a human confederate in the Loebner Prize competition, the annual event in which the Turing Test is carried out upon the software programs of various teams. Although no program has yet passed the Turing Test as originally described by Alan Turing, the competition awards some consolation prizes, including the Most Human Computer and the Most Human Human, for, respectively, the program able to fool the most judges that it is human and the person able to convince the most judges that he is human.

Christian makes it his goal to win the Most Human Human prize, and from his studies and efforts to win, offers a bracing analysis of human conversation, computer “conversation,” and what the difference between the two teaches us about ourselves. I couldn’t do justice to Christian’s nuanced argument if I attempted to boil it down here, so I’ll just say that I can’t recommend this article highly enough, and will leave you with a couple excerpts:

One of the first winners [of the Most Human Human prize], in 1994, was the journalist and science-fiction writer Charles Platt. How’d he do it? By “being moody, irritable, and obnoxious,” as he explained in Wired magazine — which strikes me as not only hilarious and bleak, but, in some deeper sense, a call to arms: how, in fact, do we be the most human we can be — not only under the constraints of the test, but in life?…

We so often think of intelligence, of AI, in terms of sophistication, or complexity of behavior. But in so many cases, it’s impossible to say much with certainty about the program itself, because any number of different pieces of software — of wildly varying levels of “intelligence” — could have produced that behavior.

No, I think sophistication, complexity of behavior, is not it at all. For instance, you can’t judge the intelligence of an orator by the eloquence of his prepared remarks; you must wait until the Q&A and see how he fields questions. The computation theorist Hava Siegelmann once described intelligence as “a kind of sensitivity to things.” These Turing Test programs that hold forth may produce interesting output, but they’re rigid and inflexible. They are, in other words, insensitive — occasionally fascinating talkers that cannot listen.

Christian’s article is available here, and is adapted from his forthcoming book, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive. The New Atlantis also has two related essays worth reading: “The Trouble with the Turing Test” by Mark Halpern, which makes many similar arguments but goes more deeply into the meaning of the “intelligence” we seem to see in conversational software, and “Till Malfunction Do Us Part,” Caitrin Nicol’s superb essay on sex and marriage with robots, which features some of the same AI figures discussed in Christian’s article.