Kitty minus kitty

In my last post, I noted the problems with Michael Anissimov’s attempt to defend “morphological freedom” as following from the civil rights movement. I described the way racism has been historically combated by appealing to what we have in common. This is an inherent problem with comparing “species-ism” to racism, because racism is combated precisely by appealing to our common humanity — that is, to our common species.
But it’s worth noting that a similar point holds when we look at an existing, non-hypothetical debate about interspecies rights and difference: the animal-rights debate. If we apply Mr. Anissimov’s “morphological freedom” argument to that debate, we again find it pretty lacking: Advocates of animal rights don’t argue that we should treat, say, a pig with respect or kindness because it “has a right to be a pig,” but rather because we should empathize with the way that, like us, a pig is intelligent (after a fashion) and has emotions and the capacity for suffering.
In fact, Mr. Anissimov, like many transhumanists, considers himself to be continuing the movement for animal rights in addition to civil rights. It’s all part of the ostensible transhumanist benevolence outreach, the grand quest to end suffering. But their formulation of this is to “reprogram” animals so as to end predation. Cats could go on being cat-like in some way, but we have an obligation to remake them so that they no longer hunt and kill. But have a look at this:
Where is the line here between the feline instincts to hunt and play? Is the hunting aspect of a cat something wholly separable from its nature, something that can be cleanly excised? Isn’t a cat minus its hunting instinct a cat minus a cat?
The suggestion of a project to end predation illustrates the transhumanist inclination to see living beings as simply a collection of components that have no logical dependencies on each other — as independent parts rather than wholes. But, more to the point, it makes the question of morphological freedom a pressing one for transhumanists themselves, who before undertaking such a project would quite seriously have to confront the question, “does a cat have a right to be a cat?”

An Ideal Model for WBE (or, I Can Haz Whole-Brain Emulation)

In case you missed the hubbub, IBM researchers last month announced the creation of a powerful new brain simulation, which was variously reported as being “cat-scale,” an “accurate brain simulation,” a “simulated cat brain,” capable of “matching a cat’s brainpower,” and even “significantly smarter than [a] cat.” Many of the claims go beyond those made by the researchers themselves — although they did court some of the sensationalism by playing up the cat angle in their original paper, which they even titled “The Cat is Out of the Bag.”
Each of these claims is either false or so ill-defined as to be unfalsifiable — and those critics who pointed out the exaggerations deserve kudos.
But this story is really notable not because it is unusual but rather because it is so representative: journalistic sensationalism and scientific spin are par for the course when it comes to artificial intelligence and brain emulation. I would like, then, to attempt to make explicit the premises that underlie the whole-brain emulation project, with the aim of making sense of such claims in a less ad hoc manner than is typical today. Perhaps we can even evaluate them using falsifiable standards, as should be done in a scientific discipline.
How Computers Work
All research in artificial intelligence (AI) and whole-brain emulation proceeds from the same basic premise: that the mind is a computer. (Note that in some projects, the whole mind is presumed to be a computer, while in others, only some subset of the mind is so presumed, e.g. natural language comprehension or visual processing.)
What exactly does this premise mean? Computer systems are governed by layers of abstraction. At its simplest, a physical computer can be understood in terms of four basic layers, from top to bottom:
1. The program.
2. The instruction set architecture (ISA) offered by the processor.
3. The physical processor that implements those instructions.
4. The remaining physical properties of the processor's substrate, which are irrelevant to its operation.
The layers break down into two software layers and two physical layers. The processor is the device that bridges the divide between software and the physical world. It offers a set of symbolic instructions. But the processor is also a physical object designed to correspond to those symbols. An abacus, for example, can be understood as “just” a wooden frame with beads, but it has been designed to represent numbers, and so can perform arithmetic calculations.
Above the physical/software bridge provided by the processor is the program itself, which is written using instructions in the processor’s programming language, also known as the Instruction Set Architecture (ISA). For example, an x86 processor can execute instructions like “add these two numbers,” “store this number in that location,” and “jump back four instructions,” while a program written for the x86 will be a sequence of such instructions. Such programs could be as simple as an arithmetical calculator or as complex as a web browser.
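To make the layering a bit more concrete, here is a sketch in Python of a made-up three-instruction ISA and a tiny program written in it. The instruction names and the interpreter are invented for illustration only; the interpreter simply plays the role that a real processor plays for real x86 code.

```python
# A minimal sketch of the program/ISA distinction, using a made-up
# three-instruction ISA (not real x86): STORE, ADD, and JUMP_BACK.
# The "program" is just a sequence of such instructions; the run()
# function below plays the role of the processor that executes them.

def run(program, memory):
    """Execute a list of (opcode, operands...) tuples against a memory dict."""
    pc = 0  # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "STORE":          # store a literal number at an address
            value, dest = args
            memory[dest] = value
        elif op == "ADD":          # add the values at two addresses
            a, b, dest = args
            memory[dest] = memory[a] + memory[b]
        elif op == "JUMP_BACK":    # jump back a fixed number of instructions
            (offset,) = args
            pc -= offset
            continue
        pc += 1
    return memory

# A tiny "program" written in this ISA: put 2 and 3 in memory, then add them.
program = [
    ("STORE", 2, "x"),
    ("STORE", 3, "y"),
    ("ADD", "x", "y", "sum"),
]
print(run(program, {}))   # {'x': 2, 'y': 3, 'sum': 5}
```

The program above is specified entirely in terms of the ISA's instructions; how those instructions actually get carried out is the business of whatever sits below.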
Below the level of the processor is the set of properties of the physical world that are irrelevant to the processor’s operation. More specifically, it is the set of properties of the physical processor that do not appear in the scheme relating the ISA to its physical implementation in the processor. So, for example, a physical Turing Machine can be constructed using a length of tape on which symbols are represented magnetically. But one could also make the machine out of a length of paper tape painted different colors to represent different symbols. In each case, the machine has both magnetic and color properties, but which properties are relevant and which are irrelevant to its functioning as a processor depends on the scheme by which the physical/software divide is bridged.
Note the nature of this layered scheme: each layer requires the layer below it, but could function with a different layer below it. Just like the Turing Machine, an ISA can be implemented on many different physical processors, each of which abstracts away different sets of physical properties as irrelevant to their functioning. And a program, in turn, can be written using many different ISAs.
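The tape example can be sketched the same way. Below, one and the same abstract rule is run over two invented "physical" tapes, one whose symbols are read off magnetic polarities and one whose symbols are read off paint colors. In reality both tapes would have both kinds of properties; only the property named in the decoding scheme matters to the machine.

```python
# A sketch of one abstract machine realized on two different "physical" tapes.
# The machine only ever sees the abstract symbols 0 and 1; the encoding
# scheme decides which physical property (polarity or paint color) represents them.

# Abstract rule: a single pass that inverts every symbol on the tape.
def invert(symbols):
    return [1 - s for s in symbols]

# "Physical" implementation 1: symbols represented by magnetic polarity.
magnetic_tape = ["N", "S", "S", "N"]
decode_magnetic = {"N": 0, "S": 1}
encode_magnetic = {0: "N", 1: "S"}

# "Physical" implementation 2: symbols represented by paint color.
painted_tape = ["red", "blue", "blue", "red"]
decode_paint = {"red": 0, "blue": 1}
encode_paint = {0: "red", 1: "blue"}

def run_on_tape(tape, decode, encode):
    abstract = [decode[cell] for cell in tape]   # bridge: physical -> symbolic
    result = invert(abstract)                    # the machine's actual operation
    return [encode[s] for s in result]           # bridge: symbolic -> physical

print(run_on_tape(magnetic_tape, decode_magnetic, encode_magnetic))  # ['S', 'N', 'N', 'S']
print(run_on_tape(painted_tape, decode_paint, encode_paint))         # ['blue', 'red', 'red', 'blue']
```

Which property counts as "relevant" is fixed by the bridging scheme, not by the physics of the tape.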
An Ideal Model for Whole-Brain Emulation
In supposing that the mind is a computer, the whole-brain emulation project proceeds on the premise that the computational model thus outlined applies to the mind. That is, it posits a sort of Ideal Model that can, in theory, completely describe the functioning of the mind. The task of the whole-brain emulation project, then, is to “fill in the blanks” of this model by attempting, either explicitly or implicitly, to answer the following four questions:
1. What is the mind’s program? That is, what is the set of instructions by which the brain gives rise to consciousness, qualia, and other mental phenomena?
2. In which instruction set is that program written? That is, what is the syntax of the basic functional unit of the mind?
3. What constitutes the hardware of the mind? That is, what is the basic functional unit of the mind? What structure in the brain implements the ISA of the mind?
4. Which physical properties of the brain are irrelevant to the operation of its basic functional unit? That is, which physical properties of the brain can be left out of a complete simulation of the mind?
We could restate the basic premise of AI as the claim that the mind is an instantiation of a Turing Machine, and then equivalently summarize these four questions by asking: (1) What is the Turing Machine of which the mind is an instantiation? And (2) What physical structure in the brain implements that Turing Machine? Only when these questions can be answered will it be possible to program those answers into a computer, and only then will whole-brain emulation be achievable.
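One way to see how much these questions demand is to write the four blanks down as a bare data structure. The sketch below is purely schematic and of my own devising; every field is an unknown, and the names are placeholders rather than findings.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class IdealModel:
    """The four blanks the whole-brain emulation project must fill in.
    Every field is an unknown; the names are placeholders, not findings."""
    program: Optional[Any] = None                # 1. the mind's "program"
    instruction_set: Optional[Any] = None        # 2. the syntax of the mind's basic functional unit
    functional_unit: Optional[Any] = None        # 3. the brain structure implementing that instruction set
    irrelevant_properties: Optional[Any] = None  # 4. physical properties a simulation may safely omit

# Emulation, on this view, is only well defined once every field is filled in.
def ready_to_emulate(model: IdealModel) -> bool:
    return None not in (model.program, model.instruction_set,
                        model.functional_unit, model.irrelevant_properties)

print(ready_to_emulate(IdealModel()))  # False: all four blanks are still open
```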
Limitations of the Ideal Model
You might object that this analysis is far too literal in its treatment of the mind as a computer. After all, don’t AI researchers now appreciate that the mind is squishy, indefinite, and difficult to break into layers (in a way that this smooth, ideal model and “Good Old-Fashioned AI” don’t acknowledge)?
There are two possible responses to this objection. Either mental phenomena (including intelligence, but also consciousness, qualia, and so forth) and the mind as a whole are instantiations of Turing Machines and therefore susceptible to the model and to replication on a computer, or they are not.
If the mind is not an instantiation of a Turing Machine, then the objection is correct, but the highest aspirations of the AI project are impossible.
If the mind is an instantiation of a Turing Machine, then the objection misunderstands the layered nature of physical and computer systems alike. Specifically, the objection understands that AI often proceeds by examining the top layer of the model — the “program” of the mind — but then denies this layer’s relationship to the layers below it. This objection essentially makes the same dualist error often attributed to AI critics like John Searle: it argues that if a computational system can be described at a high level of complexity bearing little resemblance to a Turing Machine, then it does not have some underlying Turing Machine implementation. (There is a deep irony in this objection — about which, more in a later post.)
There is a related question about this Ideal Model: Suppose we can ascertain the Turing Machine of which the mind is an instantiation. And suppose we then execute this program on a digital computer. Will the computer then be a mind? Will it be conscious? This is an open question, and a vexing and tremendously important one, but it is sufficient simply to note here that we do not know for certain whether such a scenario would result in a conscious computer. (If it would not, then certain premises of the Ideal Model would be false, but more about this, too, in a later post.)
A third, and much more pressingly relevant, note about the model: for much the same reason that we do not know whether simulating the brain at a low level will give rise to the high-level phenomena of the mind, even a completely accurate model of the brain would not by itself give us an understanding of the mind. This is, again, because of the layered nature of physical and computational systems. A low-level simulation of a complex system is just as difficult to understand as the original physical system; in either case, the higher-level behavior must be understood separately. Watching the instructions execute on a computer processor lets you completely predict the program’s behavior, but it does not necessarily let you grasp its higher-level structure; likewise, Newton would not necessarily have discerned his mechanical theories by making a perfectly accurate simulation of an apple falling from a tree. (I explained this layering in more depth in this recent New Atlantis essay.)
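The processor-level point can be seen in an example far humbler than a brain, chosen by me purely for illustration: every step of the little loop below is perfectly predictable, yet nothing in the step-by-step trace announces that the loop as a whole computes a greatest common divisor. That higher-level description has to be supplied separately.

```python
# Each low-level step here is completely determined and easy to trace,
# but the trace alone does not say "this computes the greatest common divisor."
a, b = 1071, 462
while b != 0:
    print(f"step: a={a}, b={b}")   # a perfectly predictable low-level trace
    a, b = b, a % b                # swap-and-remainder: the entire "mechanism"
print(f"result: {a}")              # result is 21; the high-level meaning (a GCD) is not in any single step
```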
Achieving and Approximating the Ideal Model
Again, the claim in this post is that the Ideal Model presented here is the implicit model on which the whole-brain emulation project proceeds. Which brings us back to the “cat-brain” controversy.
When we attempt to analyze how the paper’s authors “fill in the blanks” of the Ideal Model, we see that they seem to define each of the levels (in some cases explicitly, in others implicitly) as follows: (1) the neuron is the basic functional unit of the mind; (2) everything below the level of the neuron is irrelevant; (3) the neuron’s computational power can be accurately replicated by simulating only its electrical action potential; and (4) the program of the mind is encoded in the synaptic connections between neurons. The neuron-level simulation appears to be quite simple, omitting a great deal of detail without offering justification or explanation of whether the omitted details are relevant and what the effects of leaving them out might be if they are.
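To get a feel for what a neuron reduced to its electrical behavior looks like in code, here is a minimal leaky integrate-and-fire sketch of the general kind used in large-scale spiking simulations. It is emphatically not the model from the IBM paper, and every parameter below is an arbitrary placeholder; the point is only how much of the biological cell such a model omits, and how the “program” reduces to a matrix of synaptic weights.

```python
import random

# A minimal leaky integrate-and-fire network sketch (illustrative only; not the
# IBM model). Each neuron is reduced to a single membrane voltage; everything
# else about the biological cell is omitted. The "program" is the weight matrix.

N = 5                      # number of neurons (placeholder)
DT = 1.0                   # timestep, ms (placeholder)
TAU = 20.0                 # membrane time constant, ms (placeholder)
V_REST, V_THRESH = 0.0, 1.0

random.seed(0)
weights = [[random.uniform(0.0, 0.3) if i != j else 0.0 for j in range(N)]
           for i in range(N)]          # synaptic "program": who excites whom, and how much
voltage = [V_REST] * N

for step in range(50):
    external = [random.uniform(0.0, 0.15) for _ in range(N)]   # stand-in for outside input
    spiked = [v >= V_THRESH for v in voltage]
    for i in range(N):
        if spiked[i]:
            voltage[i] = V_REST                                 # reset after a spike
            continue
        synaptic = sum(weights[j][i] for j in range(N) if spiked[j])
        # leak toward rest, plus whatever input arrived this step
        voltage[i] += (DT / TAU) * (V_REST - voltage[i]) + external[i] + synaptic
    if any(spiked):
        print(f"t={step * DT:.0f} ms, spikes at neurons "
              f"{[i for i, s in enumerate(spiked) if s]}")
```

Everything about the cell’s chemistry, morphology, and environment has been collapsed into one number per neuron and one number per synapse.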
Aside from the underlying question of whether such an Ideal Model of the mind really exists — that is, of whether the mind is in fact a computer — the most immediate question is: How close have we come to filling in the details of the Ideal Model? As the “cat-brain” example should indicate, the answer is: not very close. As Sally Adee writes in IEEE Spectrum:
Jim Olds (who directs George Mason University’s Krasnow Institute for Advanced Study, and who is a neuroscientist) explains that what neuroscience is sorely lacking is a unifying principle. “We need an Einstein of neuroscience,” he says, “to lay out a fundamental theory of cognition the way Einstein came up with the theory of relativity.” Here’s what he means by that. What aspect of the brain is the most basic element that, when modeled, will result in cognition? Is it a faithful reproduction of the wiring diagram of the brain? Is it the exact ion channels in the neurons?…
No one knows whether, to understand consciousness, neuroscience must account for every synaptic detail. “We do not have a definition of consciousness,” says [Dartmouth Brain Engineering Laboratory Director Richard] Granger. “Or, worse, we have fifteen mutually incompatible definitions.”
The sorts of approximation seen in the “cat-brain” case, then, are entirely understandable and unavoidable in current attempts at whole-brain emulation. The problem is not the state of the art, but the overconfidence in understanding that so often accompanies it. We really have no idea yet how close these projects come to replicating or even modeling the mind. Note carefully that the uncertainty exists particularly at the level of the mind rather than the brain. We have a rather good idea of how much we do and do not know about the brain, and, in turn, how close our models come to simulating our current knowledge of the brain. What we lack is a sense of how this uncertainty aggregates at the level of the mind.
Many defenders of the AI project argue that it is precisely because the brain has turned out to be so “squishy,” indefinite, and unlike a computer that approximations at the low level are acceptable. Their argument is that the brain is hugely redundant, designed to give rise to order at a high level out of disorder at a low level. This may or may not be the case, but again, if it is, we do not know how this happens or which details at the low level are part of the “disorder” and thus can safely be left out of a simulation. The aggregate low-level approximations may simply be filtered out as noise at a high level. Alternatively, if the basic premise that the mind is a computer is true, then even minuscule errors in the approximation of its basic functional unit may aggregate into wild differences in behavior at the high level, as they easily can when a computer processor malfunctions at a small but regular rate.
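Whether such errors wash out or compound is not an idle worry. In the toy computation below (a standard chaotic map, chosen by me for illustration and drawn from no brain model), two runs that differ by one part in a billion in their starting value soon bear no resemblance to each other. We simply do not know whether the mind’s basic functional unit behaves more like this, or more like a system that filters small errors out as noise.

```python
# Two runs of the same simple nonlinear update (the logistic map), differing
# only by a one-part-in-a-billion "approximation error" in the starting value.
x_exact, x_approx = 0.4, 0.4 + 1e-9
for step in range(1, 51):
    x_exact = 3.9 * x_exact * (1 - x_exact)
    x_approx = 3.9 * x_approx * (1 - x_approx)
    if step % 10 == 0:
        print(f"step {step:2d}: exact={x_exact:.6f}  approx={x_approx:.6f}  "
              f"difference={abs(x_exact - x_approx):.6f}")
# By roughly step 40 to 50 the two trajectories typically bear no resemblance to each other.
```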
Until we have better answers to these questions, most of the claims such as those surrounding the “cat brain” should be regarded as grossly irresponsible. That the simulation in question is “smarter than a cat” or “matches a cat’s brainpower” is almost certainly false (though to my knowledge no efforts have been made to evaluate such claims, even using some sort of feline Turing Test — which, come to think of it, would be great fun to dream up). The claim that the simulation is “cat-scale” could be construed as true only insofar as it is so vaguely defined. Such simulations could rather easily be altered to further simplify the neuron model, shifting computational resources to simulate more neurons, resulting in an “ape-scale” or “human-scale” simulation — and those labels would be just as meaningless.
When reading news reports like many of those about the “cat-brain” paper, the lay public may instinctively take the extravagant claims with a grain of salt, even without knowing the many gaps in our knowledge. But it is unfortunate that reporters and bloggers who should be well-versed in this field peddle baseless sensationalism. And it is unfortunate that some researchers should prey on popular ignorance and press credulity by making these claims. But absent an increase in professional sobriety among journalists and AI researchers, we can only expect, as Jonah Lehrer has noted, many more such grand announcements in the years to come.