[Continuing coverage of the 2009 Singularity Summit.]

Another talk is underway now, this one titled “Technological Convergence Leading to Artificial General Intelligence” by Itamar Arel of the University of Tennessee. (Abstract and bio available here.)

Arel says he wants to focus on AI achieved in a much shorter timeframe than what’s been talked about so far. He asks the audience how many people think we’ll achieve artificial general intelligence within the next ten years. About a fifth raise their hands, and he’s surprised at how few that is, especially considering the audience. He hopes to convince us otherwise in his talk.

[An aside: The wireless Internet in the auditorium is horrible. If the Singularity is near at hand, I hope that the ability to provide connectivity to a room full of people, only a small portion of whom are using laptops, is nearer.]

Arel is outlining a system he’s developed that mimics (which is not to say simulates) the cortex. It exploits the parallelism of the cortex to get past the dreaded von Neumann bottleneck. The cortex is an “inspired” biological design, he’s now said several times: it is able to discover structure based on regularities in its input. He’s talking now about the features of the brain that allow it to discard massive amounts of sensory data and pick out only what’s relevant.
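[A sketch for the curious: I imagine something like the following toy “node.” This is my own illustration, not Arel’s actual architecture, which is surely more elaborate. It clusters a stream of inputs into a few prototypes on the fly and passes up only a prototype label, discarding the raw data: one crude way to discover structure from regularities and throw the rest away.]

```python
# A toy "cortical node" (my illustration, not Arel's system): it clusters
# incoming patterns online into a handful of prototypes and passes up only
# the prototype index, discarding the raw data.
import math, random

class Node:
    def __init__(self, n_prototypes, dim, lr=0.1):
        self.protos = [[random.gauss(0, 1) for _ in range(dim)]
                       for _ in range(n_prototypes)]
        self.lr = lr

    def observe(self, x):
        # find the closest stored regularity...
        dists = [math.dist(p, x) for p in self.protos]
        k = dists.index(min(dists))
        # ...nudge it toward the input (structure learned from repetition)...
        self.protos[k] = [pi + self.lr * (xi - pi)
                          for pi, xi in zip(self.protos[k], x)]
        # ...and emit only the label; the raw input is thrown away.
        return k

node = Node(n_prototypes=2, dim=2)
stream = [(1.0, 1.1), (0.9, 1.0), (-1.0, -1.2), (-0.9, -1.0)] * 50
labels = [node.observe(x) for x in stream]
print(labels[-4:])  # after exposure, similar inputs map to the same label
```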

He’s talking now about the importance of rewards and reinforcement in learning. This has been a common theme of the talks so far: intelligence exists to achieve a goal, and we need to help it along massively in achieving that goal. I haven’t really heard any stabs at what the goal is yet, other than cutesy examples of goals for existing intelligences (he just showed a picture of a dog trying to fetch a bone).
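[For readers who haven’t met reinforcement learning: below is a minimal tabular Q-learning loop, my own toy example rather than anything from the talk. The only training signal is a reward delivered when the agent reaches its goal, and that alone is enough to shape its behavior.]

```python
# Minimal tabular Q-learning (my toy example): an agent on a short line
# learns to walk right toward a reward at the far end.
import random

N_STATES = 6          # positions 0..5; the goal (and reward) is at state 5
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def pick_action(s):
    # epsilon-greedy with random tie-breaking: mostly exploit, sometimes explore
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(300):
    s = 0
    while s != N_STATES - 1:
        a = pick_action(s)
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # reward only at the goal
        # the reinforcement update: move Q toward reward + discounted future value
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s_next

# the learned policy: walk right from every position
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```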

He’s talking about how his program has successfully detected emotions in faces, smiling and frowning in particular. It learned to do this from examples alone, using “reinforcement” to tell it whether it was right or wrong. I don’t see what this has to do with reinforcement as he was describing it earlier, as in rewards and punishments. This is just the basic premise of the optimization learning algorithms that have been around since the 1960s: they self-modify, and they do so on the basis of information about whether they were right or wrong. The learning algorithms may be far more powerful than they used to be, but the basic principle is the same.
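[To illustrate: here’s a bare-bones perceptron, the sort of error-correcting learner the field has had since the 1960s. This is my own sketch, not Arel’s classifier, and the two-dimensional features standing in for smile-vs.-frown are hypothetical. The point is that the only feedback the algorithm gets is whether each prediction was right or wrong.]

```python
# A 1960s-style perceptron (my sketch, not Arel's classifier): the only
# feedback is whether each prediction was right or wrong, and the weights
# self-modify accordingly.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):        # y is +1 or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                        # "wrong" signal: adjust weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Hypothetical 2-D stand-ins for "smile" (+1) vs. "frown" (-1) features.
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -2.0), (-2.0, -1.5)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
print(w, b)  # a separating line learned purely from right/wrong corrections
```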

Arel’s talk is over now. I didn’t see anything here in the way of evidence that AGI (artificial general intelligence) is just around the corner, which was his claim at the outset.

Nothing of note in the questions. Someone asked how many resources we should devote to achieving AGI. Arel’s answer: lots.

Well then.
