Seeing and Believing

John Ruskin, in the third volume of Modern Painters (1856), defined the “pathetic fallacy” this way: “false appearances … entirely unconnected with any real power or character in the object, and only imputed to it by us.” He was largely but not entirely critical of this fallacy for its tendency to produce bad poetry. But as a reflection of certain kinds of human character, the story was more complex:

The temperament which admits the pathetic fallacy, is … that of a mind and body in some sort too weak to deal fully with what is before them or upon them; borne away, or over-clouded, or over-dazzled by emotion; and it is a more or less noble state, according to the force of the emotion which has induced it. For it is no credit to a man that he is not morbid or inaccurate in his perceptions, when he has no strength of feeling to warp them; and it is in general a sign of higher capacity and stand in the ranks of being, that the emotions should be strong enough to vanquish, partly, the intellect, and make it believe what they choose. But it is still a grander condition when the intellect also rises, till it is strong enough to assert its rule against, or together with, the utmost efforts of the passions; and the whole man stands in an iron glow, white hot, perhaps, but still strong, and in no wise evaporating; even if he melts, losing none of his weight.

I was reminded of the pathetic fallacy by this music video:

NO “Stay With Me” from Ryan Reichenfeld on Vimeo.

However charming in its own way, this video is certainly an instance of “false appearances.” But it is less clear just what emotion the filmmakers are “over-dazzled” by, or whether they are to be credited with an emotion sufficiently powerful to overwhelm a strong intellect, or rather with a weak intellect easily misled by emotion. I’m inclined to think Ruskin would find it bad poetry: What is the point of ascribing human emotional characteristics to crash-test dummies? One might as well feel bad for the car being crashed. Does it add anything to the longing of the song’s lyrics to have them reflected in an impossible scenario, or is it rather some post-modern ironic distancing from longing, an unwillingness to commit to it even while expressing it?

Perhaps a recent interview with Sherry Turkle, the erstwhile techno-optimist, helps to clarify this particular pathetic fallacy. Turkle has written a book called Alone Together, which she calls “a book of repentance in the sense that I did not see this coming, this moment of temptation that we will have machines that will care for us, listen to us, tend to us.” She explains:

People are so vulnerable and so willing to accept substitutes for human companionship in very intimate ways. I hadn’t seen that coming, and it really concerns me that we’re willing to give up something that I think defines our humanness: our ability to empathize and be with each other and talk to each other and understand each other. And I report to you with great sadness that the more I continued to interview people about this, the more I realized the extent to which people are willing to put machines in this role. People feel that they are not being heard, that no one is listening. They have a fantasy that finally, in a machine, they will have a nonjudgmental companion.

The video takes this idea one step further — a companion that will save us from the mere humans who are not hearing us. I suspect that here we have the pathetic fallacy at the heart of social robotics. It is a vicious circle: the more we put our hopes in machine companions, the less we expect from each other; and the less we expect from each other, the more we will accept the substitute of machine companions. Thus does “only connect” become “just plug it in.”

All the lonely people — where do they all belong?

Scientific American reports that a project known as Robot Companions, “which will develop soft-bodied ‛perceptive’ robots as companions for the lonely,” has been selected as a finalist for an EU competition that will award one billion Euros (1.4 billion US dollars) over ten years to

two huge flagship projects that will apply information and communication technologies to social problems. The Future and Emerging Technologies (FET) Flagships aim to unite Europe’s scattered academic forces around well-defined missions that feed directly into the European Union’s social or political goals. “They are like ‛moon-landing’ projects,” says Henry Markram, a neuroscientist at the EPFL in Lausanne, Switzerland…

Kismet, a “social” robot.

Who knew it was one of the EU’s social and political goals to deal with the problem of loneliness? But if one is so bold as to try to solve, once and for all, what might otherwise seem to be a way of human being-in-the-world as perennial as sadness itself, then is a “smart” stuffed animal really the way to go?
Of course, people can find emotional solace and support in nearly anything, as readers of Peanuts know full well. But I figure there are essentially two possibilities here. One is that our lonely subject is sufficiently disturbed to be fooled into thinking his soft-bodied companion really is a companion — that it really cares for him and needs his care for its own emotional and physical wellbeing. In that case, Europeans would be spending over a billion dollars to encourage delusions — probably not for the first time.

The second possibility is that our lonely subject is sufficiently lonely and willing to fool himself into thinking his companion really is a companion. In that case, Europeans will have spent their money to reduce the amount of imagination lonely people need in order to find substitutes for human contact — not to mention the need to help lonely people find actual human contact.

Let me put the point another way. Strictly speaking, lonely people are those who feel they are missing human relationships. They want those relationships, at least at some level, but something in their will or their circumstances stands in the way. Imagine one society that puts time, effort, and money into overcoming those impediments, and into allowing people who are lonely to get real human companionship. Imagine another where some large portion of those resources is expended on giving them ways to avoid the real human companionship whose absence defines their loneliness. Which society is actually doing more about loneliness? Which seems to be the more humane?
Our developing abilities in computing and IT are amazing. But we are in that heady stage where, given this bright new hammer, everything looks like a nail. As we flail around, let’s hope not too much damage is done to some of the more fragile aspects of our world, like lonely people.

Editor’s note: As mentioned in the previous post, see also our colleague Caitrin Nicol’s sagacious essay “Till Malfunction Do Us Part” for an exploration of robotic companionship, intimacy, and marriage.

The Blending of Humans and Robots

David Gelernter has written a characteristically thought-provoking essay about what guidance might be gleaned from Judaism for how human beings ought to treat “sophisticated anthropoid robots” with artificial intelligence powerful enough to allow them to respond to the world in a manner that makes them seem exactly like us. Taking his cue from Biblical and rabbinic strictures concerning cruelty to animals, he argues that because these robots “will seem human,” we should avoid treating them badly lest we become “more oblivious of cruelty to human beings.”

This conclusion, which one might draw as well on Aristotelian as on Biblical grounds, is a powerful one — and in a world of demolition derbies and “Will It Blend?,” where even a video of a washing machine being destroyed can go viral, it is hard to deny that Gelernter has identified a potentially serious issue. It was raised with great force in the “Flesh Fair” scenes of the 2001 movie A.I., where we see robots being hunted down, herded together, and subjected to various kinds of creative destruction in front of howling fans. Meanwhile, the robots look on quietly with what I have always found to be heartbreaking incomprehension.

And yet, it also seems to me that the ringleader at the Flesh Fair, vicious though he is, is not entirely wrong when he harangues the crowd about the need to find a way to assert the difference between humans and robots in a world where it is becoming increasingly easy to confuse the two. And it is in this connection that I wonder whether Gelernter’s argument has sufficiently acknowledged the challenge to Jewish thought that is being posed by at least some of the advocates of the advanced artificial intelligence he is describing.

Gelernter knows full well the “sanctity and ineffable value” that Judaism puts on human life, which is to say he knows that in Jewish thinking human beings are unique within creation. In such a framework, it is understandable why the main concern with animal (or robot) cruelty should be the harm it might do to “our own moral standing” or “the moral stature and dignity of human beings.” But the moral dignity of human beings and our uniqueness in creation is precisely what is coming under attack from transhumanists, as well as from the less potent but more widespread forms of scientism and technophilia in our culture. Gelernter is certain that these robots will feel no pain; but what of those who would reply that they will “process” an electrical signal from some part of their bodies that will trigger certain kinds of functions — which is, after all, what pain “really” is? Gelernter is certain that these anthropoid robots will have no inner life, but what of those, such as Tor Nørretranders and Daniel Dennett, who are busy arguing that what we call consciousness is just a “user illusion”?

I don’t doubt that Gelernter could answer these questions. But I do doubt that his answers would put an end to all the efforts to convince us that we are, after all, simply “meat machines.” And if more and more we think of ourselves as “meat machines,” then what Gelernter calls the “pernicious incrementalism” of cruelty to robots, which he is reasonably concerned about, points in another direction as well: not that we start treating “thous” as “its,” but that in transforming “its” into “thous” we take all the moral meaning out of “human.”

It probably should not surprise us that there are dangers in kindness to robots as well as in cruelty, but the fact that it is so might prompt us to wonder about the reasons that seem to make going down this road so compelling. Speaking Jewishly, Gelernter might recall the lesson of the pre-twentieth-century accounts of the golem, the legends, going back to the Talmud, of pious men creating an artificial anthropoid. Nearly from the start two things are clear about the golem: only the wisest and most pious could ever hope to make one, but the greatest wisdom would be to know how and not to do so.

Day 2 at H+ Summit: George Dvorsky gets serious

The 2010 H+ Summit is back underway here at Harvard, running even later than yesterday. After the first couple of talks, the conference launches into a more philosophical block, which promises a break in the doldrums of most of these talks so far. First up in this block is George Dvorsky (bio, slides, on-the-fly transcript), who rightly notes that ethical considerations have largely gone unmentioned so far at this conference. And how. He also notes in a tweet that “The notion that ethicists are not needed at a conference on human enhancement is laughable.” Hear hear.
Dvorsky’s presentation is primarily concerned with machine consciousness, and with ensuring the rights of new sentient computational lifeforms. He is not talking, he says, about robots like the ones we have today, which are not sentient but are anthropomorphized to evoke our responses as if they were. (Again, see Caitrin Nicol in The New Atlantis on this subject.) Dvorsky posits that these robots have no moral worth. For example, he says, you may have seen this video before — footage of a robot that looks a bit like a dog and is subjected to some abuse:
Even though many people want to feel sorry for the robot when it gets kicked, Dvorsky says, they shouldn’t, because it has no moral worth. Only things with subjective awareness, he argues, have moral worth. I’d agree that moral worth doesn’t inhere in such a robot. But if subjective awareness is the benchmark, what about babies and the comatose, even the temporarily comatose? Do they have any moral worth? Also, it is not a simple matter to say that we shouldn’t feel sorry for the robot even if it doesn’t have moral worth. Isn’t it worth considering the effects on ourselves when we override our instincts and intuitions of empathy toward what seem to be other beings, whether or not those feelings are aptly directed? Is protecting the rights of others entirely a matter of our rational faculties?
Dvorsky continues by describing problems raised by advancing the moral rights of machines. One, he says, is human exceptionalism. (And here the notion of human dignity gets its first brief mention at the conference.) Dvorsky derides human exceptionalism as mere “substrate chauvinism” — the idea that you must be made of biological matter to have rights.
He proposes that conscious machines be granted the same rights as human beings. Among these rights, he says, should be the right not to be shut down and the right to own and control their own source code. But how does this fit in with the idea of “substrate chauvinism”? I thought the idea was that substrate doesn’t matter. If it does matter — to the extent that these beings have special sorts of rights, like owning their own source code, that not only don’t apply to humans but have no meaning for them — doesn’t this mean that there is some moral difference for conscious machines that must be accounted for rather than brushed off with the label “substrate chauvinism”?
George Dvorsky has a lot of work to do in resolving the incoherences in his approach to these questions. But he deserves credit for trying, and for offering the first serious, thoughtful talk at this conference. The organizers should have given far more emphasis and time to presenters like him. Who knows how many of the gaps in Dvorsky’s argument might have been filled if he had been given more than the ten-minute slot allotted to everybody else here with a project to plug.

Heather Knight and the real boy

[Continuing coverage of the 2010 H+ Summit at Harvard.]
Fem and bot: Heather Knight faces her 'Star Wars'-performing robot, while H+ chairman David Orban looks on, kneeling.

Heather Knight (bio) had the first presentation after lunch. She’s a young computer scientist, fresh out of M.I.T. undergrad, and she is interested in (even an evangelist for) social robotics. She goes through some of the standard stuff about making robots that can sense and imitate human emotions, and then sets about contriving reasons for doing this, such as amusing kids who are waiting for their parents to pick them up. (A teddy bear won’t suffice?) This echoes a line of discussion about social robotics going back at least to the 1960s and Joseph Weizenbaum’s ELIZA, a simple text-based program that tricked people into thinking it could converse with them, and that many people seriously suggested be used as a psychological therapist.
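As an aside, it takes remarkably little machinery to produce that effect. Here is a minimal, hypothetical sketch in Python of an ELIZA-style exchange; the rule set, reflections, and function names are invented for illustration rather than taken from Weizenbaum’s actual DOCTOR script, yet the result can still read, to a willing user, like conversation:

import random
import re

# Hypothetical keyword rules in the spirit of ELIZA's DOCTOR script: each
# pattern captures a fragment of the user's input and echoes it back inside
# a canned, therapist-sounding template.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How does being {0} make you feel?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]
DEFAULTS = ["Please go on.", "Tell me more.", "I see."]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input):
    for pattern, templates in RULES:
        match = pattern.search(user_input)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(DEFAULTS)

print(respond("I feel that no one is listening to me"))
# Possible output: "Why do you feel that no one is listening to you?"

The entire illusion rests on pattern matching and pronoun substitution; there is no understanding anywhere in it, a point Weizenbaum himself went on to stress.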

Knight’s presentation is low on content; even for social robotics, a field aimed at tricking people into believing there is complex behavior where there is not, the demonstration she puts on is very elementary — a prefabricated robot with minimal voice-recognition capability that summarizes Star Wars, complete with sound effects. The audience eats it up, though. This is clearly a presentation aimed, in many different ways, at style over substance. At least one Twitterer was a fan of the show, and another aptly noted, “I think Heather Knight thinks her performing robot is a real boy.”
Knight has the same problem as every other social roboticist, which is the blithe belief that she can recreate through pure engineering a “system” (i.e., human interaction) that is as complex as anything we know — and, moreover, can recreate it from whole cloth, without any apparent engagement with or even awareness of the wealth of thought about social life. For more about this, see our New Atlantis colleague Caitrin Nicol’s wonderful essay about the follies of social robotics.
I don’t mean to pick on Heather Knight. Like most other presenters, Knight is just presenting her research here, not purporting to offer some grand unified theory; but few of the presenters seem to realize the intellectual burdens that claims of this sort must bear. She certainly has stage presence, though. And her talk reflects the trend of almost every presentation here so far: each has been either on an obscure and relatively unimportant technical subject or a repetition of stock transhumanist ideas. There’s been almost nothing new here. I’m not sure whether to blame the presenters or the organizers, but hopefully things will pick up as the conference goes on.