As someone who is not exactly a touchstone for the feelings of others, I am intrigued by a New Scientist story by Sally Adee about technologies being developed at the MIT Media Lab that can help people read the emotions of others. These devices include a set of goggles that gives cues about the meaning of facial expressions, a badge that monitors voice quality for aggressiveness (originally called a “jerk-o-meter”), and a glove that measures galvanic skin response for stress.

Although some of this research has been spurred by efforts to assist the autistic, it may also have commercial appeal, because it turns out that many of us are not very good at picking up the more or less subtle cues about feelings that all of us are providing to others all the time. One could imagine how such tools could be used for exploitative purposes — although, with the possible exception of the glove, they pick out information that is already being publicly broadcast, as it were, and that skilled salespeople, gamblers, and seducers are probably already adept at noticing. So one could also say that, as consumer goods, these devices might level the emotional playing field a bit.

In times and places where people’s social circles were more constrained and more hierarchical than today, my guess is that there was a greater tendency to notice the visible and vocal cues these technologies are designed to pick up, because the rewards for noticing and the punishments for failing to notice would have been greater. Today, we can more readily skim along at the emotional surface of our interactions, should we feel the need to pay attention to them at all.

We’ve documented often enough on this blog how this emotional superficiality makes life easier for transhumanists, but two recent examples are worth noting.
The first is a story about “lovotics,” “the new science of engineering human-robot love,” as reported with a straight face on Ray Kurzweil’s blog (though not by the master himself). The video of a project from the National University of Singapore shows a little crocheted ball flashing colors, making annoying noises, and moving mysteriously.

We are supposed to believe that, somehow, its actions are based on modeling human endocrine responses, and that this in turn is what it will take to allow robots and humans to “love each other.”
The second and even sadder example is one of those confessional blogs that you almost feel bad about reading. In this one, the writer, Sam Biddle, reads an unfavorable evaluation of his performance as a boyfriend by someone he met through the online dating service OK Cupid. He is, to say the least, chagrined by what he finds, but not exactly repentant, as his remarkable concluding paragraphs suggest:
This is a weird way to find out you’ve hurt someone. But why should it be surprising? OK Cupid (and the rest of the bunch) abstract the human element away from love and sex. And that’s fine! Desirable at times, even. We’re living in an abstracted age, where conversations are condensed and pictures are cropped and feelings often don’t matter. The crevasse between someone’s decent OK Cupid profile and caring about an actual human being is a wide one — and the simplicity of dating sites doesn’t prepare you for the leap. Of the online dates I used to go on, their terminuses weren’t some shouting match or personality clash. It was just apathy. Meeting people in real life is tough! That’s why dating sites make money. We don’t like tough. But these flings disintegrate as easily as they form, victims of their own convenience.

And they make it easier to hurt someone, because, truthfully, you never cared that much to begin with. When canceling a date is as easy as canceling an Amazon shipment, what are we to expect from each other? People come off as bitchy and rude and careless because the internet lets us be this way — because we demand it! Is this good? Is it even sustainable? I’m not sure. But I am pretty sure that I never [as she claims] sexted that girl. Really. I mean come on.
Meeting people is tough these days, I suppose, but I would have hoped that caring about them was sort of the point of the enterprise. If caring is not the goal, and if we don’t simply turn the whole task over to computers, maybe the MIT Media Lab’s new emotionally assistive technologies could improve what we now hope to get out of our interactions with one another. Their good effect would be enhanced if it turns out that they help people read others even after the technologies are no longer being used — as one researcher suggests is somewhat the case for autistic users. But then, mightn’t this be an ability we could teach and learn simply by paying attention to other people, and by learning to be genuinely interested in them, if we were so inclined?
Augmented reality is intriguing because it keeps humans in the loop, and it seems like these technologies could eventually become AR. But I have to admit that the privacy implications make me uncomfortable — there’s no practical way to limit computer vision to use on willing subjects. (In fact, according to Antonio Torralba, that is basically the point of the field.) If the skills lingered after training, though, and only this residue were employed on unsuspecting subjects, then it would be harder to argue against.