One of the problems facing people with autism is an inability to pick up on social cues. Failure to notice that they are boring or confusing their listeners can be particularly damaging, says Rana El Kaliouby of the Media Lab at the Massachusetts Institute of Technology. "It's sad because people then avoid having conversations with them."
The "emotional social intelligence prosthetic" device, which El Kaliouby is constructing along with MIT colleagues Rosalind Picard and Alea Teeters, consists of a camera small enough to be pinned to the side of a pair of glasses, connected to a hand-held computer running image recognition software plus software that can read the emotions these images show. If the wearer seems to be failing to engage his or her listener, the software makes the hand-held computer vibrate.
In 2004 El Kaliouby demonstrated that her software, developed with Peter Robinson at the University of Cambridge, could detect whether someone is agreeing, disagreeing, concentrating, thinking, unsure or interested, just from a few seconds of video footage. Previous computer programs have detected only the six basic emotional states of happiness, sadness, anger, fear, surprise and disgust. El Kaliouby's complex states are more useful because they come up more frequently in conversation, but they are also harder to detect, because they are conveyed in a sequence of movements rather than a single expression.
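To see why sequence matters, consider agreement: it typically shows up as a nod, a rhythmic up-and-down head motion that no single frame reveals. The toy Python check below makes the point; the pitch angles and the reversal-counting rule are invented for illustration, not taken from El Kaliouby's software.

```python
"""Toy illustration: a nod lives in the sequence, not in any one frame."""

def looks_like_nod(head_pitch):
    """head_pitch: per-frame head tilt angles (degrees) over a short clip.
    A nod alternates above and below neutral several times."""
    signs = [p > 0 for p in head_pitch if abs(p) > 2]  # ignore jitter near 0
    reversals = sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    return reversals >= 3  # several reversals suggest rhythmic nodding

still = [0.5, -0.3, 0.4, 0.1, -0.2, 0.3]        # jitter only, no nod
nod = [4.0, 6.0, -3.0, -5.0, 4.5, 5.5, -4.0]    # up-down-up-down pattern

print(looks_like_nod(still))  # False: any single frame is ambiguous
print(looks_like_nod(nod))    # True: the pattern emerges over time
```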
Her program is based on a machine-learning algorithm that she trained by showing it more than 100 eight-second video clips of actors expressing particular emotions. The software picks out movements of the eyebrows, lips and nose, and tracks head movements such as tilting, nodding and shaking, which it then associates with the emotion the actor was showing. When presented with fresh video clips, the software gets people's emotions right 90 per cent of the time when the clips are of actors, and 64 per cent of the time on footage of ordinary people.
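The article does not say which learning algorithm the team uses, but the shape of the training step is standard supervised learning: reduce each labelled clip to a fixed-length vector of movement features, fit a classifier, then measure accuracy on held-out clips. Below is a generic sketch using scikit-learn, with synthetic data standing in for the real features and labels; the random forest is an arbitrary stand-in, not the team's model.

```python
"""Generic stand-in for the training step described above; the model,
features and data are all illustrative."""

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

STATES = ["agreeing", "disagreeing", "concentrating",
          "thinking", "unsure", "interested"]

rng = np.random.default_rng(0)

# Stand-in for the ~100 labelled eight-second clips: one row per clip,
# columns = summary statistics of tracked eyebrow/lip/nose/head motion.
X = rng.normal(size=(100, 12))
y = rng.integers(len(STATES), size=100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# With real features, this held-out evaluation is where figures like the
# 90 and 64 per cent accuracies would come from; on random data the score
# hovers near chance (about 1 in 6).
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```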
El Kaliouby is now training the software on excerpts from movies and footage captured by webcams. This week she plans to gather the first on-the-move training footage by equipping a group of volunteers, some of whom are autistic, with wearable cameras.
Getting the software to work is only the first step, Picard warns. In its existing form it makes heavy demands on computing power, so it may need to be pared down to work on a standard hand-held computer. Other challenges include finding a high-resolution digital camera that can be worn comfortably, and training people with autism to look at the faces of those they are conversing with so that the camera picks up their expressions.
The team will present the device next week at the Body Sensor Network conference at MIT. People with autism are not the only ones who stand to benefit. Timothy Bickmore of Northeastern University in Boston, who studies ways in which computers can be made to engage with people's emotions, says the device would be a great teaching aid. "I would love it if you could have a computer looking at each student in the room to tell me when 20 per cent of them were bored or confused."