Watch: Robot can mimic human facial expressions
Siobhan Treacy | June 03, 2021

Thanks to researchers at Columbia University, robots have taken a big step toward engaging in realistic human-like interactions.
Facial expressions play a huge role in building human trust. As robots are increasingly used to interact with people, they need to become more expressive and facially realistic to earn that trust. With this in mind, the team spent five years creating EVA, an autonomous robot with a soft, expressive face that matches the expressions of nearby humans.

EVA mimics human facial expressions in real time from a live-stream camera. The entire system is learned without human labels. Source: Creative Machines Lab/Columbia Engineering
The team was inspired by the recent humanizing of robots. They noticed that some companies and stores have started adding eyes and name tags to their robots. These small steps in humanizing robots made the team wonder, what if robots could mirror human expression?
EVA is a disembodied bust that bears a resemblance to the Blue Man Group. Assembled from 3D-printed components, the device can express six main emotions: anger, disgust, fear, joy, sadness and surprise, and it can combine them to communicate a few more nuanced emotions. Artificial muscles mimic the movements of the 42 tiny muscles in the human face. The biggest challenge was creating a system compact enough to fit within the confines of a human skull.
While creating EVA, the developers found themselves reacting to EVA's expressions. "I was minding my own business one day when EVA suddenly gave me a big, friendly smile," said researcher Hod Lipson. "I knew it was purely mechanical, but I found myself reflexively smiling back."
The next task entailed programming artificial intelligence (AI) to guide EVA's facial movements. EVA uses deep learning to read and mirror the expressions of nearby humans, and it learned to mimic a wide range of facial expressions through trial and error.
Automating the kind of non-repetitive physical movements that take place in social settings proved difficult. The team found that EVA's facial movements were too complex to be generated by a predefined set of rules, so they built EVA's brain from several deep learning neural networks.
EVA's brain needed to master two things. First, it needed to learn to use its complex system of mechanical muscles to generate any facial expression. Second, it needed to know which expressions to make by reading human faces.
The team taught the robot what its own face looked like by recording hours of footage of EVA making random faces and feeding it to the neural networks, which learned to pair muscle motions with video of the resulting faces. A second neural network then matched that self-image with the image of a human face captured on EVA's video camera. In this way, EVA learned to read human facial gestures from its camera and respond by mirroring them.
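The two-network division of labor can be pictured with a short, purely illustrative PyTorch sketch. This is not the team's implementation: the 12-motor count, the embedding size, the network shapes and the random tensors standing in for real recordings are all invented for illustration, and the way human faces are paired with robot self-faces (the actual pipeline is label-free) is glossed over here.

```python
# Illustrative sketch only, not the Columbia team's code. All dimensions
# and architectures below are assumptions made up for this example.
import torch
import torch.nn as nn

N_MOTORS = 12    # hypothetical number of motor channels driving EVA's face
FACE_DIM = 128   # hypothetical size of a face-image embedding

# Network 1 ("self-model", inverse form): maps an embedding of the robot's
# own face back to the motor commands that produced it, trained on footage
# of the robot making random faces.
self_model = nn.Sequential(
    nn.Linear(FACE_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_MOTORS),
)

# Network 2 ("mirror"): maps an embedding of a human face seen by the
# camera into the robot's own face-embedding space.
mirror = nn.Sequential(
    nn.Linear(FACE_DIM, 256), nn.ReLU(),
    nn.Linear(256, FACE_DIM),
)

opt = torch.optim.Adam(
    list(self_model.parameters()) + list(mirror.parameters()), lr=1e-3
)
mse = nn.MSELoss()

# Stage 1: pair random motor commands ("motor babbling") with embeddings of
# the recorded self-face; random tensors stand in for real recordings here.
for _ in range(200):
    motors = torch.rand(32, N_MOTORS)       # random facial babbling
    self_face = torch.randn(32, FACE_DIM)   # embedding of the recorded frame
    loss = mse(self_model(self_face), motors)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: align human-face embeddings with matching self-face embeddings.
# How these pairs arise without labels (e.g. via shared facial landmarks)
# is an assumption not detailed in the article.
for _ in range(200):
    human_face = torch.randn(32, FACE_DIM)        # embedding from the camera
    paired_self_face = torch.randn(32, FACE_DIM)  # matching robot face
    loss = mse(mirror(human_face), paired_self_face)
    opt.zero_grad(); loss.backward(); opt.step()

# Run time: seen human expression -> matching self-face -> motor commands.
with torch.no_grad():
    camera_face = torch.randn(1, FACE_DIM)
    motor_commands = self_model(mirror(camera_face))
```

The point of the sketch is the chaining at the end: once the robot can translate any desired face of its own into motor commands, mirroring a person reduces to translating the seen expression into the robot's own "face space" first.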
The team notes that while EVA is currently only a laboratory experiment, this technology could be beneficial in real-world environments, like workplaces, hospitals, schools and homes.
A paper on EVA was published in HardwareX.