Significant advances in large language models, such as OpenAI's ChatGPT, have made it possible to build robots with fluent verbal communication, but non-verbal communication remains a major challenge in robotics.
Recently, researchers at the Creative Machines Lab at Columbia Engineering, Columbia University, taught an anthropomorphic robot called Emo to anticipate a person's smile and respond in kind.
According to New Atlas, building a robot that responds to non-verbal cues poses two challenges: designing an expressive yet versatile face, which requires complex hardware and actuation mechanisms, and teaching the robot which expression to produce, and when, so that it appears natural and genuine.
Emo has 26 actuators that allow it to produce a wide range of facial expressions, along with high-resolution cameras embedded in both pupils that let the robot maintain eye contact, an essential element of non-verbal communication.
To train the machine, the researchers placed Emo in front of a camera and let it make random movements. By watching itself, Emo learned which motor commands produced which facial expressions.
The robot then watched videos of human facial expressions to analyze them in detail, and after a few hours of training it could predict people's facial expressions simply by observing small changes in their faces.
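The article gives no implementation details, but the two-stage recipe it describes, first learning a "self-model" that maps motor commands to the expression they produce, then learning to anticipate a person's upcoming expression, and finally inverting the self-model to mirror that prediction, can be sketched roughly as follows. The architectures, dimensions, helper names (self_model, predictor, commands_for), and the synthetic training data are all illustrative assumptions, not the authors' code.

```python
# Illustrative sketch only: the two learning stages described above,
# with toy networks and synthetic data. Nothing here is the authors' code.
import torch
import torch.nn as nn

N_MOTORS = 26          # Emo's reported actuator count
N_LANDMARKS = 2 * 68   # hypothetical: 68 (x, y) facial landmarks
WINDOW = 8             # hypothetical number of past frames the predictor sees

# Stage 1: a self-model learned from random "motor babbling" --
# pairs of random motor commands and the expression Emo observes on itself.
self_model = nn.Sequential(
    nn.Linear(N_MOTORS, 128), nn.ReLU(), nn.Linear(128, N_LANDMARKS),
)

# Stage 2: an expression predictor -- from a short window of observed human
# landmarks, anticipate the landmarks a moment (~840 ms) into the future.
predictor = nn.Sequential(
    nn.Linear(WINDOW * N_LANDMARKS, 256), nn.ReLU(),
    nn.Linear(256, N_LANDMARKS),
)

def fit(model, inputs, targets, epochs=100):
    """Plain supervised regression with an MSE loss."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(inputs), targets)
        loss.backward()
        opt.step()

# Synthetic stand-ins for the real recordings (random tensors, shapes only).
babble_cmds = torch.rand(1024, N_MOTORS)              # random motor commands
babble_faces = torch.rand(1024, N_LANDMARKS)          # self-observed expressions
fit(self_model, babble_cmds, babble_faces)

human_past = torch.rand(1024, WINDOW * N_LANDMARKS)   # recent human frames
human_future = torch.rand(1024, N_LANDMARKS)          # frames ~840 ms later
fit(predictor, human_past, human_future)

def commands_for(target_expression, steps=200):
    """Invert the self-model: gradient-search for motor commands whose
    predicted self-expression matches a target expression."""
    cmd = torch.rand(1, N_MOTORS, requires_grad=True)
    opt = torch.optim.Adam([cmd], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(self_model(cmd), target_expression)
        loss.backward()
        opt.step()
    return cmd.detach().clamp(0.0, 1.0)

# At run time: predict the person's next expression, then mirror it.
next_expr = predictor(human_past[:1])
motor_cmd = commands_for(next_expr.detach())
```

The gradient-based inversion at the end stands in for whatever controller the researchers actually use; the point is only to show how a learned self-model lets a robot translate a predicted human expression into its own motor commands.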
According to New Atlas, Emo can predict a human smile about 840 milliseconds before it happens and smile back simultaneously.
"When a robot makes co-expressions with people in real time, it not only improves the quality of interaction, but also helps build trust between humans and robots," said the author of the scientific article, published in Science Robotics, Yuhang Hu.