Fujitsu Laboratories, in collaboration with researchers from the Carnegie Mellon University School of Computer Science, has developed artificial intelligence (AI)-based facial recognition technology capable of detecting more subtle emotions, such as confusion and nervousness, with improved accuracy.

The joint team devised a new extraction method, dubbed the “normalization process,” which converts images of a subject captured at arbitrary angles into a synthesized front-facing image. A frontal view is ideal for tracking emotion, but it is rarely available in practice because images captured in the wild, so to speak, are generally taken at odd angles.
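Fujitsu has not published the internals of its normalization process, but the general idea of aligning a face detected at an arbitrary pose toward a canonical frontal layout can be sketched with a least-squares affine fit of facial landmarks. Everything below is illustrative: the landmark template coordinates and function names are assumptions, not Fujitsu's actual pipeline.

```python
import numpy as np

# Canonical frontal landmark template (eyes, nose tip, mouth corners) in a
# 112x112 crop. These coordinates are illustrative, not Fujitsu's template.
FRONTAL_TEMPLATE = np.array([
    [38.0, 52.0],   # left eye
    [74.0, 52.0],   # right eye
    [56.0, 72.0],   # nose tip
    [42.0, 92.0],   # left mouth corner
    [70.0, 92.0],   # right mouth corner
])

def estimate_affine(src, dst):
    """Least-squares 2-D affine transform mapping src landmarks onto dst."""
    n = src.shape[0]
    A = np.hstack([src, np.ones((n, 1))])      # homogeneous coordinates, (n, 3)
    # Solve A @ M ~= dst for the 3x2 affine matrix M.
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

def normalize_landmarks(detected):
    """Map landmarks detected at an arbitrary pose toward the frontal template."""
    M = estimate_affine(detected, FRONTAL_TEMPLATE)
    n = detected.shape[0]
    return np.hstack([detected, np.ones((n, 1))]) @ M

# Simulate landmarks from a face photographed rotated and shifted,
# then normalize them back toward the frontal layout.
theta = np.deg2rad(20.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = FRONTAL_TEMPLATE @ R.T + np.array([15.0, -8.0])
aligned = normalize_landmarks(rotated)
```

In a full system, the fitted transform would also warp the image pixels (not just the landmarks) before the aligned face is passed to an action-unit detector; this sketch only shows the geometric alignment step.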

However, those off-angle images are typically what populate the enormous datasets used to train currently available emotion-tracking AI, which is reportedly less accurate and incapable of detecting more complex emotions such as nervousness or confusion. While technology already exists for detecting a subject’s emotions from the facial muscle movements known as action units (AUs), it typically tracks only basic emotions — such as contempt, anger, disgust, fear, sadness, happiness, surprise and neutrality.

According to the researchers, the Fujitsu tool eliminates the need for such large datasets because it detects AUs more reliably from the normalized front-facing images, improving detection of emotional changes in subjects with a reported accuracy of 81%.

Possible applications for the technology include making humanoid robots appear more lifelike, improving road safety by detecting subtle changes in a driver’s focus, measuring employee engagement and improving workplace safety, to name just a few.
