Illustration of a face morphing attack. The original images on the left and right were morphed to create the fake image (center). Source: Fraunhofer HHI

Researchers in Germany have developed a system that uses machine learning to protect facial recognition software from so-called "morphing attacks."

These attacks meld facial images of two different people into a single synthetic image that carries characteristics of both subjects, with the aim of tricking facial recognition systems. Morphing attacks have grown in step with the increasing use of facial recognition software. They reportedly enable criminals to misuse passports or unlock smartphones, because a single manipulated image can authenticate the identities of two different people.
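At its simplest, a morph is a pixel-wise blend of two aligned face images. The sketch below illustrates only that blending step; real morphing attacks additionally warp both faces so that landmarks (eyes, nose, mouth) coincide before blending, and the function name and toy data here are illustrative assumptions, not part of the researchers' work.

```python
import numpy as np

def naive_morph(img_a: np.ndarray, img_b: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend two pre-aligned face images into one synthetic image.

    A convex combination of the two pixel arrays: alpha weights the
    first image, (1 - alpha) the second.
    """
    assert img_a.shape == img_b.shape, "images must be pre-aligned"
    blended = alpha * img_a.astype(np.float64) + (1.0 - alpha) * img_b.astype(np.float64)
    return blended.astype(np.uint8)

# Two toy 2x2 grayscale "faces": the morph lands halfway between them.
face_a = np.full((2, 2), 100, dtype=np.uint8)
face_b = np.full((2, 2), 200, dtype=np.uint8)
morph = naive_morph(face_a, face_b)  # every pixel becomes 150
```

The blending residue this leaves behind (softened edges, averaged textures) is exactly the kind of processing artifact the detection systems described below look for.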

In response, the team has developed a technique for identifying anomalies that arise during the digital image processing involved in morphing. Through the Anomaly Detection for Prevention of Attacks on Authentication Systems Based on Facial Images (ANANAS) project, the team is analyzing simulated image data and applying modern image processing and machine learning methods, in particular deep neural networks, to process it. These networks consist of mathematical calculation units linked to one another in multilayer structures, a design inspired by the neural structure of the brain.

The researchers are from the Fraunhofer Institute for Production Systems and Design Technology and the Fraunhofer Institute for Telecommunications, Heinrich Hertz Institute (HHI) in Berlin.

To test the systems under development, the team began by producing data for training the image-processing programs used to identify manipulations. At this stage, the researchers morph different faces into a single face.

"Using morphed and real facial images, we've trained deep neural networks to decide whether a given facial image is authentic or the product of a morphing algorithm. The networks can recognize manipulated images based on the changes occurring during manipulation, especially in semantic areas such as facial characteristics or reflections in the eyes," explained Peter Eisert, head of the Vision & Imaging Technologies department at Fraunhofer HHI.

According to the team, the neural networks demonstrated an accuracy rate of more than 90% during testing. However, the team does not know how the neural network reaches its conclusions, according to Eisert. Consequently, the researchers are also exploring the basis for the network’s decision making by analyzing the areas in the facial image that are relevant to the ultimate decision with help from Layer-Wise Relevance Propagation (LRP) algorithms that they devised themselves.

According to the researchers, these algorithms identify suspicious areas in a facial image. In particular, they found that the eyes typically offer evidence of image tampering.
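Layer-Wise Relevance Propagation works by taking a network's output score and redistributing it backwards, layer by layer, to the input pixels in proportion to how much each contributed. The sketch below implements one backward step using the standard z+ rule for a single linear layer; the project's own LRP variant is not published here, so this is an illustrative assumption.

```python
import numpy as np

def lrp_zplus(x: np.ndarray, W: np.ndarray, relevance_out: np.ndarray) -> np.ndarray:
    """One Layer-wise Relevance Propagation step (z+ rule).

    Redistributes each output neuron's relevance to the inputs in
    proportion to their positive contributions z_ij = x_i * max(W_ij, 0).
    The total relevance is (approximately) conserved across the layer.
    """
    Wp = np.maximum(W, 0.0)        # keep positive weights only
    z = x @ Wp + 1e-9              # per-output sums of contributions
    s = relevance_out / z          # relevance per unit of contribution
    return x * (Wp @ s)            # relevance redistributed to inputs

# Two input "pixels" feeding one output neuron with equal weights:
# the pixel with the larger activation receives more of the relevance.
x = np.array([1.0, 2.0])
W = np.array([[1.0], [1.0]])
R = lrp_zplus(x, W, np.array([1.0]))  # R[1] > R[0], sum(R) ~= 1
```

Applied layer by layer back to the input, this produces a per-pixel relevance map, which is how suspicious regions such as the eyes can be highlighted.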

A demonstrator software package that includes anomaly detection and evaluation procedures already exists. It combines several detector modules developed by the team, each applying a different detection method to locate manipulations; their outputs are merged into a single result at the end of the process. The team hopes to eventually incorporate the software into existing facial recognition systems, at border checkpoints for instance.
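Combining several detector modules into one result is a score-fusion problem. The article does not describe the actual fusion rule, so the sketch below assumes a simple weighted average of per-module morph scores with a decision threshold; the module names and weights are hypothetical.

```python
def fuse_detectors(scores, weights=None, threshold=0.5):
    """Fuse per-module morph scores (0 = genuine, 1 = morphed).

    Returns the weighted-average score and the final morph/no-morph
    decision. Equal weights are used when none are given.
    """
    if weights is None:
        weights = [1.0] * len(scores)
    fused = sum(s * w for s, w in zip(scores, weights)) / sum(weights)
    return fused, fused >= threshold

# Three hypothetical modules: pixel-artifact detector, eye-reflection
# check, and landmark-consistency check.
fused, is_morph = fuse_detectors([0.8, 0.7, 0.4])  # two of three suspicious
```

A weighted average lets a strong signal from one module (for example, the eye-reflection check the researchers highlight) outweigh weak evidence elsewhere, while still requiring overall agreement before flagging an image.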

To contact the author of this article, email mdonlon@globalspec.com