Study: Deepfakes can trick some facial recognition systems
Marie Donlon | September 04, 2022
A team from the Penn State College of Information Sciences and Technology has determined that some components of facial recognition technology do not always detect deepfakes — synthetic media wherein a person in an existing image or video is digitally replaced with someone else's likeness.
Specifically, facial liveness verification, which is a feature of facial recognition technology that relies on computer vision to verify the presence of a live user, is highly vulnerable to deepfake-based attacks.
According to the researchers, facial recognition systems that use this detection technique to let users unlock their phones, make financial transactions or access medical records are vulnerable to deepfake attacks, raising significant security concerns for both users and applications.
To demonstrate this, the researchers developed a new deepfake-powered attack framework, dubbed LiveBugger, which performs automated security evaluations of facial liveness verification.
Using deepfake images and videos from two separate data sets, the researchers fooled the facial liveness verification features of assorted apps that confirm a user's identity by analyzing static or video images of the user's face, the user's voice, or the user's response to a prompt to perform an action.
The team determined that four of the most common verification methods currently in use could be easily bypassed using deepfakes.
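To make the action-based style of verification concrete, here is a minimal sketch of a challenge-response liveness check. All names and the `detect_action` stub are illustrative assumptions, not part of the study or any real vendor API; a production system would run computer vision on the captured frames.

```python
import random

# Actions a hypothetical app might prompt the user to perform on camera.
ACTIONS = ["blink", "turn_head_left", "turn_head_right", "open_mouth"]

def issue_challenge():
    """Pick a random action the user must perform during capture."""
    return random.choice(ACTIONS)

def detect_action(frames, action):
    """Stub detector: a real system would analyze the video frames with
    computer vision. Here we just read a label attached to the capture."""
    return frames.get("observed_action") == action

def verify_liveness(frames):
    """Issue a challenge and check whether the capture shows it."""
    challenge = issue_challenge()
    return detect_action(frames, challenge)
```

The study's central point is that this kind of check alone is not sufficient: a deepfake generator can synthesize video of the requested action on demand, so the system "sees" a live user performing the challenge even though no live user is present.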
"Although facial liveness verification can defend against many attacks, the development of deepfake technologies raises a new threat to it, about which little is known thus far," said Changjiang Li, doctoral student of information sciences and technology and co-first author on the paper. "Our findings are helpful for vendors to fix the vulnerabilities of their systems."
Further, the researchers explained: "Facial liveness verification has been applied in many critical scenarios, such as online payments, online banking and government services. Additionally, an increasing number of cloud platforms have begun to provide facial liveness verification as platform-as-a-service, which significantly reduces the cost and lowers the barrier for companies to deploy the technology in their products. Therefore, the security of facial liveness verification is highly concerning."