With every picture or video you post of yourself on social media, facial recognition algorithms gather more information about you, including who you are, where you are, and who you are with. As this information continues to be fed into the system, the facial recognition technology only improves.
Growing concern over privacy and data security in these environments has prompted a group of University of Toronto engineers to develop an algorithm that hampers facial recognition efforts.
"Personal privacy is a real issue as facial recognition becomes better and better," said Professor and lead study author Parham Aarabi. "This is one way in which beneficial anti-facial-recognition systems can combat that ability."
Using a deep learning technique called adversarial training, the researchers designed two opposing artificial intelligence (AI) algorithms: the first identifies faces, while the second works against the first, disrupting its facial recognition attempts.
As the two algorithms battle it out, the result is a filter that protects photo privacy by altering specific pixels in the image. The alterations are so subtle that they are imperceptible to anyone looking at the image.
"The disruptive AI can 'attack' what the neural net for the face detection is looking for," says Bose. "If the detection AI is looking for the corner of the eyes, for example, it adjusts the corner of the eyes so they're less noticeable. It creates very subtle disturbances in the photo, but to the detector they're significant enough to fool the system."
The engineers trained the system on over 600 face images spanning different environments, ethnicities, and lighting conditions.
"The key here was to train the two neural networks against each other — with one creating an increasingly robust facial detection system, and the other creating an ever stronger tool to disable facial detection," says Avishek Bose, a lead author on the project.
"Ten years ago these algorithms would have to be human-defined, but now neural nets learn by themselves — you don't need to supply them anything except training data," says Aarabi. "In the end they can do some really amazing things. It's a fascinating time in the field, there's enormous potential."
The study will be published and presented at the 2018 IEEE International Workshop on Multimedia Signal Processing this summer.