Facial recognition technology remains largely unregulated, with no binding standards governing its use. The closest it has come to any sort of reining in happened recently in San Francisco, where the city's Board of Supervisors voted to ban its use by city agencies, citing the technology's many flaws and its potential for abuse and misuse.

Recently, an Engineering360 article detailed the many flaws and biases inherent in the technology. This feature is intended as a follow-up, examining what, if anything, is being done to fix those issues.

Despite recent reports from the National Institute of Standards and Technology (NIST) that facial recognition technology is improving, it still makes mistakes, notably misidentifying criminal suspects and failing to correctly identify people of color.

The fix for facial recognition technology seems straightforward: train the artificial intelligence (AI) algorithms behind it on “good,” diverse data. Yet no better demonstration exists of the “garbage in, garbage out” principle than facial recognition. Put simply, the technology's bias stems from training data that reflects the world of the engineers who assemble it. Because those engineers are overwhelmingly white and male, facial recognition algorithms struggle to identify anyone who isn't white and male. Beyond hiring more diverse coders and engineers and assembling better, more diverse datasets, some researchers and experts in the facial recognition and AI fields are attempting to de-bias the algorithms themselves.

MIT CSAIL

Researchers at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) are attempting to remedy the bias built into facial recognition algorithms with an algorithm that can automatically “de-bias” training data by resampling it to be more balanced. The team designed the algorithm to examine a dataset, discover the biases intrinsically hidden within it and automatically resample the data to be fair, all without a programmer in the loop.

The MIT CSAIL team details the de-biasing algorithm in their paper “Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure.”
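
As a rough illustration of the resampling idea, the sketch below weights training examples inversely to how common their learned latent features are, so rare faces are drawn more often during training. This is a simplified, hypothetical rendering rather than the paper's exact method: in the actual work the latent codes come from a variational autoencoder, and the histogram binning and `alpha` smoothing parameter here are assumptions made for the sketch.

```python
import numpy as np

def debias_sampling_weights(latent_codes, n_bins=10, alpha=0.01):
    """Compute per-example sampling weights that up-weight rare examples.

    latent_codes: (n_samples, n_dims) array of learned latent features
    (e.g., from a variational autoencoder). Examples in sparsely
    populated regions of latent space receive larger weights.
    """
    n_samples, n_dims = latent_codes.shape
    weights = np.ones(n_samples)
    for d in range(n_dims):
        # Histogram-estimate the density of each latent dimension.
        hist, edges = np.histogram(latent_codes[:, d], bins=n_bins, density=True)
        # Map each example to the density of its bin.
        bins = np.clip(np.digitize(latent_codes[:, d], edges[1:-1]), 0, n_bins - 1)
        density = hist[bins]
        # Rare latent values (low density) get proportionally larger weight;
        # alpha smooths the weights so no single example dominates.
        weights *= 1.0 / (density + alpha)
    return weights / weights.sum()  # normalize into a sampling distribution

# Example: draw a training batch using the de-biased distribution.
rng = np.random.default_rng(0)
latents = rng.normal(size=(1000, 4))   # stand-in for learned latent codes
probs = debias_sampling_weights(latents)
batch_idx = rng.choice(len(latents), size=32, p=probs)
```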

IBM AI Fairness 360 toolkit

The AI Fairness 360 toolkit from IBM is an open-source library that analyzes in real time why and how algorithms make decisions. Described as a transparent approach to building algorithms, AI Fairness 360 identifies unwanted bias, whether in the training data, in the algorithm that generates the classifier or in the predictions the classifier makes. Much like checking for development bugs or security violations, the toolkit runs bias checks at points all along the machine learning pipeline.
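
A minimal sketch of what such a bias check looks like with the toolkit's Python API appears below. It assumes the open-source `aif360` package is installed and uses a tiny synthetic dataset, with `race` as the protected attribute, purely for illustration.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Tiny synthetic dataset: 'race' is the protected attribute (1 = privileged).
df = pd.DataFrame({
    "race":  [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.7, 0.6, 0.9, 0.5, 0.4, 0.3],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})
dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["race"],
    favorable_label=1, unfavorable_label=0,
)
privileged = [{"race": 1}]
unprivileged = [{"race": 0}]

# Measure bias in the raw training data.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact before:", metric.disparate_impact())

# Mitigate it with Reweighing, one of the toolkit's pre-processing algorithms.
reweighed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    reweighed, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("Disparate impact after:", metric_after.disparate_impact())
```

A disparate impact ratio near 1.0 indicates that favorable outcomes occur at similar rates across groups; the reweighing step adjusts instance weights so the transformed data scores closer to that mark.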

Flickr images

IBM also recently made headlines for its public release of one million facial images drawn from a Flickr dataset of 100 million photos and videos. Each image is reportedly tagged with features such as age, gender, craniofacial measurements and facial symmetry, the idea being that such detailed data will improve facial recognition outcomes. By releasing such a richly annotated dataset, IBM researchers hope to help developers train facial recognition systems that identify faces more accurately and fairly.
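
For illustration only, a per-image annotation record of the kind described might look like the following. The field names and values here are hypothetical stand-ins, not IBM's actual schema.

```python
# Hypothetical per-image annotation record, illustrating the kinds of
# features described above (not IBM's actual schema).
annotation = {
    "image_id": "flickr_0000001",
    "age_estimate": 34,
    "gender": "female",
    "craniofacial": {
        "face_height_to_width_ratio": 1.42,   # illustrative measurement
        "intercanthal_to_face_width": 0.31,   # illustrative measurement
    },
    "facial_symmetry_score": 0.87,            # 1.0 = perfectly symmetric
}
```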

Some recommendations

While little is being done in the way of regulating facial recognition algorithms, recommendations are being made to the companies, law enforcement agencies and government bodies still intent on using the embattled technology. One such recommendation is that users establish a consistent standard for the types of images they will and won't accept: specifically, that all parties refuse to use partial images, 3D-enhanced images, celebrity images or forensic sketches to identify wanted suspects.

Another recommendation for improving facial recognition standards is for those parties to establish photo-quality standards, covering minimum pixel densities and a minimum percentage of the face that must be visible in the image, with the understanding that anyone using such images will discard any that fall short of those standards.
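
As a sketch of how such a quality gate could be enforced in practice, the function below rejects probe images that fail a minimum face-pixel count or visible-face fraction. The function name and threshold values are hypothetical, since the recommendations do not specify exact numbers.

```python
def meets_probe_standards(face_width_px, face_height_px,
                          visible_face_fraction,
                          min_face_pixels=100 * 100,
                          min_visible_fraction=0.8):
    """Return True if a probe image meets hypothetical quality standards.

    face_width_px / face_height_px: size of the detected face region.
    visible_face_fraction: fraction of the face unoccluded and in frame.
    Threshold defaults are illustrative, not from any published standard.
    """
    enough_pixels = face_width_px * face_height_px >= min_face_pixels
    enough_visible = visible_face_fraction >= min_visible_fraction
    return enough_pixels and enough_visible

# Example: an 80 x 90 px face that is 95% visible fails the pixel check.
print(meets_probe_standards(80, 90, 0.95))  # False -> discard the image
```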

Yet some argue that de-biasing could amplify the problem, causing more harm than good. A statement from the AI Now Institute suggests that improving and de-biasing datasets may only serve to further weaponize facial recognition technology against the very people it currently doesn't work for.

Stay tuned to see whether and how the government intends to regulate facial recognition technology amid mounting pressure to do so.

To contact the author of this article, email mdonlon@globalspec.com