Researchers from Cornell University said that the algorithms employers use to make hiring decisions may not be as unbiased as their vendors claim.

A study of 19 vendors of algorithm-based pre-employment screening services, all of which claim to assess job candidates "fairly," found that those fairness claims may be questionable.

"Mitigating Bias in Algorithmic Employment Screening: Evaluating Claims and Practices" will be presented by the Cornell team in January 2020 at the Association for Computing Machinery Conference on Fairness, Accountability and Transparency, which will be held in Barcelona, Spain.

Some employers outsource hiring decisions to third-party vendors. The researchers determined that little public information is available about the algorithms used to screen candidates.

Although vendors market their screening to potential customers as unbiased, using terms such as "fair," the researchers found that vendors rarely explained how they defined the word. Because "fair" can be interpreted in multiple, sometimes incompatible ways, the researchers warned that the label is vague and misleading, and that a screening algorithm marketed as fair may nevertheless be biased.
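To see why the word matters, consider a minimal sketch (not from the Cornell study; the data and metric choices are illustrative assumptions) in which the same screening outcomes look "fair" under one common definition, demographic parity, but not under another, equal opportunity:

```python
# Two common fairness metrics can disagree about the same screening model.
# Hypothetical outcomes for two applicant groups:
# preds = 1 means "advance to interview", labels = 1 means "actually qualified".

def selection_rate(preds):
    """Fraction of applicants the algorithm advances."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of qualified applicants the algorithm advances."""
    advanced_if_qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(advanced_if_qualified) / len(advanced_if_qualified)

group_a = {"preds": [1, 1, 0, 0], "labels": [1, 0, 1, 0]}
group_b = {"preds": [1, 1, 0, 0], "labels": [1, 1, 0, 0]}

# Demographic parity: groups should be selected at equal rates.
dp_gap = abs(selection_rate(group_a["preds"]) - selection_rate(group_b["preds"]))

# Equal opportunity: qualified applicants in each group should be
# advanced at equal rates.
eo_gap = abs(true_positive_rate(group_a["preds"], group_a["labels"])
             - true_positive_rate(group_b["preds"], group_b["labels"]))

print(f"demographic parity gap: {dp_gap}")  # 0.0 -> "fair" by this metric
print(f"equal opportunity gap: {eo_gap}")   # 0.5 -> unfair by this one
```

Here both groups are advanced at the same 50% rate, so the model passes demographic parity, yet it advances only half of the qualified applicants in one group versus all of them in the other. A vendor that simply says "fair" leaves unstated which of these definitions, if any, its algorithm satisfies.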

Survey respondents in a recent Harris Poll reportedly expressed concern over artificial intelligence (AI) playing a role in hiring decisions. Nearly 70% said they were uncomfortable with the idea. Even so, researchers from the University of Minnesota’s Carlson School of Management are using machine learning to help make hiring predictions about teaching applicants. Meanwhile, a tech recruiting firm is using a face-scanning algorithm to make hiring decisions.

To contact the author of this article, email mdonlon@globalspec.com