A team of researchers from the Quattrone Center for the Fair Administration of Justice at the University of Pennsylvania Carey Law School suggests that artificial intelligence (AI) can improve the accuracy of the criminal adjudication process.

To demonstrate how AI could improve the accuracy of this process, the researchers trained a large language model (LLM) to analyze eyewitness confidence statements to help reduce the incidence of misidentification — which is reportedly a significant contributor to wrongful convictions.

"Of the over 3,000 exonerations recorded to date by the National Registry of Exonerations, more than 900 are due to eyewitness misidentifications," the researchers explained. "To help address this problem, we developed a new tool to enable attorneys and police investigators to better distinguish accurate identifications from faulty ones."

Typically, once an eyewitness identifies a suspect from a lineup, best practices call for police officers to ask how confident the eyewitness is in the identification. Earlier research suggests that highly confident witnesses tend to be correct far more often than those who express uncertainty during a lineup. Yet this method is prone to human error: a police officer might misread signals about a witness's confidence, or lack thereof.

As such, the AI tool created by the researchers offers a neutral way to assess these statements, free of contextual bias, drawing on data culled from thousands of prior eyewitness decisions. The LLM reportedly provides an objective basis for judgment, reducing the ambiguity inherent in verbal confidence statements.

To interpret the intended meaning of a witness's confidence statement, the LLM reads each statement and classifies it as high confidence (75% to 100%), medium confidence (26% to 74%) or low confidence (0% to 25%). To train the model, the team used a sample of eyewitnesses who described their confidence both in their own words and with a number; the witness-provided numeric ratings served as the "ground truth" for how confident each eyewitness actually was.
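The banding scheme described above can be sketched in a few lines. This is an illustrative reconstruction, not the researchers' actual code: the function and variable names are hypothetical, and the toy data is invented purely to show how numeric ratings become ground-truth labels and how classification accuracy could be scored against them.

```python
# Hypothetical sketch of the study's banding scheme: witnesses' own numeric
# confidence ratings (0-100) are mapped to the three categories the model
# predicts, providing the ground-truth labels. All names and data here are
# illustrative assumptions, not the authors' implementation.

def confidence_band(percent: float) -> str:
    """Map a numeric confidence rating to one of the three bands."""
    if percent >= 75:
        return "high"    # 75% to 100%
    elif percent >= 26:
        return "medium"  # 26% to 74%
    else:
        return "low"     # 0% to 25%

def accuracy(predictions, numeric_ratings):
    """Fraction of model predictions matching the ground-truth bands."""
    truth = [confidence_band(r) for r in numeric_ratings]
    correct = sum(p == t for p, t in zip(predictions, truth))
    return correct / len(truth)

# Toy example: four model predictions scored against witness-provided numbers.
preds = ["high", "medium", "low", "high"]
ratings = [90, 50, 10, 60]  # 60 falls in the medium band, so 3 of 4 match
print(accuracy(preds, ratings))  # → 0.75
```

Scoring the model this way, against the witnesses' own numeric translations rather than a human coder's judgment, is what lets the researchers report a single accuracy figure for the tool.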

The team tested the LLM's performance, finding that the model correctly classified witnesses' level of confidence 71% of the time, performing on par with trained human coders.

The tool, which is available online for use by researchers and practitioners, is detailed in the article, "Assessing Verbal Eyewitness Confidence Statements Using Natural Language Processing," which appears in the journal Psychological Science.

To contact the author of this article, email mdonlon@globalspec.com