Facial-Recognition Technology Disproportionately Affects African Americans

A Georgetown study shows that African Americans are disparately affected by facial-recognition technology, and the ACLU wants the Justice Department to take action.


A Georgetown University think tank and the American Civil Liberties Union, along with 52 other civil liberties organizations, are urging the U.S. Department of Justice to look into the way federal, state and local law-enforcement agencies use face-recognition technology, which they say is having a “disparate impact on communities of color.”

On Tuesday the Georgetown Law Center on Privacy & Technology published “The Perpetual Line-Up: Unregulated Police Face Recognition in America” (pdf), a study of the use (and misuse) of facial-recognition technology by law-enforcement agencies. The study found that law-enforcement facial recognition affects 117 million Americans, with 1 in 2 American adults appearing in a law-enforcement face-recognition network.

Among its key findings, the study shows that face recognition will disproportionately affect African Americans; law-enforcement face recognition is unregulated and in many cases out of control; and most law-enforcement agencies do little to ensure that their systems are accurate.

The 150-page study goes into the background of facial-recognition technology, explores the risks of the technology, and provides information on specific city and state uses of the technology.

In response to the report, the ACLU sent a letter to the Justice Department’s Civil Rights Division asking the DOJ to investigate whether facial-recognition technology has a disparate effect on communities of color.

Citing Tuesday’s report from Georgetown as well as a prominent 2012 study co-authored by an FBI expert, the ACLU noted that facial-recognition algorithms are 5 to 10 percent less accurate on African Americans than on whites, which means that innocent African Americans could mistakenly be placed on a suspect list or investigated for a crime because a flawed algorithm identified the wrong person.

According to the ACLU, these inaccuracies are compounded by biased policing practices that leave people of color overrepresented in the mug shot databases on which many facial-recognition systems rely.

The ACLU also pointed out how these systems can be used to disrupt free speech:

Moreover, there is evidence that this technology is being used at protests and rallies, raising significant First Amendment concerns. Specifically, an investigation by the ACLU of Northern California, the Center for Media Justice, and ColorofChange.org revealed that the Baltimore Police Department had used face recognition technology, in conjunction with social media monitoring tools, to locate, identify, and arrest certain protesters in the wake of Freddie Gray’s death.

The Georgetown study recommends that Congress and state legislatures pass commonsense laws that regulate law-enforcement facial-recognition technologies, and that the FBI and DOJ make significant reforms to the FBI’s face-recognition system.

Read more at the Georgetown Law Center on Privacy & Technology (pdf) and the ACLU.