
Settlement reached in facial recognition lawsuit against Detroit Police Department

The Detroit Police Department has reached a settlement in a lawsuit brought by Robert Julian-Borchak Williams, a black man who was falsely arrested in January 2020 because of an erroneous facial recognition match. The case has opened a broader conversation about the role of technology, specifically facial recognition software, in modern police work.

According to the settlement, the police department will make several adjustments to its usage of facial recognition technology. These changes include prohibiting arrests based purely on facial recognition matches, mandating additional evidence before a suspect is included in a photo lineup, and requiring officer training concerning the limitations and risks of the technology. The department will also conduct an audit of all cases since 2017 where facial recognition technology was used to secure an arrest warrant.

The incident that sparked the lawsuit took place in 2018 when a man stole five watches from a Shinola store in Detroit. Investigators used a photo from the store’s surveillance video and ran it through their facial recognition system, incorrectly matching the suspect to Williams’ driver’s license photo. Despite the notable differences in appearance between Williams and the suspect, authorities proceeded with the arrest.

Williams spent around 30 hours in jail before the mistake was acknowledged. The event led to ongoing emotional distress for Williams and his family beyond the time spent behind bars. He later said, “The scariest part is that what happened to me could have happened to anyone.”

The American Civil Liberties Union (ACLU), which represented Williams in his lawsuit, applauded the settlement. The ACLU considers the adjustments made in Detroit to be among the strictest in the United States and suggests that other law enforcement agencies use them as a template. Williams received a $300,000 payment as part of the settlement.

Many studies have shown that facial recognition systems are more likely to misidentify people of color, especially black individuals, than white people. This bias can be traced back to factors including the lack of diversity in the datasets used to train facial recognition algorithms, along with the inherent limitations of the technology.
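To make that failure mode concrete, here is a minimal, hypothetical sketch in Python (using only NumPy). The gallery size, embedding dimension, and random embeddings are invented for illustration and do not describe the system Detroit used; the point is simply that a one-to-many face search compares a probe photo against every enrolled photo and always returns a highest-scoring candidate, even when the person in the probe is not enrolled at all.

```python
# Illustrative sketch only: a toy 1:N face search using cosine similarity over
# pre-computed face embeddings. All sizes and values are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical gallery of 10,000 enrolled license-photo embeddings (128-D),
# normalized to unit length so dot products equal cosine similarity.
gallery = rng.normal(size=(10_000, 128))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

# Probe embedding from a low-quality surveillance still of someone who is
# NOT enrolled in the gallery.
probe = rng.normal(size=128)
probe /= np.linalg.norm(probe)

# 1:N search: score every gallery entry against the probe and take the best.
scores = gallery @ probe
best = int(np.argmax(scores))

print(f"Top candidate: gallery index {best}, similarity {scores[best]:.3f}")
# A "top candidate" is reported regardless of whether the true person is
# present in the gallery at all.
```

Because the search always produces a best match, the similarity score alone cannot establish identity. That is why the settlement treats a facial recognition match as, at most, an investigative lead that must be corroborated by other evidence before an arrest or a lineup.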

The growing use of AI in policing, such as the deployment of live facial recognition cameras in public spaces by UK police, calls for comprehensive regulation and oversight at the federal level. It is critical that these systems be used responsibly to prevent cases like that of Robert Julian-Borchak Williams from recurring.
