Ever since the death of George Floyd on May 25th, 2020, we have seen a resurgence of the #BlackLivesMatter movement – originally founded in 2013 after the acquittal of Trayvon Martin’s killer. The global movement has erupted in peaceful protests across the United States (U.S.) and the rest of the world, as people fight for Freedom, Liberation and Justice; as people fight to end State-sanctioned violence, liberate Black people, and end white supremacy forever. The fight for civil rights has now reached Silicon Valley, with major companies like Amazon, Microsoft and IBM voicing concerns over ethics in AI. On June 8th 2020, the CEO of IBM took the lead, addressing a letter to the U.S. Congress that suggested police reform and proposed nationwide regulations for the ethical use of facial recognition technology.


What is Facial Recognition Technology?

In order to understand how this letter helped forge a stronger bond between private-sector AI and its ethical implementation, let us first define what we mean by ‘facial recognition’ technology and ask why there is an ethical debate on the matter. In simple terms, facial recognition technology uses artificial intelligence (AI) algorithms to compare the image of a human face against a very large database of faces, in the hope of finding a likely match. The technology is already here: you may use it to unlock your phone every day. The question is who gets to use it, whose faces are being looked at, and how we create an unbiased algorithm to identify the most ‘probable match’.
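To make that matching step concrete, here is a minimal, illustrative sketch in Python. Real systems use deep neural networks to convert each face image into a numerical vector (an ‘embedding’) before comparing it against the database; in this sketch, random vectors and invented names stand in for real embeddings and identities – it is not any vendor’s actual pipeline.

```python
# A toy sketch of the core "compare one face against a database" step.
# Random vectors stand in for the embeddings a neural network would
# produce from real face images; the names are invented for illustration.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_likely_match(probe: np.ndarray, database: dict, threshold: float = 0.8):
    """Compare the probe face against every enrolled face and return the
    most similar identity, but only if it clears the match threshold."""
    best_name = max(database, key=lambda name: cosine_similarity(probe, database[name]))
    best_score = cosine_similarity(probe, database[best_name])
    return (best_name, best_score) if best_score >= threshold else (None, best_score)

rng = np.random.default_rng(seed=0)
# Toy "database" of three enrolled faces.
database = {name: rng.normal(size=128) for name in ("alice", "bob", "carol")}
# A new photo of "bob": his stored embedding plus a little noise.
probe = database["bob"] + rng.normal(scale=0.1, size=128)

print(find_likely_match(probe, database))  # e.g. ('bob', 0.99...)
```

Note that the system never answers “yes” or “no” outright; it returns the most probable match above a tunable threshold, and both the threshold and the contents of the database shape who gets flagged.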


How Does Racial Bias Occur?

AI-powered facial recognition technology is already being used by law enforcement and criminal justice systems across the U.S. Police departments use it to analyse CCTV and other video surveillance footage, while judges rely on related AI risk-assessment tools to predict the likelihood that defendants will re-offend in the future. While the intentions behind such technology may appear morally justifiable, problems arise when the underlying databases (from which facial recognition technology extracts images and data) are subject to algorithmic bias. One way that racial bias occurs is when a database relies on a limited pool of data made up primarily of jail mugshots, as was the case at a sheriff’s office in Oregon. These limited databases become even more problematic when defence attorneys are unable to question or challenge the process that results in suspect identification – a concern highlighted by Marc Brown, Chief Deputy Defender in Oregon state.

In a 2016 report published by ProPublica (an independent organisation conducting investigative journalism), companies like Northpointe were exposed for creating software, used in the American criminal justice system, that produces racially biased risk assessment scores to predict future criminals.


Prediction Fails Differently for Black Defendants

                                             White    African American
Labeled Higher Risk, But Didn’t Re-Offend    23.5%    44.9%
Labeled Lower Risk, Yet Did Re-Offend        47.7%    28.0%

Using Northpointe’s risk assessment tool, Black defendants are almost twice as likely as white defendants to be labeled higher risk but not actually re-offend. The tool makes the opposite mistake among white defendants, who are far more likely to be classified as lower risk yet go on to commit other crimes. (Source: ProPublica analysis of data from Broward County, Fla.)
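To see where the two error rates in the table come from, here is a minimal sketch with made-up toy records; it is not ProPublica’s actual code or data (their analysis covered thousands of Broward County defendants), but it illustrates the same calculation: the false positive rate among defendants who did not re-offend, and the false negative rate among those who did, split by race.

```python
# A toy illustration of how the two error rates in the table above are
# computed. The records below are invented for this example.
records = [
    # (group, labeled_high_risk, re_offended)
    ("white", True,  False), ("white", False, False),
    ("white", False, True),  ("white", True,  True),
    ("black", True,  False), ("black", True,  False),
    ("black", False, True),  ("black", True,  True),
]

def error_rates(records, group):
    rows = [r for r in records if r[0] == group]
    # "Labeled higher risk, but didn't re-offend":
    # share of non-re-offenders who were flagged as high risk.
    no_reoffend = [r for r in rows if not r[2]]
    high_but_safe = sum(r[1] for r in no_reoffend) / len(no_reoffend)
    # "Labeled lower risk, yet did re-offend":
    # share of re-offenders who were flagged as low risk.
    reoffended = [r for r in rows if r[2]]
    low_but_reoffended = sum(not r[1] for r in reoffended) / len(reoffended)
    return high_but_safe, low_but_reoffended

for group in ("white", "black"):
    fp, fn = error_rates(records, group)
    print(f"{group}: high risk but didn't re-offend {fp:.0%}, "
          f"low risk yet did re-offend {fn:.0%}")
```

A tool can look ‘accurate’ overall while making these two mistakes at very different rates for different groups – which is exactly the disparity the table above documents.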

This risk assessment technology is used in Napa County, California, to recommend a probation or treatment plan to a judge. However, Napa County Superior Court Judge Mark Boessenecker points out that these scores should be used with caution. “A guy who has molested a small child every day for a year could still come out as a low risk because he probably has a job,” Boessenecker said. “Meanwhile, a drunk guy will look high risk because he’s homeless. These risk factors don’t tell you whether the guy ought to go to prison or not; the risk factors tell you more about what the probation conditions ought to be.” Whether intentional or not, the use of such technology can result in scores that are heavily influenced by one’s race or the colour of one’s skin.

A federal study published in December 2019 further highlights that Asian and African American faces were up to 100 times more likely to be misidentified than those of white men in searches of the kind conducted by police investigators. In a paper published by Columbia University, Bernard Harcourt concluded that the use of risk-assessment tools will only worsen racial disparities in the U.S. criminal justice system – something we simply cannot afford.

For these reasons, the unregulated use of biased facial recognition technology could lead, and has already led, to supercharged policing that disproportionately impacts people of colour in the U.S. The Black Lives Matter movement has rightly shed light on the lack of an ethical code of conduct surrounding the use of these technologies. In an outcry for social justice, the tech industry, as well as civil advocates, has brought the fight to Congress in search of a federal solution.


Silicon Valley Voices Concerns to Washington

In a letter to the U.S. Congress dated June 8th, 2020, IBM CEO Arvind Krishna indicated that “IBM no longer offers general purpose IBM facial recognition or analysis software products“. Following suit, Microsoft President Brad Smith announced that Microsoft will not sell facial recognition technology to police departments in the U.S. until Congressional action is taken. Similarly, Amazon has placed a one-year ban on police use of its facial recognition technology, Rekognition, previously used by a sheriff’s office in Oregon to track down criminal suspects. Amazon – one of the top 10 biggest corporate lobbyists – spent almost $17 million on federal lobbying in 2019, with Microsoft following closely behind at $10 million.

But many still think this is not good enough. Media outlets like CNN Business have been quick to criticise Amazon’s one-year ban as ‘limited and largely temporary’. Nevertheless, we have made progress. Our voices have been heard, and will continue to be heard, if we stand together, push large companies, and lobby for nationwide government regulation to ensure the ethical implementation of AI across the U.S. and across the world.


Written by Anya Magotra
