With documentaries such as ‘The Great Hack’ and the infamous ‘The Social Dilemma’ shedding light on how big tech companies profit from users’ data, internet users have grown more aware of how their data, and even their cognition, might be used against them. AI models are central to this process. On the one hand, some business enthusiasts admire how these companies turned users’ data into a profitable business. On the other hand, many argue that models designed to exploit users are unethical and should not be deployed. And what about governments? A government has the authority to deploy almost any technology it chooses, facial recognition included.
 

In September 2019, a group of researchers wrote to the publisher Wiley asking it to retract a scientific paper that trained algorithms to distinguish the faces of Uyghurs, a Muslim minority in northwestern China. The paper, published in 2018, studied how facial-recognition technology could distinguish between Uyghur and Tibetan faces. For context, China has been condemned for discriminatory actions against Uyghurs, including heavy surveillance and mass detention in camps in the northwestern region of Xinjiang, and it has used public surveillance footage to identify Uyghurs. The international community broadly opposes the use of AI to discriminate against specific groups. People were therefore disturbed to see such a study appear in a journal from a major US publisher like Wiley. Equally troubling was the absence of any ethical regulation of applied AI research. This prompted international scientists to urge researchers to avoid working with firms or universities linked to unethical projects, to re-evaluate how they collect and distribute facial-recognition datasets, and to keep the ethics of their studies in mind.

Ethical thinking is a rising topic in the scientific community, but many researchers do not yet abide by it. Nature surveyed 480 researchers worldwide for their views on ethical questions about facial recognition, and, while some are concerned, others are not. When asked what permissions researchers need in order to use online images, about 20% answered that they could use any photo found online without consent. Similarly, when asked whether it was ethical to conduct facial-recognition research on vulnerable populations that might not be in a position to give free, informed consent, such as the Muslim population in western China, over 20% answered that it was ethically acceptable as long as some form of consent was obtained. The survey showed that, although many researchers worldwide are concerned about ethics, the community is far from unanimous on what ethical AI requires.


In June 2020, IBM’s CEO Arvind Krishna stated in a letter to Congress that IBM would no longer offer, develop, or research facial-recognition technology. He added that the firm “condemns and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with [their] values”. Ethical-AI activists urge other companies to adopt the same principles as IBM, since facial-recognition technology can significantly erode our intrinsic sense of freedom and privacy.

AI models that learn from data, such as machine-learning and deep-learning models, are a reflection of reality and of the dataset they were trained on, and datasets inevitably contain bias. For instance, a model trained on text scraped from the web will reproduce the prejudices found there, including prejudices against women and Black people. So far, there is no reliable way to process a large dataset so as to remove its bias.
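To make this concrete, here is a minimal sketch in Python, assuming the gensim library and its downloadable pretrained GloVe vectors (“glove-wiki-gigaword-100”) are available, that probes the kind of prejudice a model inherits from web and encyclopedia text:

import gensim.downloader as api

# Load word vectors pretrained on Wikipedia and Gigaword news text.
vectors = api.load("glove-wiki-gigaword-100")

# Classic analogy probe: "man is to doctor as woman is to ...?"
# Embeddings learned from real-world text often complete such analogies
# with stereotyped answers (e.g. "nurse"), reflecting the training data
# rather than any deliberate design decision.
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=3))

Probes in this style, popularised by Bolukbasi et al.’s 2016 “man is to computer programmer as woman is to homemaker” study, routinely surface stereotyped associations that no amount of model tuning alone can remove.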

As Lily Hu, a PhD candidate at Harvard, put it: “That’s just a fundamental problem of machine learning. Machine learning works on old data [and] on training data. And it doesn’t work on new data, because we haven’t collected that data yet.” Not only can the data be biased; the designer of the model might also hold a questionable view of the world. Ultimately, it is the machine-learning engineer who decides what an acceptable level of accuracy is for different groups of people. All of this leads to AI being implicitly biased, but what are the consequences?
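Since that accuracy threshold is a human decision, one concrete safeguard is to report accuracy per demographic group rather than a single aggregate number. Below is a hedged sketch of such an audit; the labels, predictions, and groups lists are hypothetical stand-ins for a real evaluation set annotated with demographic attributes:

from collections import defaultdict

def accuracy_by_group(labels, predictions, groups):
    # Count correct predictions and totals separately for each group.
    correct, total = defaultdict(int), defaultdict(int)
    for y, y_hat, g in zip(labels, predictions, groups):
        total[g] += 1
        correct[g] += int(y == y_hat)
    return {g: correct[g] / total[g] for g in total}

# Toy data: the model is noticeably worse on group "B"; it is the
# engineer who decides whether that gap is acceptable to ship.
labels      = [1, 0, 1, 1, 0, 1, 0, 0]
predictions = [1, 0, 1, 0, 0, 0, 0, 1]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(labels, predictions, groups))  # {'A': 0.75, 'B': 0.5}

Audits of exactly this kind are how accuracy disparities in commercial facial-recognition systems have been surfaced in practice.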

Will some areas of AI be banned at some point in the future?

AI is probably one of the most incredible tools humans have invented, and it is still in its early stages. Consequently, there has been little regulation of research output, but the international scientific community is starting to realize the ethical problems that arise with AI. Facial-recognition technology raises questions about the ethical use of internet images and about the purpose of the technology itself. Additionally, AI and implicit bias are intertwined, and we have no reliable way to clean a dataset of possible bias. As Maria Axente mentioned in her talk with AIBE, AI research output should be regulated from an ethical standpoint at every step of the process. Only then can we ensure AI contributes positively to the betterment of our society.

 

Written by Wassim Boutabratine
