With documentaries such as ‘The Great Hack’ or the infamous ‘The Social Dilemma’ shedding light on big tech companies’ practice of profiting off users’ data, internet users have grown more aware of how their data and cognition might be used against them, and AI models are the primary enablers of this process. On the one hand, some business aficionados admire how these tech companies made their business profitable. On the other hand, some think that models designed to exploit users are unethical and shouldn’t be used. But what about governments? A government has the authority to deploy almost any technology, facial recognition included.

In September 2019, researchers wrote to the publisher Wiley asking it to retract a scientific paper that trained algorithms to distinguish the faces of Uyghur people, a Muslim minority in Northwest China. The paper, published in 2018, studied how facial-recognition technology can distinguish between Uyghur and Tibetan people. For context, China has been condemned for discriminatory actions such as heavy surveillance and the mass detention of Uyghurs in camps in the northwestern province of Xinjiang, and it has used public surveillance footage to identify Uyghurs. The international community has also opposed using AI to discriminate against groups. People were therefore disturbed to see such a study published by a US publisher like Wiley, and found it troubling that there is no ethical regulation of applied AI research. This prompted international scientists to urge researchers to avoid working with firms or universities linked to unethical projects, to re-evaluate how they collect and distribute facial-recognition datasets, and to keep the ethics of their studies in mind.

Ethical thinking is a rising topic in the scientific community, but many researchers don’t yet abide by it. Nature asked 480 researchers worldwide for their views on ethical questions about facial recognition, and, while some are concerned, others aren’t. When asked what permissions researchers need to use online images, about 20% answered that they could use any photo online without consent. Similarly, when asked whether it was ethical to do facial-recognition research on vulnerable populations that might not be able to give informed consent, such as the Muslim population in western China, over 20% answered that it could be ethically acceptable provided consent was obtained. The survey showed that, although some researchers worldwide are concerned about ethics, the community is far from unanimous when it comes to ethical AI.


About 20% of researchers answered that they would use any photo online without asking for consent.

In June 2020, IBM CEO Arvind Krishna stated in a letter to Congress that IBM would no longer offer, develop, or research facial-recognition technology. He added that the firm “condemns and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with [their] values”. Ethical-AI activists urge other companies to abide by the same principles as IBM, since facial-recognition technology can significantly impact our intrinsic feelings of freedom and privacy.

AI models that are trained on data, such as machine learning and deep learning models, are a reflection of reality and of the dataset they were trained on, and datasets inevitably contain bias. For instance, a model trained on text scraped from the web will reproduce prejudices against women and Black people. So far, there is no reliable way to process a large dataset so that its bias is removed.
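To make this concrete, here is a minimal sketch (using an entirely hypothetical toy dataset and a deliberately simplistic "model") of how a model trained on historically skewed data simply reproduces that skew:

```python
from collections import Counter

# Hypothetical toy dataset of (group, approval) pairs with a historical skew:
# group "A" was approved 80% of the time, group "B" only 30% of the time.
train = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 30 + [("B", 0)] * 70

def fit_majority_per_group(data):
    """A minimal 'model': predict the majority label seen for each group."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {g: c.most_common(1)[0][0] for g, c in counts.items()}

model = fit_majority_per_group(train)
print(model)  # {'A': 1, 'B': 0} — the model reproduces the historical skew
```

A real machine learning model is far more sophisticated than a per-group majority vote, but the underlying dynamic is the same: if the training data encodes a prejudice, a model that fits the data well will encode it too.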

As Lily Hu, a PhD candidate at Harvard, stated: “That’s just a fundamental problem of machine learning. Machine learning works on old data [and] on training data. And it doesn’t work on new data, because we haven’t collected that data yet.” Not only can the data be biased, but the designer of the model might also hold a questionable view of the world. Ultimately, it is the machine learning engineer who decides what an acceptable level of accuracy is for different groups of people. This leads to AI being implicitly biased, but what are the consequences?
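The point about engineers deciding what accuracy is acceptable for whom can be made concrete: overall accuracy can look fine while hiding very different error rates per group. Here is a small sketch (with hypothetical labels and groups) of the kind of per-group breakdown an engineer would have to look at to notice the disparity:

```python
def group_accuracy(y_true, y_pred, groups):
    """Compute accuracy separately for each group of examples."""
    scores = {}
    for g in set(groups):
        pairs = [(t, p) for t, p, gr in zip(y_true, y_pred, groups) if gr == g]
        scores[g] = sum(t == p for t, p in pairs) / len(pairs)
    return scores

# Hypothetical evaluation data: overall accuracy is 50%,
# but it is 100% for group A and 0% for group B.
y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(group_accuracy(y_true, y_pred, groups))  # {'A': 1.0, 'B': 0.0}
```

Whether a gap like this is tolerated, mitigated, or treated as a blocking defect is precisely the kind of judgment call that currently rests with individual engineers rather than with any regulation.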

Will some areas of AI be banned at some point in the future?

AI is probably one of the most incredible tools humans have invented, and it is still in its early stages. Consequently, there hasn’t been much regulation of research output, but the international scientific community is starting to recognise the ethical problems that arise with AI. Facial-recognition technology raises questions about the ethical use of internet images and about the purpose of the technology itself. Additionally, AI and implicit bias are intertwined, and we have no way to fully clean a dataset of possible bias. As Maria Axente mentioned in her talk with AIBE, AI research output should be regulated from an ethical standpoint at every step of the process. This way, we can ensure AI contributes positively to the betterment of our society.


Written by Wassim Boutabratine

