Our perceptions shape how we experience reality, and as what we see online increasingly frames what we perceive, the potential harm of misleading material in the media grows larger than ever. Meanwhile, the technologies that facilitate content manipulation are broadening in application and capability, and with them grow both their exploitability and consumers’ distrust of the information they acquire digitally.
In 2019, months of misinformation and a suspected “Deepfake” video of Gabon’s President Ali Bongo delivering his New Year’s address sparked an attempted coup that resulted in the deaths of two Gabonese soldiers and the arrests of eight others. At a time when people are so reliant on the web as a source of knowledge and “truth”, misinformation and the skepticism that Deepfakes provoke can have, and in this case did have, devastatingly real consequences.
What are Deepfakes and what is the danger?
A Deepfake is AI-generated fake digital content. Think of a counterfeit video of Barack Obama giving a public service announcement that sounds and looks almost identical to the real thing. Due to being highly realistic, and capable of rapid and large-scale distribution, Deepfakes have enormous potential to mislead and manipulate, making them particularly dangerous. It comes as no surprise that fake audio and video content is ranked by experts as the most worrying use of AI in terms of applications for crime and terrorism.
The downsides of Deepfake technology
As with many technologies, the advancement of Deepfake AI has created as many opportunities for exploitation as for value-adding pursuits, if not more. For instance, Deepfake technology could facilitate financial crimes such as market and stock manipulation. Picture Jim Farley, CEO of Ford, suddenly announcing his retirement in a highly convincing fake video that triggers a stock dive. Worse still, growing distrust of digital media could make investors less responsive to unexpected news relayed online, distorting price movements and trading mechanisms.
Deepfakes can also produce strikingly realistic human-like audio, raising several ethical concerns. Synthetic voices may inflame racial and cultural biases, inadvertently diminish genuine interpersonal interaction, impersonate officials, and replace certain jobs. Even text is no longer completely safe, with more advanced deep learning language models like GPT-3 capable of generating convincing fake prose.
An underlying drawback of Deepfakes is how they undermine people’s confidence in the authenticity of information disseminated online across the board, a phenomenon termed “reality apathy”. People become so overwhelmed by endless waves of misleading information that they gradually lose trust and grow numb to everything they hear and see on the web. They are left in a limbo of sorts, uncertain of what to believe and what to disbelieve, meaning it becomes as critical to prove that genuine material is authentic as it is to show that other content is fake.
The redeeming qualities of Deepfake AI
Although the drawbacks of Deepfakes certainly receive more attention in the media and across studies, a compelling case can be made for the benefits and opportunities they create. Used ethically, the technology broadens the possibilities of art, provocation, and expression. As artist-activist Barnaby Francis eloquently puts it, “It’s the perfect art form for these kinds of absurdist, almost surrealist times that we’re experiencing.”
In the entertainment industry, Deepfakes could enable the ‘reanimation’ of the voices and appearances of ill or deceased actors, and more natural dubbing. One might recall a 2019 malaria campaign advertisement in which David Beckham appeared to speak nine different languages. Businesses can similarly benefit from a more versatile e-commerce and advertising environment with greater scope for creativity and persuasion. From a scientific perspective, Deepfake techniques can help detect abnormalities in X-rays and generate virtual chemical molecules, accelerating materials science and medical discoveries.
What should happen with Deepfakes? What is the ethical course of action?
Given the trade-off between the benefits and costs of Deepfake technology, finding a balance becomes an ethical conundrum. Would it be morally acceptable to prohibit or curb the advancement of a technology that could help discover life-saving medicine? How can the advantages and drawbacks be measured and compared? With so many factors at play, striking a socially optimal equilibrium would undoubtedly be a challenge. However, appropriate implementation of education, corporate policy, and Deepfake countermeasure technology would be a step in the right direction.
Greater focus on critical thinking and digital literacy in education would strengthen people’s ability to discern misinformation and fakes, mitigating the “reality apathy” effect. Businesses, and social media corporations in particular, should also enforce ethical policies to regulate content manipulation. Twitter’s polls show consumers support increased controls: nine out of ten respondents said warning labels should be placed next to significantly altered content, and 75% believed accounts that share misleading altered media should face enforcement action.
Additionally, several Deepfake countermeasure technologies show promise. On the detection route, developments include deep neural network architectures that can spot altered media pixel by pixel, and AI algorithms that analyse imperfections unique to the light sensors of specific camera models, or even blood flow as indicated by subtle colour changes in a person’s face. On the digital watermarking route, research has made progress on replacing the typical photo development pipeline with a neural network that is extremely sensitive to manipulation, and on inserting “noise” into media, imperceptible to the human eye, that prevents the material from being used in automated Deepfake software. Although this research is encouraging, it is uncertain whether these projects will remain effective in the long run: the contest between efforts to detect and to conceal Deepfakes is likely to play out as an evolutionary arms race, with each side continually developing more advanced mechanisms to overcome the other.
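To make the watermarking idea above concrete, here is a minimal toy sketch in Python, not any production system: a key-derived pseudo-random noise pattern, far too faint for the eye to notice, is added to an image, and its presence is later verified by correlating the image against the same pattern. The function names (`embed_watermark`, `detect_watermark`) and the correlation threshold are illustrative assumptions, and real watermarking schemes are far more robust to compression and cropping.

```python
import numpy as np

def embed_watermark(image, key, strength=2.0):
    # Derive a reproducible +/-1 noise pattern from the secret key.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    # Add it at low amplitude (a couple of grey levels) so it stays invisible.
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image, key, strength=2.0, threshold=0.5):
    # Regenerate the same pattern from the key.
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=image.shape)
    # Subtract the mean so correlation is dominated by the embedded noise.
    residual = image - image.mean()
    score = np.mean(residual * pattern) / strength
    # A marked image correlates strongly; an unmarked one scores near zero.
    return score > threshold

# Usage: mark a flat grey test image and check detection.
img = np.full((64, 64), 128.0)
marked = embed_watermark(img, key=42)
print(detect_watermark(marked, key=42))  # pattern present
print(detect_watermark(img, key=42))     # pattern absent
```

The same correlation logic also fails for the wrong key, which is the point: only a holder of the key can certify that an image passed through the watermarking step unmodified.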
The big picture
In many ways, the functions and implications of Deepfakes are a microcosm of the larger social issue of the ethics of technology. The tools themselves are neither good nor evil; that is a product of how they are used. In the case of Deepfakes, an industry-wide commitment to regulation, corporate ethics, and education would go a long way toward mitigating the immediate issues presented by highly exploitable tech design.
This approach could even be seen as overly cautious, as some argue that the ramifications of Deepfake AI are overdramatized in the media and online. That may be partly true, but instances like the coup attempt in Gabon, which had lasting consequences, should be read as a clear warning sign of a potentially highly dangerous technology. It is in our best interests to act preemptively against further exploitation and to mitigate the uncertainty and skepticism that Deepfakes create.
If anything, we can learn from Deepfakes. They teach us that the information we gather online, whether audio, photo, video, or even text, can and should no longer hold the same command over our perceived realities.
For a more comprehensive analysis of the benefits, threats, and management of Deepfake AI, I direct you to a detailed Review of the Emergence of Deepfake technology by Mika Westerlund.
Written by Miguel Larrucea Long