Seeking to build a virtual fence dividing North and South Vietnam, the US military forayed into the modern electronic battlefield in the late 1960s. Amongst the tens of thousands of devices it dropped into the Vietnamese foliage were seismic detectors, attuned to minute vibrations in the ground; microphones, listening for guerrilla footsteps; and even olfactory sensors, sniffing out the ammonia in human urine. Data collected from these gadgets were transmitted to computers, and within minutes warplanes would obliterate the algorithmically ordained grid squares. Operation Igloo White was the beginning of a global arms race in automated warfare.

 

Since then, the goal of increasing the ability of weapons systems to detect objects – through humans and computers working symbiotically – has underpinned military thinking among the world’s biggest powers. Electronic warfare has burgeoned and morphed into operations relying on semi-autonomous drones and object-detecting AI. But these methods – collecting data from sensors, processing it with algorithms fuelled by ever greater computing power, and acting on the output faster than a disadvantaged foe can respond – do not come free of ethical questions.

 

Automation bias and false positives

 

When decisions are handled by automated intelligence, and the human actor is largely present to monitor ongoing tasks and execute recommendations, automation bias is likely to occur. At the heart of the problem is the way that machine learning (ML) works – fundamentally, by learning from large sets of data. A core assumption of ML is that the future will not be radically different from the data the model was trained on. The NSA’s SKYNET program is a stark example of this automation bias and its dangers.
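To make that assumption concrete before turning to SKYNET, here is a minimal sketch, using invented one-dimensional data, of a classifier that performs well on data resembling its training set and collapses once the world shifts; none of this reflects SKYNET’s actual features or models.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Train on one "world": class 0 clusters around -2, class 1 around +2.
X_train = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)]).reshape(-1, 1)
y_train = np.array([0] * 500 + [1] * 500)
model = LogisticRegression().fit(X_train, y_train)

# Deploy in a shifted world: class 0 behaviour now looks like the old class 1.
X_new = rng.normal(2, 1, 500).reshape(-1, 1)
y_new = np.zeros(500)

print("accuracy on training-like data:", model.score(X_train, y_train))
print("accuracy after the world shifts:", model.score(X_new, y_new))
```

The model looks nearly perfect until the underlying behaviour changes, at which point it is confidently wrong; a human deferring to it would have no warning.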

First touted as a learning algorithm, SKYNET was a surveillance program that used metadata from cell phones to create profiles of suspected terrorists in Pakistan. It relied on what’s called an “analytic triage”: a score built from some 80 properties of a person’s phone use and travel patterns, used to estimate an individual’s probability of being a terrorist. When the surveillance data raised a red flag, a human fired the missile upon the software’s recommendation – this is known as a “signature strike”. According to the Bureau of Investigative Journalism, the U.S. military has carried out “hundreds” of drone strikes in Pakistan since 2004. Justifiably, when documents on SKYNET were leaked in 2013, public uproar surrounded the fact that these targets (amounting to a death toll of anywhere between 2,500 and 4,000) may in fact have included innocent people.

So what could the innocent deaths be attributed to? The first, straightforward answer has to do with the small sample size. Patrick Ball, director of research at the Human Rights Data Analysis Group, highlights that “There are very few ‘known terrorists’ to use to train and test the model”. A model needs a large set of labelled examples to draw reliable patterns from; with so few positive cases, any flaw in how the NSA trained the algorithm to analyse cellular metadata would make the results unsound.

The second possible answer is that the program’s chance of false positives was high. The NSA stated that the false-positive rate was 0.008%, and some of the NSA’s tests saw error rates as high as 0.18%. Applied to the 55 million citizens the NSA has targeted (around a third of Pakistan’s population), that leaves room for roughly 99,000 people to be wrongly labelled as a terrorist threat. In fact, Al Jazeera’s Islamabad bureau chief was labelled by the program as a “confirmed member of Al-Qaeda” because of his visits to sensitive locations and phone calls with known extremists in the course of his reporting. There is no doubt that SKYNET’s technology has since been refined, but the issue remains that false positives and automation bias are likely to occur. And in an environment where the lives of civilians – among them children, vulnerable groups and journalists – are interwoven with those of extremists, and where cultural differences are poorly understood, innocent lives run the danger of being lost.
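The arithmetic behind those numbers is worth spelling out, because it also shows how small the share of genuine threats among the flagged would be. The sketch below uses the population and false-positive rates reported above; the number of genuine targets and the model’s recall are purely hypothetical, chosen only to illustrate the base-rate problem.

```python
# Base-rate arithmetic for a classifier swept across an entire population.
# Population and false-positive rates come from the SKYNET reporting;
# the number of genuine targets and the recall are hypothetical.

population = 55_000_000
false_positive_rates = [0.00008, 0.0018]   # 0.008% and 0.18%
assumed_true_targets = 100                 # hypothetical
assumed_recall = 0.5                       # hypothetical: the model catches half of them

for fpr in false_positive_rates:
    false_positives = fpr * (population - assumed_true_targets)
    true_positives = assumed_recall * assumed_true_targets
    precision = true_positives / (true_positives + false_positives)
    print(f"false-positive rate {fpr:.3%}: "
          f"~{false_positives:,.0f} innocents flagged, "
          f"precision ≈ {precision:.3%}")
```

Even under these generous assumptions, the overwhelming majority of people flagged would be innocent, which is exactly the danger when such a score feeds a strike recommendation.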

 

The big players and their fully-autonomous weapons

 

Earlier this month, the Pentagon’s Joint Artificial Intelligence Center requested $268 million, roughly triple the figure from the previous year. DARPA’s AI Next campaign alone represents a multi-year investment of more than $2 billion in new and existing programs to create the third wave of AI technologies. This is certainly a riposte to China’s reported $12 billion investment in AI systems in 2017. Other countries, such as the UK and Russia, have rolled out similar plans with hefty budgets (the UK approved a £2.5m project for “drone swarms” alone). The race for the most advanced and precise use of AI in the military is evidently aggressive, and lethal autonomous weapons (LAWs) are likely to be its natural progression.

When spelled out, the chief ethical concern about LAWs echoes Kantian ethics: the claim that all persons are owed respect by virtue of being rational persons, even in war. To make life-and-death decisions absent that relationship is to subject human beings to an impersonal, predetermined process that lacks respect and morality. Conscious of this, the United Nations convenes discussions under the Convention on Certain Conventional Weapons (CCW) in the hope of thwarting, or at least regulating, the development and deployment of LAWs. Players like the UK, Australia, Israel, Russia and the US are, unsurprisingly, forcefully against legal regulation, while much of the rest of the world supports either a total ban or strict legal regulation.

 

White noise, nuisance and nuance

 

The argument given by the key pro-LAWs players is that fully autonomous weapons have the potential to make wars more just. They hold that LAWs can reduce collateral damage by making attacks more precise and, in many cases, deter war altogether. LAWs, they argue, bypass the excesses and caprices of humans, who are capable of committing war crimes, deliberately targeting innocents and harming people even after they have surrendered. The Pentagon’s AI strategy, published in February, says that the technology will be used to “protect against civilian casualties and unnecessary destruction around the world.” But these arguments have a soft underbelly. For one, “deterring war” would be the result of intimidation: the nation standing against the militarised AI superpower would fear its own devastation and cave to the coercion of its opponent, which could in turn lead to economic and resource exploitation. Second, LAWs could make it easier to kill. Video games, in which users shoot targets from behind a screen, have shown that the lack of immediacy causes desensitisation – LAWs take that a step further. Machines will systematically target individuals who fit the precise profile, but unlike humans, they won’t be able to spare children who are being used in wars.

These arguments about making wars fairer (it should be noted that we find this turn of phrase oxymoronic, for it is akin to saying that one can make fire colder, or deforestation more “green”) distract from the more straightforward case for LAWs – they are pragmatic. Developing LAWs would mean that drones no longer need to transmit and receive information, which creates lag and limits where they can operate. It would eliminate the need for a one-to-one pairing of human operator and drone, allowing thousands, or tens of thousands, of LAWs in the sky at once. It would also allow for quicker interventions. Ultimately, the proliferation of LAWs means that killing would become cheap.

If it isn’t realistic to stop military conflicts in the first instance, or the development of autonomous weapons in the second, what ethical constraints could we put in place? The recommendation made by the ICRC, and the more realistic answer to the debate, is that actors in armed conflict must pursue a human-centred approach to the use of ML and AI. The human/machine symbiosis we mentioned at the beginning must be redefined to centre on identifying mistakes and bias. As humans and machines make different kinds of mistakes, the capabilities of each must be used to recognise the other’s shortcomings and errors. Human judgement should be invoked wherever decisions have serious consequences for people’s lives, and to ensure compliance with humanitarian law. The ICRC writes that “these systems may need to be designed and used to inform decision-making at human speed, rather than accelerating decisions to machine speed, and beyond human intervention”. The onus is therefore on the engineers, programmers, data scientists and key military players to work within such constraints in the use and design of these weapons. They ought to ensure transparency in the inputs fed to machine learning systems, strive for reliability in the software, and make allowances for automation bias and false positives. General principles will need to be supplemented by specific principles, guidelines or rules for the use of AI and machine learning in specific applications and particular circumstances.
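As a purely illustrative sketch of what such a human-centred constraint could look like in software, the snippet below gates any recommendation with serious consequences behind an explicit, logged human decision. The types, thresholds and names are hypothetical and not drawn from any real system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable

@dataclass
class Recommendation:
    """A model's output, carrying the evidence a human reviewer needs to see."""
    subject_id: str
    score: float        # model confidence in [0, 1]
    rationale: str      # the features that drove the score

def decide(rec: Recommendation,
           human_review: Callable[[Recommendation], bool],
           serious_consequences: bool,
           audit_log: list) -> bool:
    """The model informs the decision; a human remains accountable for it."""
    if serious_consequences:
        approved = human_review(rec)   # always at human speed, never automatic
    else:
        approved = rec.score >= 0.99   # only low-stakes, reversible actions
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "subject": rec.subject_id,
        "score": rec.score,
        "human_reviewed": serious_consequences,
        "approved": approved,
    })
    return approved
```

The point of the explicit rationale and the audit log is that both human and machine errors remain traceable afterwards, which is what the bias-identification loop described above requires.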

 

Palliative AI at the fringes of war

 

Finally, the picture would not be complete without mention of AI in humanitarian aid, used to alleviate crises caused by protracted wars. Researchers believe that we can harness AI to analyse complex information from potential war zones and predict where peacekeeping efforts should be focused. The United Nations, the World Bank, the International Committee of the Red Cross, Microsoft Corp., Google and Amazon Web Services recently announced an unprecedented global partnership to prevent future famines. With support from leading global technology firms, they are launching the Famine Action Mechanism (FAM), the first global mechanism dedicated to preventing future famines. The initiative will use the predictive power of data to trigger funding through appropriate financing instruments, working closely with existing systems.

Today, 124 million people live in crisis levels of food insecurity, requiring urgent humanitarian assistance for their survival. Over half of them live in areas affected by conflict. AI and ML hold huge promise for forecasting and detecting early signs of food shortages, such as crop failures, droughts, natural disasters and conflicts. AI systems could measure the availability of electricity in the wintertime, or cross-reference population growth with GDP growth to make predictions about mass migrations. They could even look at other factors such as the food production index, a weighted average of food crops that are considered edible and that contain nutrients. Datasets such as announcements by government bodies that will affect a nation’s currency in the long term, or records of military coups, can also allow data scientists to adapt predictions to new developments. The FAM software could also suggest remedies to an impending crisis: if it predicts imminent famine based on historical data and the trajectory of climate change, it could recommend alternative farming methods or alternative food sourcing suited to a changing climate. The vision is one of efficiency gains through the deployment of swift humanitarian aid, funded through pre-planned donations and budget allocations.
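As a toy sketch of how indicators like these might be combined into a famine-risk score, the snippet below trains a simple classifier on invented region-level data. The feature names, values and labels are made up for illustration, and the real FAM models are not public.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Invented region-level indicators of the kind mentioned above; the labels
# ("famine within 12 months") are hypothetical, purely for illustration.
df = pd.DataFrame({
    "electricity_availability":     [0.9, 0.4, 0.7, 0.2, 0.8, 0.3],
    "gdp_minus_population_growth":  [0.02, -0.03, 0.01, -0.05, 0.03, -0.04],
    "food_production_index":        [105, 82, 97, 70, 110, 75],
    "conflict_events_last_quarter": [1, 40, 5, 65, 2, 50],
    "famine_within_12_months":      [0, 1, 0, 1, 0, 1],
})

X = df.drop(columns="famine_within_12_months")
y = df["famine_within_12_months"]
model = LogisticRegression(max_iter=1000).fit(X, y)

# Score a new (equally invented) region to get a famine-risk probability.
new_region = pd.DataFrame([{
    "electricity_availability": 0.35,
    "gdp_minus_population_growth": -0.02,
    "food_production_index": 80,
    "conflict_events_last_quarter": 45,
}])
print("famine risk:", model.predict_proba(new_region)[0, 1])
```

In practice such a score would only be the trigger; as described above, the aim is to release pre-arranged financing and aid before a crisis peaks rather than after.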

 

Written by Nada Fouad

 
