Problems of AI for Government
A degree of uncertainty pervades discussions about AI, driven by popular-science depictions of intelligent machines as possessing either human-level intellect or superintelligence. These scenarios pose interesting thought experiments about how to run and adapt our societies, but they are not imminent or pressing concerns, even by the admission of their most ardent advocates. This article focuses instead on the practical applications and ethical concerns raised by AI for government, particularly in democracies.
Artificial intelligence remains under-theorised within the fields of government and politics, primarily because it is difficult to conduct non-speculative research in the current, indeterminate state of the technology. Its main inroads have come through two emerging spheres of technology – lethal autonomous weapons systems (LAWS) and self-driving cars – owing to the imminent legal and ethical difficulties they raise. Questions arise over the legitimacy and responsibility of these actors in lethal scenarios: who is responsible for a fatality involving an autonomous vehicle? Should an autonomous vehicle alter its course if doing so would harm fewer people overall but require deliberately hitting someone else? These scenarios have brought traditional questions of political and moral philosophy long confined to armchair thinking, such as the Trolley Problem, into sharp, practical, modern relevance.
The problems of law and warfare are multifarious, and will no doubt be covered substantially in subsequent articles. Here, I want to sketch an outline of the different ethical questions surrounding AI and government – questions that have yet to be substantially theorised, yet are essential to thinking about the ethical dimensions of this technology.
Ethical Concerns for Democracy
Big data burst onto the political scene in 2016, first with the Brexit referendum and then with the Trump campaign. Both used data on a massive scale to identify ‘swing’ voters and target them aggressively with high-intensity, short-burst ad campaigns. The pro-Brexit Leave.eu campaign, working through Cambridge Analytica, withheld nearly all of its advertising budget until the final week before the vote, then spent the entire budget in key areas identified by the company. Cambridge Analytica argues that this contributed significantly to the result: undecided voters were swung by a deluge of advertising matched directly to their online personalities, which the company’s systems had harvested and processed to help secure the outcome the Leave campaign desired.
The ethical questions this raises for democracy are multifarious. One is the more familiar concern about the influence of money in election campaigns, but beneath it lies a more foundational concern about the legitimacy of this strategy in democratic practice. To what extent does this type of data harvesting breach consumer consent and corrupt the democratic process? The key takeaway from this incident, which the historian Yuval Noah Harari often alludes to in his less melancholic speeches about the future of big data and AI, is that the deployment of this technology is unbalanced. Currently, only centres of governmental power are using these tools to disrupt, survey and influence populations. Technology deployed to monitor citizens is widespread and well known, from basic data-gathering tools such as CCTV to the more advanced data-harvesting tools of political consultancies. Technology that monitors governments and corporations is not, and one major step towards redressing this mismatch would be the development of democratic tools of governmental accountability.
Another major ethical concern on the horizon for government is the development of a pro-meritocratic discourse that demonises what has been called the ‘useless class’ of people displaced by automation and AI. Depending on the routes taken by those heading the centres of AI power, a highly meritocratic society that automates large proportions of its working base, while holding recalcitrant views on wealth-redistribution schemes such as Universal Basic Income, will find the core social tension between the ‘haves’ and ‘have-nots’ deepening. Building on contemporary global concerns about the sustainability and ethics of current wealth inequalities, particularly in ‘western’ societies, this appears to be the largest hypothetical ethical concern moving into the mid-twenty-first century.
Artificial vs Extended Intelligence
This leads to the more speculative, philosophical concerns around the term ‘AI’ itself, which many view as a misnomer, preferring instead to label the emerging field ‘Extended Intelligence’ (EI). This preference runs from the major thinkers and developers in the field to major business and consultancy firms, from Nick Bostrom to Deloitte and EY. The term captures many of the developments of AI in government while relating them to the gradual ‘improvement’ of human capabilities too, melding human and machine into one continuous spectrum of ‘being’. The literature on this topic is only now emerging, encapsulated by articles on the idea of ‘para-personhood’ and fluid identities that go beyond the scope or intent of this small blog post. However, those interested in business, ethics and good governance would do well to pay attention to these more experimental and theoretical developments as the technology matures. Sooner or later, all these fields will have to contend with some form of existential question over the definition of personhood, autonomy and legal responsibility, and such exploratory pieces help us shape our reactions to these developments ahead of the technology itself.

Deloitte’s analysis of the impact of AI in government offers a useful way into the extended-versus-artificial intelligence debate, characterising the two as facets of the four major roles that AI systems could take in government work: relieving pressure, splitting up and dividing work, replacing workers, and augmenting current capacities. The final two neatly divide the higher-end autonomous and cognitive tasks that define AI from simple robotic and repetitive programs, with AI carrying the connotation of replacing humans and EI that of augmenting them. These higher-domain issues are the larger concern of AI in government moving forwards, beyond the brute-force calculations of relatively ‘basic’ data-processing algorithms like those of Cambridge Analytica.
For now, the concerns and promise of AI in government remain in the former two categories. Thinking about how they will impact society as the latter two come to the fore will be the task of business and government tomorrow. It is the duty of those regulating and implementing these novel, disruptive technologies to grapple with the seriousness of the change they invite to democracy, and with how best to address it. Keeping these concerns in mind as we transition to a more technologically augmented society is a valuable practice for the future of prudent and ethical governance.
Written by Daniel Skeffington