Discussions around artificial intelligence typically raise the longer-term question: are these agents persons, and if so, should we give legal rights to AI? Given that the EU Parliament adopted preliminary legislation addressing this concept in its 2017 Civil Law Rules on Robotics, the question is more immediate than some might imagine. This short article sketches some of the pros and cons of attributing legal rights to AI as persons, for those intrigued by the future prospects of machine personhood.
Legal Rights for AI?
The question of legal rights for AI often emerges from talk of human-like machines. Questions of personhood are fundamentally questions of agency, continuity and identity. James Digiovanna’s work on artificial identity explores several problems fundamental to this field, from both social and legal perspectives. How do we hold actors to account if they can fundamentally rewrite their behaviour in mere seconds? Does this make them different persons? Are such ‘para-persons’, who can blend identities with others and reform their personalities on a whim, even capable of responsibility for their actions? Do they even have reason to act morally? As such, if robots come to emulate human behaviour, at what point, if any, should they be afforded legal rights?
Giving AI Rights
Legal rights for AI are a little different from rights per se. As Bryson notes, historically only a ‘small subset’ of humans have been granted the privilege of legal personhood. The granting of rights to persons is therefore a privilege rather than a necessity, a claim she is keen to highlight. AI, for all its seemingly human attributes, is always designed as a tool for human use. Even ambitious projects, like DeepMind’s goal of artificial general intelligence, have the wider aim of fulfilling a role in a functioning human society. For us to create them, we must have purposes and limitations in mind. Yet, with an ever-increasing number of companies and academics supporting initiatives to create artificial general intelligence and machines indistinguishable from (or greater than!) humans, how should we begin thinking about regulating legal personhood for AI? More worryingly, does it even make sense to do so?
Bryson argues strongly against giving robots rights from this instrumental perspective. Rather, she insists legal personhood is ‘a scalar concept, so that an entity can be more or less of a legal person as it possesses more or fewer rights and obligations’. As such, we can afford AI greater or lesser rights depending on its role, and adjust accordingly, keeping in mind the AI’s purpose in serving humans. It is clear that any development in artificial general intelligence is predicated on the ‘good life’: the basic human desire for each individual to achieve a good existence. This is a central tenet of liberalism: that each person is indivisible and functions as a single rational agent in the world. This coherence breaks down when confronted with current and potential developments in technology pertaining to para-persons and ‘Honeycomb’ AGI. Such ‘individuals’, capable of merging their identities with other beings and splitting apart, or of radically rewriting themselves so as to change most, if not all, of their original personality traits and goals, are not subject to the traditional coherence of personhood that we humans are. Indeed, there is little reason to suspect that human beings will retain such coherence if they are able to replace and rewrite parts of their own physical and mental capacities, posing the question: what will the concepts of legal responsibility and personhood in international law look like under these future parameters?
Some Fundamental Issues
Bryson’s focus may well be on the more ‘regular’ legal rights for AI and robotic systems, and I too find it problematic to attribute legal personhood or rights to such systems without clear and obvious reasoning. However, for systems with being-like properties, indistinguishable from Homo sapiens in many or all respects, or indeed surpassing them, it is bold to assume our legal system is even relevant to them, or will survive in its current form. More likely, these new beings will rewrite the fabric of the legal system with their own wills, built around their own understanding and limitations of power. Companies such as DeepMind have already set their sights on achieving these goals. As Bryson notes, it is ‘perfectly possible’ to afford them legal personhood, but as the ‘adults in the room’ it ought to be us who decide if, and when, this should be done, and what purpose it serves for humanity.
The general argument is rooted in an assumption that the law is created for us, human beings, by us, and acts or responds according to our best interests. But what if human beings no longer hold equal power status in a system of new, artificial agents? Some of the more radical futurists in the field would argue that such machines, possessing the powers of a superintelligence far in excess of a human being, would hold the greatest intellectual power on the planet. As god-like machines, the power to make law may ultimately rest with them, placing our most potent decision-making power in their hands. At that point, deciding who controls legal rights for AI may well have been withdrawn from the merely human domain, leaving the question moot.
Here is where Bryson’s case faces its long-term challenge. The law, by and large, emerges from the balance and spectrum of power in a given world order, alongside human norms stemming from our essential similarity as entities. Thus, when Bryson asserts that the paradigmatic question here is ‘Does endowing robots with this legal right or that legal obligation further the purposes of the legal system?’, she misses the foundational point. Legal systems are created by us, but from a sense of power parity between legal persons, on a relativistic scale. They emerge from the relative closeness of our power to one another, and our relatively equal comprehension of morality, even if they afford legal rights to AI. The danger of AI emerging that swiftly alters this system’s foundations, without proper forethought, is serious enough that it can no longer be regarded as fantasy. Pursuing the ‘emotional or economic appeal’ of legal rights for AI persons at the expense of the system is a path that ought to be avoided, for the integrity of the social system as a whole.
If this leaves you with a twinge of despair over the long-term prospects for AI, perhaps I can offer some comfort as we draw to a close. As alluded to in prior articles here on the AIBE blog, the most important aspect of AI is how it is developed and deployed. The manner in which it is created is every bit as important as this later stage of implementation, for a new arms race in the sector could spell disaster if it concentrated power in too few, or too Machiavellian, actors. Our task moving forward is to develop these systems, whether they require legal rights or not, with an eye on how they serve us, and to establish their rights accordingly. More than this, we must ensure our efforts are as collaborative as possible, to minimise the risk of ‘break-out’ AI escaping half-baked from rogue laboratories in the depths of superpower military zones, and to keep the power of legal regulation safely in the hands of their human creators.
Written by Daniel Skeffington