As algorithms mature, existing legal models begin to show their age. Autonomous vehicles, or AVs, have become a topic of national conversation, generating equal parts optimism and pessimism about the future of personal transport. How our lives are shaped by the advent of such technologies will depend in large part on how the law adapts to a changing world. The question of who will be held responsible, the problem of accountability, is a significant one.

 

One of the greatest quandaries that algorithms pose is that of accountability, both of companies and of the algorithms themselves. Holding large, profitable corporations accountable to government has always been challenging, but the capacity of some algorithms to make independent decisions catapults them into a newer, murkier world of responsibility.

 

This algorithmic-level accountability problem is often called the black box problem. Blame and responsibility become increasingly difficult to attribute once autonomous algorithms are used. Such algorithms use advanced machine learning, particularly deep learning, to infer patterns from data through many layers of computation loosely inspired by the human brain. They can make decisions that are difficult to explain, to the point where even the designers themselves cannot understand how an algorithm independently came to its conclusion. This is compounded by the fact that machines cannot explain their behaviour as readily as a human can, since advanced pattern-seeking capability is not necessarily accompanied by self-awareness.
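To make this concrete, the sketch below is a deliberately toy example (not any real perception system): it shows why such models resist explanation, because the output is simply arithmetic over thousands of learned weights, none of which corresponds to a human-readable rule.

```python
# A minimal sketch (hypothetical, not any real AV software) of why a trained
# network is hard to interpret: its "decision" is arithmetic over thousands
# of learned weights, none of which maps to a human-readable reason.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were produced by training on labelled driving data.
W1 = rng.normal(size=(64, 128))   # layer 1: 64 sensor features -> 128 hidden units
W2 = rng.normal(size=(128, 2))    # layer 2: 128 hidden units -> 2 output scores

def classify(sensor_features):
    """Return scores for ['obstacle', 'ignore'] from raw sensor features."""
    hidden = np.maximum(sensor_features @ W1, 0.0)   # ReLU activation
    return hidden @ W2                               # output scores

scores = classify(rng.normal(size=64))
decision = ["obstacle", "ignore"][int(np.argmax(scores))]
print(decision)
# Asking *why* this label was chosen means explaining 64*128 + 128*2 = 8,448
# interacting numbers; there is no single line of logic to point to.
```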

 

The black box problem is clearest in the case of self-driving cars, which raise new questions of liability and responsibility as algorithms become increasingly autonomous. When algorithms make decisions independently of human control, the causal chain of responsibility is weakened, and unforeseen circumstances cannot be accounted for by our current legal models.

 

Elaine Herzberg, the first person to be killed by an AV, serves as a stark example of how corporate liability can be severely undermined by the use of algorithms. Herzberg was killed by a self-driving Uber test vehicle in 2018. The investigation found a variety of contributory factors and proximate causes that made attributing blame difficult. These include the familiar failings of both the pedestrian and the driver: the pedestrian tested positive for drug use, and the test driver was watching The Voice on her phone shortly before the collision.

 

Once the algorithm is factored in, further causes emerge. The first is the failure of the car's sensors, which had been calibrated not to overreact to detected objects, since in previous tests the software had mistakenly recognised shadows and plastic bags as obstacles. The second is the failure of Uber's safety procedures: it was revealed that the test driver was overworked, working alone, and qualified only by virtue of a driving licence, whereas other companies had put two or more engineers into their test vehicles.
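The calibration trade-off described above can be illustrated with a hypothetical sketch (the object names, scores and thresholds below are invented for illustration and are not Uber's actual code): raising the confidence threshold suppresses false alarms from shadows and plastic bags, but can also suppress a genuine pedestrian whose detection score happens to fall below the cut-off.

```python
# A hypothetical illustration of the calibration trade-off: a higher
# confidence threshold means fewer false alarms, but real hazards whose
# scores fall below the cut-off are ignored as well.
def should_brake(detections, confidence_threshold):
    """Brake only for detections the perception stack trusts enough."""
    return any(score >= confidence_threshold for _, score in detections)

# Scores are made-up values for illustration only.
frame = [("plastic bag", 0.35), ("shadow", 0.20), ("pedestrian", 0.55)]

print(should_brake(frame, confidence_threshold=0.30))  # True  -- brakes, but often for harmless objects
print(should_brake(frame, confidence_threshold=0.60))  # False -- the pedestrian is missed as well
```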

 

The fact that the car was autonomous muddles the chain of responsibility. Ultimately, Uber was cleared because it was determined that the test driver should have been paying attention. But in a hypothetical version of this scenario, perhaps when AVs are more ubiquitous, where no human driver is present and the car's algorithm cannot make clear why it failed to see the pedestrian, it would be far from obvious whom to prosecute. One need only imagine the number of people involved in the engineering process to understand the complexity of this problem of responsibility.

 

Elaine Herzberg's death makes clear the need for legal accountability to be reframed in the context of autonomous algorithms. The EU wishes to treat AI liability differently from traditional product and negligence models, with a clearer division of responsibility between designers, manufacturers, service providers and users. How this might work is still very much under review: a European Parliament proposal to establish a legal status of "electronic persons" for sophisticated robots, analogous to corporate personhood, was ultimately rejected by the European Commission in 2018. Clearly progress is being made, but more will have to be done.

 

Written by Stephanie Sheir

 
