In light of potential safety, mobility, and environmental benefits, autonomous vehicles (AVs) are often hailed as a socially and ethically desirable technology. Yet beyond this optimistic appraisal, self-driving AI is plagued with ethical concerns. On top of the accident-scenario dilemmas that have received ample media attention, such as choosing between pedestrian and passenger lives, the technology is exploitable, encourages risk-taking behaviour, and is not sufficiently developed to account for blind spots and moral nuances.

The exploitability of data-reliant technology

As with other data-driven businesses like retail and entertainment, commodifying the data generated by AVs can be profitable for manufacturers. The dependency of self-driving technology on data thus creates a highly exploitable analytics device with promising commercial potential. In what could be described as “big transportation data”, AVs may flood the corporate ecosystem with information that could have unforeseeable consequences in both commercial and social realms. Think of an invasive technology that knows your usual driving routes, destinations, and behavioural patterns. Transportation would become just another avenue through which corporations can influence how people consume and even how they behave.

Incentivising riskier driving

While most efforts in the development of AVs have been directed at creating more efficient algorithms for traffic safety, the response of human drivers to an AV environment has largely been overlooked. With increasing exposure to AVs on the road, human drivers could become less likely to exercise precaution because of a perception of greater safety. “Human drivers perceive AVs as intelligent agents with the ability to adapt to more aggressive and potentially dangerous human driving behaviour”, creating a moral hazard.

The introduction of AVs in the transportation system also complicates ethical and legal considerations by adding more agents to the mix such as hardware and software manufacturers. Thus, to mitigate moral hazard and regulate traffic, lawmakers aim to capture the complex interactions amongst the players involved. Research teams have found that a game-theory-based liability policy would be effective at reducing driver complacency and managing AV manufacturers’ assessment of traffic safety in relation to production costs.
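The game-theoretic reasoning behind such a liability policy can be sketched as a toy two-player game between a driver (cautious vs. aggressive) and a manufacturer (high vs. low safety investment). All payoff numbers and the liability parameter below are invented for illustration and are not taken from the cited research:

```python
import itertools

def payoffs(driver, maker, liability):
    """Hypothetical payoffs: aggressive driving and low safety investment
    both raise accident risk; a liability penalty makes both parties
    internalize more of the accident cost."""
    risk = (0.1 if driver == "cautious" else 0.4) * \
           (0.5 if maker == "high" else 1.0)
    # Aggressive driving saves time (higher base utility) but costs more
    # in expectation as the liability penalty grows.
    driver_u = (2 if driver == "aggressive" else 1) - risk * (2 + liability)
    # Low safety investment is cheaper to produce but riskier.
    maker_u = (3 if maker == "low" else 2) - risk * liability
    return driver_u, maker_u

def nash_equilibria(liability):
    """Return pure-strategy profiles where neither player gains by deviating."""
    drivers, makers = ["cautious", "aggressive"], ["high", "low"]
    equilibria = []
    for d, m in itertools.product(drivers, makers):
        du, mu = payoffs(d, m, liability)
        best_d = all(du >= payoffs(d2, m, liability)[0] for d2 in drivers)
        best_m = all(mu >= payoffs(d, m2, liability)[1] for m2 in makers)
        if best_d and best_m:
            equilibria.append((d, m))
    return equilibria
```

In this invented game, with no liability penalty the equilibrium is (aggressive, low), whereas a sufficiently large penalty shifts it to (cautious, high), mirroring the claimed effect of such a policy on driver complacency and manufacturers' safety investment.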

Blind spots

Another issue with AVs arises when they learn from training data sets that do not match reality. The AI systems powering driverless cars are trained extensively in virtual simulations, but sometimes an unexpected situation in the real world should alter the car’s behaviour and does not, exposing a blind spot in the programming. At a large scale, these faults could have devastating effects, so their abundance and severity should be accounted for when considering whether or not the technology in its current state is suitable for the transportation system.

In order to minimize blind spots, MIT researchers have developed an approach in which, following simulation training, human drivers provide error signals when the system’s actions are deemed unacceptable. The feedback from different drivers for similar events is then compiled, each event is categorized as either acceptable or unacceptable, and certain events can be labeled as blind spots accordingly.
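The aggregation step described above can be sketched roughly as follows. This is a minimal illustration, not the MIT team’s actual implementation; the event identifiers, feedback values, and acceptance threshold are all hypothetical:

```python
from collections import defaultdict

# Hypothetical driver feedback collected after simulation training:
# (event_id, acceptable) pairs, where True means a driver judged the
# system's action acceptable in that situation.
feedback = [
    ("merge_near_truck", False), ("merge_near_truck", False),
    ("merge_near_truck", True),
    ("stop_at_crosswalk", True), ("stop_at_crosswalk", True),
]

def label_events(feedback, threshold=0.5):
    """Group feedback by event and flag low-acceptance events as blind spots."""
    grouped = defaultdict(list)
    for event, acceptable in feedback:
        grouped[event].append(acceptable)
    return {
        event: ("blind spot" if sum(votes) / len(votes) < threshold
                else "acceptable")
        for event, votes in grouped.items()
    }

labels = label_events(feedback)
```

Here an event most drivers rejected (the merge) is labeled a blind spot, while a consistently approved one (the crosswalk stop) is labeled acceptable.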


A lack of moral nuance

The simplistic approach currently used to address the ethical considerations of AVs is inadequately designed to account for circumstantial differences and moral nuances. Researchers propose the Agent-Deed-Consequence (ADC) model as a framework for making moral judgements based on intent, action, and outcome, which would allow the AI to have flexibility and stability similar to human moral judgement. Intent is an important distinction, since vehicle terror attacks are highly effective, difficult to prevent, and becoming more common. As a result, rigorous testing with driving simulation studies and better protocols should be implemented to prevent the use of self-driving technology with malicious intent and to better assess the morality of traffic scenarios involving AVs.
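The intuition of the ADC model can be illustrated with a toy evaluation that combines the three components. The boolean inputs and the majority-style rule below are invented for illustration and are far simpler than any serious implementation of the framework:

```python
def adc_judgement(agent_good: bool, deed_good: bool,
                  consequence_good: bool) -> str:
    """Toy Agent-Deed-Consequence evaluation: a traffic scenario is judged
    by how many of intent (agent), action (deed), and outcome (consequence)
    are positive."""
    score = sum([agent_good, deed_good, consequence_good])
    if score == 3:
        return "morally acceptable"
    if score == 0:
        return "morally unacceptable"
    return "morally ambiguous"

# A swerve that breaks a traffic rule (bad deed) but with good intent
# and a good outcome falls into the ambiguous middle ground:
verdict = adc_judgement(True, False, True)  # "morally ambiguous"
```

Even this crude sketch shows why intent matters: the same deed and outcome are judged differently depending on the agent, which is exactly the distinction a purely outcome-based approach misses.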


AVs add further dimensions to issues like data privacy and to the legal and moral questions of transportation. For a disruptive technology this is natural, so the question is rather how to advance and streamline its integration. Ultimately, to engage with and understand the phenomenon of AV ethics more accurately, attention should not focus only on accident-type dilemmas, but on weighing the challenges relating to the design, capacity, limitations, and societal impacts of AVs.


Written by Miguel Larrucea Long 

