In light of their potential safety, mobility, and environmental benefits, autonomous vehicles (AVs) are often hailed as a socially and ethically desirable technology. Yet beyond this optimistic appraisal, self-driving AI is plagued by ethical concerns. On top of accident-scenario dilemmas, such as choosing between pedestrian and passenger lives, which have received ample attention in the media, the technology is exploitable, encourages risk-taking behaviour, and is not sufficiently developed to account for blind spots and moral nuances.

The exploitability of data-reliant technology

As in other data-driven businesses such as retail and entertainment, commodifying the data that AVs generate can be profitable for manufacturers. The dependency of self-driving technology on data thus turns each vehicle into a highly exploitable analytics device with considerable potential for commercialization. In what could be described as “big transportation data”, AVs may flood the corporate ecosystem with information whose commercial and social consequences are hard to foresee. Think of an invasive technology that knows your usual driving routes, destinations, and behavioural patterns. Transportation would become just another avenue through which corporations can influence how people consume and even how they behave.

Incentivising riskier driving

While most efforts in the development of AVs have been directed at creating more efficient algorithms for traffic safety, the response of human drivers to an AV environment has largely been overlooked. With increasing exposure to AVs on the road, human drivers could become less likely to exercise caution because of a perception of greater safety. “Human drivers perceive AVs as intelligent agents with the ability to adapt to more aggressive and potentially dangerous human driving behaviour”, creating a moral hazard.

The introduction of AVs into the transportation system also complicates ethical and legal considerations by adding more agents to the mix, such as hardware and software manufacturers. To mitigate moral hazard and regulate traffic, lawmakers therefore aim to capture the complex interactions amongst the players involved. Research teams have found that a game-theory-based liability policy would be effective at reducing driver complacency while encouraging AV manufacturers to weigh traffic safety against production costs.
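To make the idea concrete, the toy model below sketches how a liability policy can shape the incentives of both parties. It is an illustrative sketch only: the payoff structure, probabilities, costs, and names are assumptions made for exposition and are not taken from the cited research.

```python
# Illustrative sketch only: all numbers, names, and the payoff structure are
# assumptions made to illustrate a game-theory-based liability policy.

from itertools import product

# Hypothetical accident probabilities for each combination of
# (human driving style, manufacturer safety investment).
ACCIDENT_PROB = {
    ("cautious", "high"): 0.01,
    ("cautious", "low"): 0.03,
    ("aggressive", "high"): 0.05,
    ("aggressive", "low"): 0.12,
}

ACCIDENT_COST = 100.0                                 # social cost of an accident (assumed units)
EFFORT_COST = {"cautious": 2.0, "aggressive": 0.0}    # driver's cost of driving carefully
INVESTMENT_COST = {"high": 4.0, "low": 0.0}           # manufacturer's safety spend


def expected_costs(driver_share):
    """Expected cost to each player when the driver bears `driver_share`
    of accident liability and the manufacturer bears the rest."""
    costs = {}
    for style, invest in product(EFFORT_COST, INVESTMENT_COST):
        p = ACCIDENT_PROB[(style, invest)]
        driver = EFFORT_COST[style] + driver_share * p * ACCIDENT_COST
        maker = INVESTMENT_COST[invest] + (1 - driver_share) * p * ACCIDENT_COST
        costs[(style, invest)] = (driver, maker)
    return costs


if __name__ == "__main__":
    for share in (0.0, 0.5, 1.0):
        print(f"driver liability share = {share}")
        for profile, (d, m) in expected_costs(share).items():
            print(f"  {profile}: driver cost {d:.2f}, manufacturer cost {m:.2f}")
```

Sweeping the liability share makes the trade-off visible: shifting liability toward drivers penalises complacent driving, while shifting it toward manufacturers rewards investment in safety.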

Blind spots

Another issue arises when AVs learn from training data sets that do not match reality. The AI systems powering driverless cars are trained extensively in virtual simulations, yet in the real world an unexpected event can fail to alter the car’s behaviour when it should, exposing a blind spot in the programming. At scale, these faults could have devastating effects, so their frequency and severity should be accounted for when considering whether the technology in its current state is suitable for the transportation system.

To minimize blind spots, MIT researchers have developed an approach in which, following simulation training, human drivers provide error signals whenever the system’s actions are deemed unacceptable. The feedback from different drivers on similar events is then compiled, each event is categorized as acceptable or unacceptable, and events can be labelled as blind spots accordingly.
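A minimal sketch of that aggregation step is shown below, assuming a simplified representation in which each piece of feedback is a (situation, label) pair and a fixed disagreement threshold; the data structures, threshold, and function names are illustrative and not taken from the MIT researchers’ implementation.

```python
# Minimal sketch, under the assumptions stated above; not the MIT implementation.

from collections import defaultdict

def find_blind_spots(feedback, threshold=0.5):
    """Group human feedback by situation and flag situations where the share
    of 'unacceptable' labels for the system's chosen action exceeds `threshold`.

    `feedback` is an iterable of (situation_id, label) pairs, where label is
    either "acceptable" or "unacceptable".
    """
    counts = defaultdict(lambda: {"acceptable": 0, "unacceptable": 0})
    for situation, label in feedback:
        counts[situation][label] += 1

    blind_spots = []
    for situation, c in counts.items():
        total = c["acceptable"] + c["unacceptable"]
        if total and c["unacceptable"] / total > threshold:
            blind_spots.append(situation)
    return blind_spots


# Example: several drivers report on the same merging scenario.
reports = [
    ("merge_behind_ambulance", "unacceptable"),
    ("merge_behind_ambulance", "unacceptable"),
    ("merge_behind_ambulance", "acceptable"),
    ("four_way_stop", "acceptable"),
]
print(find_blind_spots(reports))   # -> ['merge_behind_ambulance']
```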


A lack of moral nuance

The simplistic approach currently used to address the ethical considerations of AVs is not designed to account for circumstantial differences and moral nuances. Researchers propose the Agent-Deed-Consequence (ADC) model as a framework for making moral judgements based on intent, action, and outcome, which would give the AI flexibility and stability similar to human moral judgement. Intent is an especially important distinction since vehicle terror attacks are highly effective, difficult to prevent, and becoming more common. As a result, rigorous testing with driving simulation studies and better protocols should be implemented to prevent the malicious use of self-driving technology and to better assess the morality of traffic scenarios involving AVs.
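As a hedged illustration of how an ADC-style evaluation might be encoded, the sketch below scores a scenario separately on agent (intent), deed (action), and consequence (outcome) and combines them; the scoring scale, weights, and example values are assumptions made for illustration, not the researchers’ actual model.

```python
# Hedged illustration of an ADC-style evaluation; scale, weights, and example
# values are assumptions, not the researchers' actual model.

from dataclasses import dataclass

@dataclass
class Scenario:
    agent: float        # moral evaluation of intent, in [-1, 1]
    deed: float         # moral evaluation of the action itself, in [-1, 1]
    consequence: float  # moral evaluation of the outcome, in [-1, 1]

def adc_judgement(s: Scenario, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three ADC components into a single judgement score.
    Positive values suggest moral acceptability, negative values the opposite."""
    wa, wd, wc = weights
    return (wa * s.agent + wd * s.deed + wc * s.consequence) / (wa + wd + wc)

# Same deed and outcome, different intent: a swerve that injures a pedestrian
# by accident vs. deliberately, scored under the assumed scale above.
accidental = Scenario(agent=0.8, deed=-0.4, consequence=-0.9)
deliberate = Scenario(agent=-1.0, deed=-0.4, consequence=-0.9)
print(adc_judgement(accidental), adc_judgement(deliberate))
```

With the deed and consequence held fixed, the difference in intent alone separates the accidental case from the deliberate one, which is precisely the distinction a purely outcome-based rule would miss.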

Outlook

AVs add further dimensions to issues like data privacy and to legal and moral questions in transportation. For a disruptive technology this is natural, so it is more a question of how to advance and streamline its integration. Ultimately, to engage with and understand the phenomenon of AV ethics more accurately, attention should not focus only on accident-type dilemmas, but on weighing the challenges relating to the design, capacity, limitations, and societal impacts of AVs.

 

Written by Miguel Larrucea Long 
