A Layered Approach to Regulating AI

 

As technologies emerge and mature, the need for regulation grows, and artificial intelligence, now an established part of our lives, is no exception. In a polarised climate, with techno-optimists such as Mark Zuckerberg on one side and pessimists led by Elon Musk on the other, it is difficult for governments regulating AI to strike the right balance between fostering innovation and protecting against risk.

In Algorithmic Regulation, political and legal scholars theorise how artificial intelligence might be regulated and what lessons we can learn from other technologies. The impact of AI itself is clearly vast, as machine learning and algorithms find ever greater purchase across the economy. Although debate over the exact role of the state in regulating AI is easily exchanged across the aisle, particularly when framed in grand terms of either stifling innovation or neglecting welfare, it is engagement with specific regulatory proposals that tends to be the most productive.

 

On regulating AI

 

One such proposal, put forward by Jason Lohr, Winston Maxwell and Peter Watts (2019), seeks to create a framework that is layered and differentiated without being overly complex. The primary challenge of regulating AI and other emerging technologies lies in their very nature: they are still emerging. Anticipating future risks from a technology's current trajectory, while addressing present risks without dampening further development, is a daunting task. To be effective, regulation must be dynamic, building on previous laws and fundamentally open to change.

 

The case for sector-specific regulation

 

The layered regulatory model begins with a foundation of existing laws governing liability, private property and contracts. These laws are well established and need little explanation. The second layer, corporate governance, is also a well-established field: companies continually update internal policies to reflect competition and antitrust laws, and will do the same for artificial intelligence.

 

On top of these foundational layers comes sector-specific regulation, in which particular government agencies are tasked with addressing AI concerns in their own sectors. This approach may seem counterintuitive, but AI is so diverse in its applications and consequences that a centralised framework would risk under- or over-regulating. Even agreeing a general-purpose definition of AI with which to begin regulating across the board would be challenging enough.

 

Managing broad risks

 

There is room, however, for a broad regulator of risk that responds to existing impacts and drafts ethical principles to address general concerns about algorithms, for instance. The British government already has such an agency, the Office for Artificial Intelligence, currently a joint unit of two ministerial departments (BEIS and DCMS). White papers published by this office discuss larger principles of accountability and transparency.

 

Lohr, Maxwell and Watts recommend that such an agency measure the significant social consequences of AI, such as media disinformation or privacy harms, in terms of explicit key performance indicators (KPIs). These can then serve as a yardstick for when sanctions might be applied or action taken by sector-specific regulators. Avoiding over-centralisation seems key to regulation that scrutinises technologies whose impact is not yet fully known, without choking innovation.
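To make the yardstick idea concrete, a KPI regime of this kind might work roughly as in the sketch below. This is a minimal illustration in Python, not part of Lohr, Maxwell and Watts' proposal; the indicator names, values and thresholds are all hypothetical.

    from dataclasses import dataclass

    # Illustrative sketch only: the KPI names, measured values and
    # thresholds below are hypothetical, not drawn from any proposal.

    @dataclass
    class KPI:
        name: str
        value: float      # measured outcome, e.g. share of flagged content
        threshold: float  # agreed level at which sector regulators act

    def needs_regulatory_review(kpis):
        """Return the KPIs whose measured value breaches the threshold."""
        return [k for k in kpis if k.value > k.threshold]

    kpis = [
        KPI("disinformation_reach", value=0.08, threshold=0.05),
        KPI("privacy_complaints_per_1k_users", value=1.2, threshold=2.0),
    ]

    for kpi in needs_regulatory_review(kpis):
        print(f"{kpi.name}: {kpi.value} exceeds threshold {kpi.threshold}")

The point of the sketch is the division of labour: a broad risk agency defines and measures the indicators, while crossing a threshold merely triggers review by the relevant sector-specific regulator rather than an automatic sanction.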

 

Businesses and government departments will have to act in concert to ensure that AI advances responsibly. Safety and entrepreneurship need not be at odds if these partnerships are effective and bipartisan. As we look to the future, we need only remember that extremes and broad declarations are rarely as helpful as practical, sustainable policy frameworks, even if it is only the former that make the news.

Written by Stephanie Sheir
