A Layered Approach to Regulating AI


As technologies emerge and mature, the need for regulation grows. Artificial intelligence, now an established part of our lives, is no exception. In a polarized climate, with techno-optimists such as Mark Zuckerberg on one side and pessimists led by Elon Musk on the other, it is difficult for governments to strike the right balance between fostering innovation and protecting against risk when regulating AI.

In Algorithmic Regulation, political and legal scholars theorize about how artificial intelligence might be regulated and what lessons can be drawn from other technologies. The impact of AI itself is clearly vast, as machine learning and algorithms find greater and greater purchase across the economy. Although debate on the exact role of the state in regulating AI crosses the aisle easily, particularly when cast in grand terms of either stifling innovation or neglecting welfare, it is argument over specific regulatory proposals that tends to be most productive.


On regulating AI


One such proposal, put forward by Jason Lohr, Winston Maxwell and Peter Watts (2019), seeks to create a framework that is layered and differentiated without being overly complex. The primary challenge of regulating AI and other emerging technologies lies in their very nature: they are still emerging. Anticipating future risks while addressing present ones, all without dampening further development, is a daunting task. To be effective, regulation must be dynamic, building on previous laws and fundamentally open to change.


The case for sector-specific regulation


The layered regulatory model begins with a foundation of existing laws governing liability, private property and contracts. These laws are well established and need little explanation. The second layer, corporate governance, is also well established: companies continually update internal policies to reflect competition and antitrust laws, and will do the same for artificial intelligence.


On top of these foundational layers comes sector-specific regulation, in which individual government agencies are tasked with addressing AI concerns within their own sectors. This approach may seem counterproductive, but AI is so diverse in its applications and consequences that a centralised framework would risk under- or over-regulating. Even agreeing on a general-purpose definition of AI to anchor regulation across the board would be challenging enough.


Managing broad risks


There is room, however, for a broad regulator of risk, one that responds to existing impacts and drafts ethical principles to address general concerns about algorithms, for instance. The British government already has such an agency, the Office for Artificial Intelligence, which currently sits under two other ministerial departments (BEIS and DCMS). White papers published by this office discuss larger principles of accountability and transparency.


Lohr, Maxwell and Watts recommend that such an agency measure the significant social consequences of AI, such as media disinformation or threats to privacy, in terms of explicit key performance indicators (KPIs). These could then serve as a yardstick for when sanctions should be applied or action taken by sector-specific regulators. Avoiding over-centralisation appears key to regulation that scrutinises technologies whose impact is not yet fully known without choking innovation.


Businesses and government departments will have to act in concert to ensure that AI advances responsibly. Safety and entrepreneurship need not be at odds if these partnerships are effective and bipartisan. As we look to the future, we need only remember that extremes and broad declarations are rarely as helpful as practical, sustainable policy frameworks, even if it is the former that makes the news.

Written by Stephanie Sheir

