About the Project

AI risks and rewards

Artificial intelligence (AI) is the most important technology of the twenty-first century. It will profoundly impact every industry and every segment of society. Some of AI’s clear potential benefits include safer and more convenient transportation, more efficient and productive manufacturing, better medical diagnostics and therapeutics, faster scientific breakthroughs, and improved personalized selection of goods, services, and relationships. Like any new technology, AI also presents problems and risks. Examples include potential accidents from computer errors, unintended biases hidden in algorithms, new intrusions into personal privacy, technological unemployment, and potential destabilization of the existing social, economic, and geopolitical status quo.

The governance problem

The central question facing policy makers around the world is how to manage these concerns. While overly restrictive government regulation could stifle innovation and block AI’s potential benefits, a governance vacuum can create regulatory uncertainty that discourages investment while also leaving citizens vulnerable to potential harms. Ideally, governance of AI would effectively address its risks and bolster public confidence, while evolving with the technology rather than impeding its progress. Traditional legal and regulatory approaches, such as legislation and administrative agency rulemaking, take far too long to respond effectively to changes in the technology, with new rules growing obsolete even before they come into effect.

Soft Law: A new approach

An alternative approach that may hold promise is known as “soft law”—mechanisms that set forth substantive expectations but are not directly enforceable by government. Soft law offers some important advantages as a governance strategy for AI: it is flexible and adaptive, it is cooperative and inclusive, it incentivizes rather than punishes, and it can apply internationally. A number of interesting soft law instruments for AI have already been proposed, including private standards, voluntary programs, professional guidelines, codes of conduct, statements of principles, and other similar instruments.

Leading experts and scholars in governance and in AI technology will research, analyze, and debate various soft law mechanisms as potential governance approaches for AI. This effort includes three stages of research and analysis, focusing respectively on the past, the present, and the future. In the first stage, four leading scholars analyze the rich history of soft law governance of technology. Their research provides a substantive analysis of the strengths and weaknesses, successes and failures, and lessons for AI from past soft law approaches to the governance of biotechnology, nanotechnology, information and communication technologies, and environmental technologies.

We created a publicly accessible database, intended as a resource for further research, in which we collected, compared, analyzed, and organized more than 600 soft law programs directed at AI. We identified key substantive themes and recommendations that are common to most of the programs, and evaluated how the wording of the substantive provisions affects their interpretation, implementation, and compliance. The database provides a typology of the structural or procedural dimensions of each program, including the format of the governance instrument (e.g., standard, principle, code of conduct), the type of entity that proposed the program, the entities that are subject to the program, how it will be implemented, its sources of funding and support, and any incentives or assurances of compliance.
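As a rough illustration of the typology described above, the sketch below shows how a single database entry might be represented in code. The class and field names are hypothetical assumptions chosen for illustration only; they are not the project’s actual schema.

    # Hypothetical sketch of one database entry; field names are illustrative
    # assumptions, not the project's actual schema.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SoftLawProgram:
        name: str                             # title of the program or instrument
        instrument_format: str                # e.g., "standard", "principle", "code of conduct"
        proposing_entity: str                 # e.g., "industry consortium", "government agency", "NGO"
        covered_entities: List[str]           # who the program applies to
        implementation: Optional[str] = None  # how the program will be put into practice
        funding_sources: List[str] = field(default_factory=list)
        compliance_incentives: List[str] = field(default_factory=list)

    # Example record, purely for illustration:
    example = SoftLawProgram(
        name="Example AI Ethics Principles",
        instrument_format="principle",
        proposing_entity="industry consortium",
        covered_entities=["member companies"],
        implementation="voluntary self-assessment",
        compliance_incentives=["public reporting"],
    )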

Finally, contributing scholars provided in-depth analysis, recommendations, and guidance on the substantive content and procedural design of the best soft law approaches for AI going forward. At the completion of the research stages, the project’s draft findings and recommendations were presented and debated at a workshop convened to provide feedback and guidance on next steps.

The data and published materials produced as part of this project will be made publicly available. With these freely available resources, researchers, practitioners, and policy makers will be able to make real progress on the central challenge of how to govern AI for the benefit of all.