The Report


This research project identified 634 soft law programs directed at methods and applications of AI. In compiling these data, we learned a great deal about the state of AI soft law. For instance, this type of governance is a relatively recent phenomenon, with over 90% of programs created between 2017 and 2019. We dispelled the notion that these instruments are uniquely suited to private sector self-regulation, since the largest group of programs, approximately 36%, was generated by public sector entities. We found that most programs originated in a cluster of high-income jurisdictions, dominated by the US, the UK, and Europe, or were global in nature. We confirmed that soft law's defining characteristic, its voluntary nature, continues to be a leading disadvantage, as 69% of programs do not publicly list enforcement or implementation mechanisms. Lastly, we created a library of over 6,000 excerpts that catalog the text of programs using 15 themes and 78 sub-themes.

Soft law is not a panacea or silver bullet. By itself, it cannot solve all of the governance issues that AI poses for society. Nevertheless, whether by choice or necessity, soft law is playing, and will continue to play, a central role in the governance of AI for some time. As such, it is important to build upon the lessons that emerge from this research to make soft law as effective and credible as possible, so that it can address the governance challenges of AI systems, including safety, reliability, privacy, transparency, fairness, and accountability.

The ultimate goal of this research project is to provide decision-makers with evidence, practices, and recommendations that can be harnessed to enhance soft law programs. Through this information, our hope is that all parties can improve how they manage the applications and methods of AI under their responsibility.