{"id":170,"date":"2021-09-01T20:11:21","date_gmt":"2021-09-01T20:11:21","guid":{"rendered":"https:\/\/lsi.asulaw.org\/softlaw\/?post_type=report&#038;p=170"},"modified":"2021-10-18T17:28:44","modified_gmt":"2021-10-18T17:28:44","slug":"2-7-themes","status":"publish","type":"report","link":"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-7-themes\/","title":{"rendered":"2.7 Themes"},"content":{"rendered":"\n<div id=\"standard-header-block_612fde6108115\" class=\"standard-header alignfull standard-grid\">\n    \n    <div class=\"header-image standard-grid\">\n        <img src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/09\/research-header-1.jpg\">\n\n        <div class=\"page-title\">\n                            \n\n<p class=\"supertitle\"><\/p>\n\n                        <h1>2.7 Themes<\/h1>\n        <\/div>\n    <\/div>\n\n   \n<\/div>\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\" style=\"flex-basis:70%\">\n<p>Every program\u2019s text was classified into 15 themes and further subdivided into 78 sub-themes (see methodology section for details on how these divisions were created). Table 11 presents the top five results in both categories. It finds that education\/displacement of labor is the theme with the highest number of excerpts in the database with 815. This means that text related to education\/displacement of labor were found 815 times throughout the 634 soft law programs. Meanwhile, the sub-theme of general transparency appears in ~43% of programs. Readers of this section will find that each theme contains a description of the sub-theme, a table with the percentage of programs that contain each sub-theme, and representative excerpts. 
The database also contains the prevalence of sub-themes by type of soft law program.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" width=\"1024\" height=\"303\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-13-1024x303.png\" alt=\"\" class=\"wp-image-1498\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-13-1024x303.png 1024w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-13-300x89.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-13-768x227.png 768w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-13.png 1352w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.1 Accountability<\/h2>\n\n\n\n<p>Society is gradually endowing AI-powered systems with the autonomy to make decisions affecting individuals in lethal and non-lethal ways. 
In this theme, readers will find language addressing the continuum of issues related to who bears responsibility for the unplanned actions and accidents caused by AI systems (see Table 12).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" width=\"1024\" height=\"390\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-14-1024x390.png\" alt=\"\" class=\"wp-image-1500\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-14-1024x390.png 1024w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-14-300x114.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-14-768x293.png 768w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-14.png 1028w\" sizes=\"(max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<p>About 15% of programs make general mention of accountability. They allude to the term, loosely define it, or state its importance to the program\/society. A further ~21% recognize the need for measures or mechanisms to ensure that accountability is considered. This is done by suggesting the creation of committees, the implementation of procedures, or anything in between. 
These measures range from the general, such as the statement by the Council of Europe\u2019s Commissioner for Human Rights that \u201cmember states must establish clear lines of responsibility for human rights violations that may arise at various phases of an AI system lifecycle\u201d [126], to the specific, such as the American Civil Liberties Union\u2019s declaration on accountability: \u201can entity must maintain a system which measures compliance with these principles including an audit trail memorializing the collection, use, and sharing of information in a facial recognition system\u201d [127].<\/p>\n\n\n\n<p>Some programs take a position as to who is primarily responsible for an AI system\u2019s actions. Around 11% single out organizations. They discuss the need to establish the type and extent of liability borne by firms or declare outright that legal persons should be the entities accountable for AI. This point of view is shared by the Association for Computing Machinery: \u201cinstitutions should be held responsible for decisions made by the algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results\u201d [72].<\/p>\n\n\n\n<p>With an opposing view, ~9% of programs affirm that humans, in the form of individual developers, operators, or decision-makers, are ultimately responsible for AI systems: \u201cresponsibility for these insights falls to humans, who must anticipate how rapidly changing AI models may perform incorrectly or be misused and protect against unethical outcomes, ideally before they occur\u201d [128]. In between these positions, there is a 3% segment holding both parties accountable. They either differentiate the types of activities for which humans and non-humans are responsible, assign responsibility to both, or are unsure as to which should bear the consequences:<\/p>\n\n\n\n<ul><li>\u201cLegal responsibility should be attributed to a person. 
The unintended nature of possible damages should not automatically exonerate manufacturers, programmers or operators from their liability and responsibility\u201d [129]; and,<\/li><li>\u201cInstitutions and decision makers that utilize AI technologies must be subject to accountability that goes beyond self-regulation\u201d [51].<\/li><\/ul>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.2 Artificial general intelligence<\/h2>\n\n\n\n<p>Defined as \u201chighly autonomous systems that outperform humans at most economically valuable work\u201d, artificial general intelligence (AGI) is the next step in this technology\u2019s evolution [41]. Few programs spotlight AGI, which is unsurprising given that it is thought to be decades away from development. When discussed, ~1.4% of programs express traits desirable in such systems (see Table 13). The Chinese Academy of Sciences published a number of principles detailing the philosophy that should guide the creation of AI-based conscious beings, including empathy, altruism, and a sense of how to relate to current and future humans [130].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"574\" height=\"300\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-15.png\" alt=\"\" class=\"wp-image-1501\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-15.png 574w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-15-300x157.png 300w\" sizes=\"(max-width: 574px) 100vw, 574px\" \/><\/figure><\/div>\n\n\n\n<p>About 1.6% of programs discuss how AGI should be developed and managed by decision-makers. This includes the research agenda to be prioritized (e.g. \u201cautonomous decomposition of difficult tasks, as well as seeking and synthesizing solutions\u201d [120]) or what governance mechanisms ought to be implemented (e.g. 
\u201curges the Commission to exclude from EU funding companies that are researching and developing artificial consciousness\u201d [131]).<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.3 Bias<\/h2>\n\n\n\n<p>AI systems inevitably perpetuate the prejudices embedded in their design or emanating from the underlying data selected for their training. Over a third of the soft law in this database recognizes bias or discrimination in a general manner by stating the term or emphasizing the importance of avoiding its occurrence.<\/p>\n\n\n\n<p>In tackling this issue, programs take different approaches (see Table 14). In ~15% of cases, diversity, in the sense of creating a multidisciplinary workforce, is presented as a tool to combat the bias of AI systems: \u201cwe strive to use teams with people from diverse backgrounds to design solutions using artificial intelligence\u201d [132] and \u201cunless we build AI using diverse teams, data sets and design, we are at risk of repeating the inequality of previous revolutions\u201d [133].<\/p>\n\n\n\n<p>Meanwhile, there are programs that highlight the relevance of including populations that are generally excluded due to demographic or health characteristics (~10%): \u201cAI should facilitate the diversity and inclusion of individuals with disabilities in the workplace\u201d [134]. Lastly, ~16% of programs address bias by suggesting actionable mechanisms to decrease its impact: \u201ca board should be created at EU level to monitor risks of discrimination, bias and exclusion in the use of AI systems by any organisation\u201d [135].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.4 Displacement of labor and education<\/h2>\n\n\n\n<p>The impetus for this theme was to unearth the relationship between the labor market and AI. 
Closely linked to it are the educational and research initiatives highlighting alternatives to ameliorate the overarching effects of this technology on population dynamics or to improve its contributions to society. Accordingly, the text was divided into three groups: labor, education, and research (see Table 15).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"982\" height=\"538\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-16.png\" alt=\"\" class=\"wp-image-1502\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-16.png 982w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-16-300x164.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-16-768x421.png 768w\" sizes=\"(max-width: 982px) 100vw, 982px\" \/><\/figure><\/div>\n\n\n\n<p>The first group clusters the perceived consequences of AI on labor. It begins with the ~16% of programs that mention the possibility of job loss and the importance of avoiding it: \u201cwe have a responsibility to ensure that vulnerable workers in our supply chain are not facing significant negative impacts of AI and automation\u201d [136] and \u201cobserve principles of fair employment and labor practices\u201d [52]. A second label, ~6% of the sample, proposes a variety of alternatives to fight job loss, such as incentivizing communication-based activities: \u201call stakeholders should engage in an ongoing dialogue to determine the strategies needed to seize upon artificial intelligence\u2019s vast socio-economic opportunities for all, while mitigating its potential negative impacts\u201d [137]. 
The last label within this group, ~7% of the database, stresses the opposite of the first two: the labor efficiencies possible through AI, such as \u201csimplifying processes and eliminating redundant work increases productivity\u201d [138] and \u201caccessible AI promotes growth and increased employment, and benefits society as a whole\u201d [139].<\/p>\n\n\n\n<p>Education is inextricably linked to preparing future generations for the demographic shifts caused by this technology. One of the most popular sub-themes in this database, appearing in ~38% of programs, remarks on the importance of providing the pedagogical and andragogical tools to facilitate AI literacy:<\/p>\n\n\n\n<ul><li>\u201cthe IBM company will work to help students, workers and citizens acquire the skills and knowledge to engage safely, securely and effectively in a relationship with cognitive systems, and to perform the new kinds of work and jobs that will emerge in a cognitive economy\u201d [78]; and,<\/li><li>\u201cupdate the education curriculum to refocus skills sets on AI under the umbrella of media and information literacy in preparation of the next generation of workers for AI adoption\u201d [140].<\/li><\/ul>\n\n\n\n<p>Another sub-theme details the skills or retraining (~12%), not necessarily related to AI, that individuals whose livelihoods are directly affected by this technological shift will need in order to continue earning a living: \u201clow-skilled workers are more likely to suffer job losses\u2026improving skills and competences is thus important to enable wider participation in the opportunities offered by new forms of work and for promoting an inclusive labour market\u201d [141]. 
To complement both of these efforts, a small percentage of programs (~2%) refer to the ability of AI to aid in the provision of education: \u201cconversational agents have huge potential to educate students\u2026AI enhances our ability to understand the meaning of content at scale and serve it in meaningful and customized ways\u201d [142].<\/p>\n\n\n\n<p>While organizations await the influx of a new wave of AI-literate workers, there are active efforts to recruit experts and specialists from around the world (~10%): \u201cit is widely acknowledged that there is a skill gap in the agritech space and companies do not have time to wait for New Zealand to develop talent entirely on its own. Immigration policy should be continually monitored to allow rapid importing of the skills across the continuum to meet expected growing demand\u201d [30].<\/p>\n\n\n\n<p>The third grouping in this theme centers on research. All types of organizations (e.g. universities, firms, and governments) are incentivizing basic and applied AI research to improve their competitiveness. There are programs that describe research projects currently in progress or ideas that should be undertaken (~22%): \u201cresearch Councils could support new studies investigating the consequences of deepfakes for the UK population, as well as fund research into new detection methods\u201d [143]. The last sub-theme links research with society (~16%). Here, readers will find text on technology transfer opportunities, commercialization of AI discoveries, or partnerships with academia to bring research to the public: \u201cDoD should advance the science and practice of VVT&amp;E of AI systems, working in close partnership with industry and academia\u201d [144].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.5 Environment<\/h2>\n\n\n\n<p>The impact of AI on the environment is not covered extensively in the database (see Table 16). 
Relating the technology to its planetary impact through general statements occurred in ~9% of programs. One professional association asks its members to \u201cpromote environmental sustainability both locally and globally\u201d [43].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"582\" height=\"300\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-17.png\" alt=\"\" class=\"wp-image-1503\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-17.png 582w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-17-300x155.png 300w\" sizes=\"(max-width: 582px) 100vw, 582px\" \/><\/figure><\/div>\n\n\n\n<p>Specific mention of AI\u2019s capacity to improve the conservation of resources through efficiencies (3%) or to assist in disaster management scenarios (~0.5%) was even rarer: \u201cAI can highly improve the energy sector in Mauritius namely by\u2026using IoT and neural algorithms to increase energy efficiency\u201d [145] and \u201cAI can be used in many aspects of preparation for and response to natural disasters and extreme events, such as hurricane winds and storm-related flooding\u201d [146].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.6 Ethics<\/h2>\n\n\n\n<p>This theme captures the moral compass or ideals that guide how organizations employ AI (see Table 17). At the surface level, the term ethics is mentioned without offering much detail as to its meaning (~19%). 
A similar phenomenon occurs with values (~25%) and culture (~5%), where, in many cases, the terms are used broadly: \u201cenable a kind of a \u2018passport of values\u2019 whereby systems can learn one\u2019s personal value preferences, an important part of prosocial behavior\u201d [147] and \u201cthe development of AI technologies and their effects must always be in accordance with current legislation and respect local cultural and social norms\u201d [148].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"514\" height=\"466\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-18.png\" alt=\"\" class=\"wp-image-1504\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-18.png 514w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-18-300x272.png 300w\" sizes=\"(max-width: 514px) 100vw, 514px\" \/><\/figure><\/div>\n\n\n\n<p>Many programs express hopeful thoughts or commitments about the need to ensure that the technology has a positive impact on society (~19%): \u201cdata and AI should enhance societies, strengthen communities, and ameliorate the lives of vulnerable groups\u201d [61]. Further, rights, in particular human rights, are extolled as a vital requirement that the technology must respect (~17%): \u201cA\/IS shall be created and operated to respect, promote, and protect internationally recognized human rights\u201d [149].<\/p>\n\n\n\n<p>Conversely, there are programs that emphasize AI\u2019s negative ethical consequences (~9%): \u201ccalls on the Commission to propose a framework that penalises perception manipulation practices when personalized content or news feeds lead to negative feelings and distortion of the perception of reality that might lead to negative consequences\u201d [131]. Almost a third of programs (~29%) mention or suggest actions to ensure AI remains ethical. 
These include measures such as: \u201cban AI-enabled mass scale scoring of individuals as defined in our Ethics Guidelines\u201d [150] and \u201cestablish a charter of ethics for Intelligent IT to minimize any potential abuse or misuse of advanced technology by presenting a clear ethical guide for developers and users alike\u201d [151].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.7 Health<\/h2>\n\n\n\n<p>The convergence of health technologies and AI promises to deliver significant added value to the provision of medical services (see Table 18). About 6% of the database stresses the variety of benefits possible for this application of AI. These descriptions range from general statements such as \u201ccertain medical treatments or diagnoses might be carried out better with a robot\u201d [152] to specific advantages: \u201ccarry out large-scale genome recognition, proteomics, metabolomics, and other research and development of new drugs based on AI, promote intelligent pharmaceutical regulation\u201d [34]. 
Furthermore, some programs (~3%) spotlight the health and well-being of patients as a central concern of the field: \u201ca guiding principle for both humans and health technology is that, whatever the intervention or procedure, the patient\u2019s well-being is the primary consideration\u201d [153].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"854\" height=\"466\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-19.png\" alt=\"\" class=\"wp-image-1505\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-19.png 854w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-19-300x164.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-19-768x419.png 768w\" sizes=\"(max-width: 854px) 100vw, 854px\" \/><\/figure><\/div>\n\n\n\n<p>To ensure the enduring nature of this technology\u2019s advantages, programs stress its development, its governance, and the ability of individuals to access it. In terms of development, having manufacturers and clinicians work together can help ensure that AI is safely created and implemented effectively (~2%): \u201cclinicians can and must be part of the change that will accompany the development and use of AI\u201d [154]. Text that delves into the governance of healthcare AI attempts to verify that any device that assists in making life and death decisions does so in a manner that follows agreed upon practices or industrial standards (~7%). One standard created specifically for this purpose is aimed at helping manufacturers \u201cthrough the key decisions and steps to be taken to perform a detailed risk management and usability engineering processes for medical electrical equipment or a medical electrical system, hereafter referred to as mee or mes, employing a degree of autonomy\u201d [155]. 
Finally, if this technology is out of reach for large swaths of the population, its ability to contribute positively to society will be hampered. Statements discussing the need to make this technology available are represented in ~2% of the sample: \u201cfair distribution of the benefits associated with robotics and affordability of homecare and healthcare robots in particular\u201d [156].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.8 Meaningful human control<\/h2>\n\n\n\n<p>AI systems are capable of decision-making at speeds that are beyond human capabilities. This theme discloses the desire to rein in the technology through diverse means (see Table 19). At its most basic level, it remarks that humans need to be involved in the operation of AI systems (~28%), be it through governance (\u201cwe can make sure that robot actions are designed to obey the laws humans have made\u201d [157]) or mechanically (\u201cwe are able to deactivate and stop AI systems at any time (kill switch)\u201d [33]).<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"690\" height=\"466\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-20.png\" alt=\"\" class=\"wp-image-1506\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-20.png 690w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-20-300x203.png 300w\" sizes=\"(max-width: 690px) 100vw, 690px\" \/><\/figure><\/div>\n\n\n\n<p>The meaningful human control sub-themes also describe a continuum of human participation in AI decision-making. 
For instance, at any time, individuals should be given the ability to opt out of these systems (~4%), \u201cto establish a right to be let alone, that is to say a right to refuse to be subjected to profiling\u201d [158], or be free to make their own decisions without being nudged in a particular direction (~7%), \u201calgorithms and automated decision-making may raise concerns over loss of self-determination and human control\u201d [159].<\/p>\n\n\n\n<p>Prior to the engagement of these systems, about 16% of programs discuss the need to involve stakeholders (e.g. the public and affected entities) in their development: \u201cno jurisdiction should adopt face recognition technology without going through open, transparent, democratic processes, with adequate opportunity for genuinely representative public input and objection\u201d [95]. Meanwhile, ~7% stipulate that consent of any kind should be requested from users before they participate in processes that involve an AI system: \u201cadvocate for general adoption of revised forms of consent\u2026for appropriately safeguarded secondary use of data\u201d [128].<\/p>\n\n\n\n<p>Once a decision has been enacted by this technology, a proportion of programs (~14%) advocate for the right of individuals to seek an explanation for it, have it overturned, or dispute it after the fact: \u201cmake available externally visible avenues of redress for adverse individual or societal effects of an algorithmic decision system, and designate an internal role for the person who is responsible for the timely remedy of such issues\u201d [160].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.9 Privacy<\/h2>\n\n\n\n<p>Freedom from surveillance or wholesale analysis of an individual\u2019s data exhaust is a timely subject (Table 20). This is especially the case in an era where AI applications can intrude into the public\u2019s life in ways that no human ever could in the past. 
About 22% of programs mention the word privacy in a general manner or stress the importance of its protection: \u201cany system, including AI systems, must ensure people\u2019s private data is protected and kept confidential\u201d [161].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"584\" height=\"364\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-21.png\" alt=\"\" class=\"wp-image-1507\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-21.png 584w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-21-300x187.png 300w\" sizes=\"(max-width: 584px) 100vw, 584px\" \/><\/figure><\/div>\n\n\n\n<p>In second place, at ~21%, programs mention systems or mechanisms that may protect users\u2019 information: \u201crestricting third party access unless disclosed and necessary to the original purpose or application as stated in the Purpose Specification or in response to a legal order\u201d [162]. To complement these mechanisms, ~18% of programs discuss their compliance with regulations whose purpose is primarily to ensure privacy: \u201cwhile there is no single approach to privacy, IBM complies with the data privacy laws in all countries and territories in which we operate\u201d [163]. Lastly, a small proportion of programs (~3%) discuss harnessing AI to improve privacy practices: \u201ctechnologies for cyber security and privacy protection must be advanced\u201d [164].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.10 Private sector development<\/h2>\n\n\n\n<p>Private firms are the driving force behind the research, development, and commercialization of most AI innovations (see Table 21). This theme compiles the programs attempting to catalyze the development of the private sector, most of which (~79%) have government involvement. 
In fact, ~15% of programs describe, in general terms, the role of government in supporting the AI industry: \u201cuphold open market competition to prevent monopolization of AI\u201d [165]. Furthermore, we found programs that specifically backed efforts related to promoting the sector\u2019s competitiveness (~9%) and entrepreneurship via small and medium businesses (~11%):<\/p>\n\n\n\n<ul><li>&#8220;Sweden\u2019s greatest opportunities for competitiveness within AI lies within a mutual interaction between innovative AI application in business and innovative organization of society\u201d [166]; and,<\/li><li>&#8220;Assist SMEs to develop AI applications through AI Pilot projects, data platforms, test fields and regulatory co-creation processes\u201d [167].<\/li><\/ul>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"792\" height=\"468\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-22.png\" alt=\"\" class=\"wp-image-1508\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-22.png 792w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-22-300x177.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-22-768x454.png 768w\" sizes=\"(max-width: 792px) 100vw, 792px\" \/><\/figure><\/div>\n\n\n\n<p>Non-government parties can also act to improve the conditions and progress of the AI sector. In ~6% of programs, firms created mechanisms such as internal governance structures, performance indicators, or strategies that recognize the potential of AI. Meanwhile, ~1% of the sample discusses attempts by private and non-government entities to align themselves with corporate social responsibility goals (e.g. 
creating sustainable development goals).<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.11 Role of government\/governance<\/h2>\n\n\n\n<p>This theme contains text on governance efforts related to the general management of AI (see Table 22). Without specifying a particular sector, a quarter of programs reference public entities as key promoters or arbiters of AI, for example: \u201cavoid excessive legal constraints on artificial intelligence research\u201d [168]. Organizations outside of government, mainly private sector and non-profits, also comment on their role in working with and supervising the technology (~13%): \u201cwe need both governance and technical solutions for the responsible development and use of AI\u201d [169].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"866\" height=\"318\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-23.png\" alt=\"\" class=\"wp-image-1509\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-23.png 866w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-23-300x110.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-23-768x282.png 768w\" sizes=\"(max-width: 866px) 100vw, 866px\" \/><\/figure><\/div>\n\n\n\n<p>Any text that highlights public-private partnerships, creation of alliances, or participation in multilateral fora related to AI systems was classified in the cooperation between parties to govern AI sub-theme (~21%): \u201cwe encourage states to promote the worldwide application of the eleven guiding principles as affirmed by the GGE and as attached to this declaration and to work on their further elaboration and expansion\u201d [170].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.12 Safety<\/h2>\n\n\n\n<p>One of the most important debates regarding AI systems relates to their ability to cause 
bodily harm and how to minimize it (see Table 23). Whether harm occurs purposefully, as with an autonomous weapon, or as an unplanned event in the form of an accident, this theme delves into how programs contend with safety issues.<\/p>\n\n\n\n<p>The first part of this theme relates to the overarching safety of AI. In this sense, around 19% of programs include normative statements on the need for the technology to be safe and to avoid or minimize physical harm to people: \u201cthere is a need for a public discussion about the safety society expects from automated cars\u201d [171]. This is followed by a discussion of the mechanisms that ought to be implemented to ensure the technology\u2019s safety (~15%), including instituting procedures or processes, as well as standards and regulations: \u201call the stakeholders including industry, government agencies and civil society should deliberate to evolve guidelines for safety features for the applications in various domains\u201d [172].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"454\" height=\"462\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-24.png\" alt=\"\" class=\"wp-image-1510\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-24.png 454w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-24-295x300.png 295w\" sizes=\"(max-width: 454px) 100vw, 454px\" \/><\/figure><\/div>\n\n\n\n<p>The second part of the theme focuses on the weaponization of AI. 
Discussion of the military uses of the technology and the imposition of restrictions on autonomous weapon systems both appear in about 6% of programs:<\/p>\n\n\n\n<ul><li>&#8220;Considering the increasing proliferation of autonomous systems, including among adversaries, the RNLA should continue to experiment with systems that may enhance its portfolio\u201d [173]; and,<\/li><li>&#8220;We deny that AI should be employed for safety and security applications in ways that seek to dehumanize, depersonalize, or harm our fellow human beings\u201d [174].<\/li><\/ul>\n\n\n\n<p>The third and last part deliberates on AI as an information-gathering technology at the national-security level (~3%) and at the local level through law enforcement (~3%):<\/p>\n\n\n\n<ul><li>&#8220;Understanding the need to protect privacy and national security, AI systems should be deployed in the most transparent manner possible\u201d [175]; and,<\/li><li>&#8220;Law enforcement needs for AI and robotics should be identified, structured, categorized and shared to facilitate development of future projects\u201d [176].<\/li><\/ul>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.13 Security\/reliability<\/h2>\n\n\n\n<p>This theme is divided into two areas: protecting the integrity of AI systems (security) and ensuring their optimal operation (reliability) (see Table 24). On the security side, it encompasses risks to system integrity and the mechanisms to prevent adversarial cyber-attacks (~26%): \u201cmanufacturers providing vehicles and other organisations supplying parts for testing will need to ensure that all prototype automated controllers and other vehicle systems have appropriate levels of security built into them to manage any risk of unauthorised access\u201d [177]. 
Within the context of security, our team added a sub-theme that targets text discussing the protection of data from third parties (~14%): \u201cthe development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data\u201d [178]. The last part of security covers text discussing the use of AI to thwart cyber-attacks (~5%): \u201cby using different algorithms to parse and analyze data, machine learning empowers AI to become capable of learning and detecting patterns that would help in identifying and preventing malicious acts within the cybersecurity space\u201d [140].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"790\" height=\"520\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-25.png\" alt=\"\" class=\"wp-image-1511\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-25.png 790w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-25-300x197.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-25-768x506.png 768w\" sizes=\"(max-width: 790px) 100vw, 790px\" \/><\/figure><\/div>\n\n\n\n<p>The second section of this theme concerns reliability. About 18% of programs include normative statements on the reliability, interoperability, or trustworthiness of AI systems: \u201cutilize emerging frameworks that will help ensure AI technologies are safe and reliable\u201d [179]. In case of a system outage, ~7% of programs highlight the need for procedures to offset the failure of the technology: \u201corganizations should ensure that reliable contingencies are in place for when AI systems fail, or to provide services to those unable to access these systems\u201d [180]. 
The last two sub-themes labeled text describing factors that affect data quality (~9%) and mechanisms to confirm the functionality of an AI system (~15%):<\/p>\n\n\n\n<ul><li>&#8220;Users and data providers should pay attention to the quality of data used for learning or other methods of AI systems\u201d [181]; and,<\/li><li>&#8220;Solutions should be rigorously tested for vulnerabilities and must be verified safe and protected from security threats\u201d [182].<\/li><\/ul>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.14 Transparency and explainability<\/h2>\n\n\n\n<p>This theme focuses on conveying information about AI systems to stakeholders in a manner that is understandable and clear (see Table 25). Two sub-themes labeled general references to transparency (~43%) and explainability (~24%) throughout programs, the former being the most popular sub-theme of the database.<\/p>\n\n\n\n<p>A group of sub-themes deals with the information relationship between AI systems and individuals. For instance, ~6% of programs suggest that individuals should be informed about any interaction with AI: \u201cindividuals should always be aware when they are interacting with an AI system rather than a human\u201d [183]. 
Another sub-theme focuses on individuals subjected to consequential decisions by this technology and holds that they ought to be informed of those decisions and receive an explanation (~12%): \u201cdata subjects\u2026 have a right to obtain information on the reasoning underlying AI data processing operations applied to them\u201d [184].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"908\" height=\"516\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-26.png\" alt=\"\" class=\"wp-image-1512\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-26.png 908w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-26-300x170.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-26-768x436.png 768w\" sizes=\"(max-width: 908px) 100vw, 908px\" \/><\/figure><\/div>\n\n\n\n<p>To counter information asymmetry, one of the sub-themes highlights efforts to increase public awareness of AI systems or, more generally, to create open lines of communication amongst stakeholders (~14%): \u201claw enforcement should endeavor to completely engage in public dialogue regarding purpose-driven facial recognition use\u201d [185]. A complementary label is applied to efforts to share AI-relevant databases amongst institutions (~16%): \u201cdevelop shared public datasets and environments for AI training and testing\u201d [186]. 
The last sub-theme in this section indicates where the data used by AI systems originated or how it is used in the training of systems (~12%): \u201cidentification of the type of biometric that is captured\/stored and its relevance to the purpose for which it is being captured\/store\u201d [162].<\/p>\n\n\n\n<h2 class=\"is-style-section-title\">2.7.15 Transportation\/urban planning<\/h2>\n\n\n\n<p>This theme guides readers through programs interested in AI applications related to transportation (land and air) and their interaction with the urban environment. Starting with the user, programs discuss how an individual controls and communicates with AI transportation systems (2%): \u201cwhen the vehicle is driven by vehicle systems that do not require the driver to perform the driving task, the driver can engage in activities other than driving\u201d [187]. The theme then scales up to one application of this technology, aerial vehicles (~1%): \u201cdevelop standards and guidelines for the safety, performance, and interoperability of fully autonomous flights\u201d [188].<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full\"><img loading=\"lazy\" width=\"790\" height=\"522\" src=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-27.png\" alt=\"\" class=\"wp-image-1513\" srcset=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-27.png 790w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-27-300x198.png 300w, https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2021\/10\/image-27-768x507.png 768w\" sizes=\"(max-width: 790px) 100vw, 790px\" \/><\/figure><\/div>\n\n\n\n<p>The next set of sub-themes focuses on the physical and non-physical support systems for AI-based transportation. 
Many programs discuss the infrastructure requirements needed for these applications to operate (~4%): \u201cAI industry to work with telecommunications providers on specific needs for AI-supportive telecommunications infrastructure\u201d [189]. Others center on the array of rules, guidelines, and regulations meant to govern their utilization (~10%): \u201cthis document establishes minimum functionality requirements that the driver can expect of the system, such as the detection of suitable parking spaces\u201d [190]. Meanwhile, there are a number of proposals for managing traffic (~6%): \u201cwe can make mobility safer assisting human abilities and greener through platooning heavy goods vehicles to lower emissions and promoting public transport\u201d [191]. Finally, there is an urban planning efficiency sub-theme dealing with sustainability efforts and resource management related to AI, but unrelated to traffic (3%): \u201cAI-enabled solutions in the mobility and transportation sectors could go a long way in making cities more sustainable\u201d [159].<\/p>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column\">\n<div id=\"table-of-contents-block_612fde6108116\" class=\"table-of-contents\">\n    <div class=\"toc-container\">\n        <h2 class=\"is-style-sidebar-title\">Table of Contents<\/h2>\n        <ul>\n            <li class=\"page_item page-item-129\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/executive-summary\/\">Summary<\/a><\/li>\n<li class=\"page_item page-item-152\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/introduction\/\">Introduction<\/a><\/li>\n<li class=\"page_item page-item-153 page_item_has_children\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/1-methodology\/\">1. 
Methodology<\/a>\n<ul class='children'>\n\t<li class=\"page_item page-item-154\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/1-methodology\/1-1-identification\/\">1.1 Identification<\/a><\/li>\n\t<li class=\"page_item page-item-155\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/1-methodology\/1-2-screening-and-classification\/\">1.2 Screening and classification<\/a><\/li>\n\t<li class=\"page_item page-item-156\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/1-methodology\/1-3-limitations\/\">1.3 Limitations<\/a><\/li>\n<\/ul>\n<\/li>\n<li class=\"page_item page-item-160 page_item_has_children\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/\">2. Results<\/a>\n<ul class='children'>\n\t<li class=\"page_item page-item-161\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-1-year-of-publication\/\">2.1 Year of publication<\/a><\/li>\n\t<li class=\"page_item page-item-162\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-2-geography\/\">2.2 Geography<\/a><\/li>\n\t<li class=\"page_item page-item-165\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-3-influence\/\">2.3 Influence<\/a><\/li>\n\t<li class=\"page_item page-item-167\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-4-type-of-program\/\">2.4 Type of program<\/a><\/li>\n\t<li class=\"page_item page-item-168\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-5-role-of-stakeholders\/\">2.5 Role of stakeholders<\/a><\/li>\n\t<li class=\"page_item page-item-169\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-6-enforcement\/\">2.6 Enforcement<\/a><\/li>\n\t<li class=\"page_item page-item-170\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/2-results\/2-7-themes\/\">2.7 Themes<\/a><\/li>\n<\/ul>\n<\/li>\n<li class=\"page_item page-item-171\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/conclusion\/\">Conclusion<\/a><\/li>\n<li class=\"page_item 
page-item-172\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/references\/\">References<\/a><\/li>\n<li class=\"page_item page-item-173\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/appendix-1-sources-of-information\/\">Appendix 1 \u2013 Sources of information<\/a><\/li>\n<li class=\"page_item page-item-174\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/appendix-2-keyword-search\/\">Appendix 2 \u2013 Keyword search<\/a><\/li>\n<li class=\"page_item page-item-1528\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/appendix-3\/\">Appendix 3 \u2013 Screening of soft law programs<\/a><\/li>\n<li class=\"page_item page-item-1532\"><a href=\"https:\/\/lsi.asulaw.org\/softlaw\/report\/appendix-4-codebook-for-database\/\">Appendix 4 \u2013 Codebook for database<\/a><\/li>\n        <\/ul>\n    <\/div>\n\n          <a href=\"https:\/\/lsi.asulaw.org\/softlaw\/wp-content\/uploads\/sites\/7\/2022\/08\/final-database-report-002-compressed.pdf\" class=\"is-style-icon-button\"><span class=\"fas fa-download\"><\/span> Download the Report<\/a>\n    \n<\/div><\/div>\n<\/div>\n","protected":false},"featured_media":0,"parent":160,"menu_order":0,"template":"","acf":[],"_links":{"self":[{"href":"https:\/\/lsi.asulaw.org\/softlaw\/wp-json\/wp\/v2\/report\/170"}],"collection":[{"href":"https:\/\/lsi.asulaw.org\/softlaw\/wp-json\/wp\/v2\/report"}],"about":[{"href":"https:\/\/lsi.asulaw.org\/softlaw\/wp-json\/wp\/v2\/types\/report"}],"up":[{"embeddable":true,"href":"https:\/\/lsi.asulaw.org\/softlaw\/wp-json\/wp\/v2\/report\/160"}],"wp:attachment":[{"href":"https:\/\/lsi.asulaw.org\/softlaw\/wp-json\/wp\/v2\/media?parent=170"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}