Artificial intelligence research and adoption continue to expand across geographical regions and industries, including healthcare, financial services, mobility, energy, transportation and logistics. AI also holds the promise of promoting the public good, including helping to achieve the UN Sustainable Development Goals by 2030. Realising AI's full potential depends, among other things, on an appropriate policy framework that enables the technology to flourish while providing safeguards against discriminatory bias and other unacceptable harms. It also depends on sufficient access to reliable data. As this Market Intelligence explains, stakeholders around the world are grappling with how to establish such policy frameworks and how to increase access to public data.
Artificial intelligence policy frameworks
Many jurisdictions have aligned on trustworthy AI principles, but work continues on translating those principles into AI policies and practices. For example, 40 countries have adopted the Organisation for Economic Co-operation and Development's (OECD) five AI principles (the OECD AI Principles). These trustworthy AI principles include:
- inclusive growth, sustainable development and well-being;
- human-centred values and fairness;
- transparency and explainability;
- robustness, security and safety throughout the AI life cycle; and
- accountability.
In June 2019, the G20 embraced the OECD AI Principles.
With respect to translating the AI principles into policies and practices, several multilateral organisations, in addition to the OECD, are facilitating these efforts in the hope of achieving a degree of global harmonisation that encourages cross-border technology flows and adoption. For example, the G7 countries have launched the Global Partnership on AI (GPAI) to help shape AI policy, and the UN is engaged in similar activities. In addition, several standards organisations, including the International Organization for Standardization and the International Telecommunication Union, are working to create technical standards for AI.
In addition to engaging with multilateral organisations, several jurisdictions have started crafting their own AI policy frameworks. No consensus has yet emerged on the appropriate framework, even among jurisdictions that have adopted the OECD AI Principles and favour a risk-based, proportionate approach to AI regulation. For example, in February 2020, the European Commission published a white paper (the European Commission AI White Paper) proposing a risk-based and proportionate approach that would subject all high-risk AI to pre-market conformity assessments and other requirements. The United States, pursuant to its national AI strategy, issued draft guidance to federal agencies that likewise sets forth a risk-based and proportionate approach to regulating private sector use of AI. In contrast to the European Commission AI White Paper, however, the draft US guidance embraces a more deregulatory approach: it directs agencies to consider new regulation 'only after they have reached the decision . . . that federal regulation is necessary' and encourages them to consider non-regulatory alternatives.
Approaches have also diverged for particular AI applications. For instance, US lawmakers have expressed bipartisan interest in regulating facial recognition technology (FRT). The US Federal Trade Commission has indicated that FRT is an area of particular enforcement interest, and the US National Institute of Standards and Technology has launched efforts to develop standards for these technologies, particularly concerning bias. Some private entities have announced that they will end or pause sales of FRT products to law enforcement. In Europe, data protection authorities have issued FRT-specific guidance restricting law enforcement use of FRT in public places, and the European Commission AI White Paper proposes to treat FRT as high-risk AI subject to greater regulation.
In contrast, China has adopted FRT with far less reluctance than Europe and the United States. The Chinese government has attached significant importance to researching and developing facial recognition and biometric data technology to reap their efficiency benefits. For example, Chinese health officials widely used facial recognition as a contactless means of verifying people's identities to help contain the covid-19 pandemic. China's developing regulations governing FRT generally encourage greater use and integration of the technology, with less focus on individual rights.
Data policies
Given the importance of data to AI and other emerging technologies, several jurisdictions are taking steps to expand the amount of available data. The US Congress enacted the Open Government Data Act in 2019, which aims to make certain government data available for public and private sector use. The Trump administration has also emphasised the importance of making more government data available in connection with its national AI strategy and through its Federal Data Strategy.
Like the United States, the European Union also seeks to foster data sharing. The European Commission’s Communication on a European Strategy for Data proposes:
- a cross-sectoral governance framework for data access and use;
- investments to strengthen the European Union's capabilities and infrastructures for hosting, processing and using data; and
- the development of common European data spaces in key sectors.
Many Middle Eastern countries have also expressed interest in data sharing. For example, the United Arab Emirates, an emerging leader in AI governance in the region, has published a National Programme for Artificial Intelligence. The programme outlines the UAE's intention to engage in data sharing and to establish a collaborative partnership with India to increase investment in AI start-ups and promote research and development of AI technologies and services.
China also has indicated support for collaboration and data sharing to achieve AI’s full potential. China’s 2017 AI Development Plan encourages collaborative efforts in the development of AI and AI infrastructure and proposes open-source platforms that encourage data sharing of algorithms and other toolsets to enhance innovation in AI.
Conclusion
As with cellular phones and the internet decades ago, the development of AI technology has outpaced the development of the law in several key respects. However, as with past technological revolutions, policymakers are taking quick strides to catch up.
At present, there is significant consensus around AI principles, but translating those principles into policies remains a work in progress, particularly with respect to AI trustworthiness. While approaches currently vary among jurisdictions, many of them share common goals. Multilateral organisations, including the OECD, the UN Educational, Scientific and Cultural Organization and the GPAI, continue their efforts to forge greater harmonisation, and AI standards organisations likewise continue their work. Further developments are likely in 2021. AI developers and deployers will need to stay abreast of these ongoing developments so they can continue to address them in their operations.
