The United States and the European Union agreed on a series of “common principles” regarding artificial intelligence (AI), semiconductor chip shortages, and a range of investment and competition issues during the inaugural Trade and Technology Council (TTC) meeting in Pittsburgh on September 29.

The TTC is a new forum, launched by American and European leaders at the US-EU Summit in June of this year, designed to deepen economic ties, coordinate digital policy, and ensure that disputes are resolved swiftly and efficiently.

A Joint Statement issued at the conclusion of the meeting contains a specific set of principles regarding AI that point toward greater collaboration in this important area. The US and the EU committed to “develop and implement AI systems that are innovative and trustworthy and that respect universal human rights and shared democratic values, explore cooperation on AI technologies designed to enhance privacy protections, and undertake an economic study examining the impact of AI on the future of our workforces,” according to a White House Fact Sheet.

The US and the EU have at times been at odds over regulatory approaches to the rapid and disruptive technological changes that have transformed domestic and international commerce over the past decade. For example, the two sides have yet to come to terms on a shared charter of privacy rules, even as the EU’s General Data Protection Regulation (GDPR) has emerged as a model for privacy laws elsewhere.

The principles outlined at the Pittsburgh summit reflect a recognition by leaders on both sides of the Atlantic that the opportunities and challenges posed by AI and related technologies call urgently for a proactive, unified effort to adopt clear standards grounded in the mutual interests and values of two of the world’s richest and most technologically advanced regions.

Structure and priorities

The TTC established working groups to pursue ongoing cooperative efforts in the following ten areas:

  1. Technology standards
  2. Climate and clean tech
  3. Secure supply chains
  4. Information and communication technology and services (ICTS) security and competitiveness
  5. Data governance and technology platforms
  6. Misuse of technology threatening security and human rights
  7. Export controls
  8. Investment screening
  9. Promoting small- and medium-sized enterprise (SME) access to and use of digital tools
  10. Global trade challenges

Working Group 6, with responsibility for combating the misuse of technology to threaten security and human rights, has been tasked with collaborating on projects furthering the development of trustworthy AI, among a wide range of assignments.[1]

Acknowledging AI’s potentially transformative economic benefits for individuals, industries and societies, while seeking to guard against inherent risks in the misuse of this still emerging technology, US and EU officials “affirm their willingness and intention to develop and implement trustworthy AI and their commitment to a human-centered approach that reinforces shared democratic values and respects universal human rights, which they have already demonstrated by endorsing the OECD [Organisation for Economic Co-operation and Development] Recommendation on AI.”

Areas of cooperation and concern

In the joint statement, the two sides underlined that policy and regulatory measures should be based on, and proportionate to, the risks posed by the different uses of AI. “We are committed to working together to foster responsible stewardship of trustworthy AI that reflects our shared values and commitment to protecting the rights and dignity of all our citizens. We seek to provide scalable, research-based methods to advance trustworthy approaches to AI that serve all people in responsible, equitable, and beneficial ways.”

Specific areas of cooperation identified in the document include:

  • Translating common values into tangible action and cooperation for mutual benefit.
  • Responsible stewardship of trustworthy AI, implementation of the OECD Recommendation and development of a mutual understanding on the principles underlying trustworthy and responsible AI.
  • Ongoing discussion of measurement and evaluation tools and activities to assess the technical requirements for trustworthy AI, including accuracy and bias mitigation.
  • Collaboration to explore better use of machine learning and other AI techniques towards desirable impacts, including enhanced privacy protections.
  • Ongoing economic studies examining the impact of AI on the future of the nations’ workforces, with attention to outcomes in employment, wages, and the dispersion of labor market opportunities.
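
The “measurement and evaluation tools” referenced above can be as simple as statistical fairness metrics computed over a model’s decisions. As a purely illustrative sketch (the function name and data below are hypothetical and not drawn from any US or EU framework), one commonly discussed metric is the demographic parity difference – the gap in positive-outcome rates between two groups:

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between two groups.

    outcomes: list of 0/1 model decisions (1 = favorable outcome)
    groups:   list of group labels, one per decision (two distinct values)
    """
    labels = sorted(set(groups))
    rates = []
    for g in labels:
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return abs(rates[0] - rates[1])

# Hypothetical audit: loan approvals (1) across two applicant groups.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A value near zero suggests the two groups receive favorable outcomes at similar rates; real evaluation toolkits report many such metrics alongside accuracy, since no single number captures “bias.”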

“Social scoring” – surveillance and data-collation systems that allow governments or other entities to build unified records by which individuals or organizations can be tracked and rated for trustworthiness – is specifically opposed in the principles as an example of AI uses “that do not respect” democratic values and human rights.

The US and EU “have significant concerns that authoritarian governments are piloting social scoring systems with an aim to implement social control at scale. These systems pose threats to fundamental freedoms and the rule of law, including through silencing speech, punishing peaceful assembly and other expressive activities, and reinforcing arbitrary or unlawful surveillance systems,” according to the statement. This language appears aimed at the People’s Republic of China’s social credit system, which uses various forms of surveillance and AI to assign citizens a “score” based on their behavior; the system can be used to mete out punishments or rewards. Social scoring would be specifically prohibited under Title II of the draft EU Artificial Intelligence Regulation. The practice has been widely criticized in the US for infringing on privacy and serving as a tool for comprehensive surveillance and suppression of dissent, making it an area of easy agreement.

Divergent paths possibly converging

The US and the EU, together home to 780 million people, have not always been completely in sync in their regulatory approaches to AI. But the extensive economic ties and deep cultural affinities between the two blocs mean that each side pays close attention to what is happening on the other side of the Atlantic, a dynamic noted in the joint statement.

The EU has developed a risk-based regulatory framework for AI that defines high-risk uses of AI, which are to be subject to a number of requirements. The EU also supports a number of research, innovation and testing projects on trustworthy AI as part of its AI strategy.

On the US side, the National Institute of Standards and Technology (NIST, based within the Commerce Department) is developing an Artificial Intelligence Risk Management Framework (AI RMF) as part of the National AI Initiative. NIST is currently reviewing comments received through its Request for Information (RFI) and plans to hold a virtual workshop on the framework proposal October 19-21.

AI Bill of Rights initiative in the United States

A week after the Pittsburgh summit, the Biden Administration announced an AI Bill of Rights initiative that appears likely to advance these principles in the US. In a Wired op-ed, the Director and the Deputy Director of the White House Office of Science and Technology Policy (OSTP) announced that OSTP will be developing a “bill of rights” to protect against potentially harmful consequences of AI. They specifically noted that these rights might include:

your right to know when and how AI is influencing a decision that affects your civil rights and civil liberties; your freedom from being subjected to AI that hasn’t been carefully audited to ensure that it’s accurate, unbiased, and has been trained on sufficiently representative data sets; your freedom from pervasive or discriminatory surveillance and monitoring in your home, community, and workplace; and your right to meaningful recourse if the use of an algorithm harms you.[2]

They note that this bill of rights could be enforced through federal procurement standards – requiring federal contractors to use technologies that adhere to it – or through new laws and implementing regulations to fill gaps.

Several of these avenues are squarely within executive branch authority, so this effort appears likely to change US requirements for AI. The authors also note that states might choose to adopt similar practices. Indeed, a bill that failed in California this year, AB 13, would have required extensive reporting by AI vendors on the potential discriminatory risks of their technologies.

Cooperation on best practices – OECD’s recommendations

Approved by member countries in 2019, the aforementioned OECD Recommendation on AI sets out five complementary values-based principles for the responsible stewardship of trustworthy, human-centred AI:

  • AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
  • AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards – for example, enabling human intervention where necessary – to ensure a fair and just society.
  • There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
  • AI systems must function in a robust, secure and safe way throughout their life cycles and potential risks should be continually assessed and managed.
  • Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.

To achieve these goals, the OECD provided five recommendations to governments:

  • Facilitate public and private investment in research & development to spur innovation in trustworthy AI.
  • Foster accessible AI ecosystems with digital infrastructure and technologies and mechanisms to share data and knowledge.
  • Ensure a policy environment that will open the way to deployment of trustworthy AI systems.
  • Empower people with the skills for AI and support workers for a fair transition.
  • Co-operate across borders and sectors to progress on responsible stewardship of trustworthy AI.

Though not legally binding, the OECD recommendations are highly influential in shaping policies and regulatory approaches in the member nations.

Finally, at the level of good practices, the US and EU are founding members of the Global Partnership on Artificial Intelligence, which brings together a coalition of like-minded partners seeking to support and guide the responsible development of AI that is grounded in human rights, inclusion, diversity, innovation, economic growth, and societal benefit.