Welcome to this week's issue of AI: The Washington Report, a joint undertaking of Mintz and its government affairs affiliate, ML Strategies.

The accelerating advances in artificial intelligence (“AI”) and the practical, legal, and policy issues AI creates have dramatically increased the federal government’s interest in AI and its implications. In these weekly reports, we hope to keep our clients and friends abreast of legislative, executive, and regulatory activity in Washington.

This issue covers the Federal Trade Commission’s (“FTC” or “Commission”) preauthorization of compulsory process in investigations relating to AI and a new AI bill supported by a bipartisan group of Commerce Committee senators. Our key takeaways are:

  1. The FTC’s announcement fits into a broader push by the Biden administration and the Commission itself to establish the FTC as a leading AI regulator.
  2. Separately, on November 15, 2023, a bipartisan group of senators released the Artificial Intelligence (AI) Research, Innovation, and Accountability Act (“AIRIA Act”). The AIRIA Act would seek to encourage AI innovation and to regulate harmful uses of AI. While we do not expect the imminent passage of any AI legislation, the AIRIA Act is significant in that it represents a consensus among a bipartisan group of senators.

FTC Omnibus Resolution on CIDs in Investigations Related to AI

On November 21, 2023, the FTC announced that it had approved a resolution that would “streamline FTC staff’s ability to issue civil investigative demands (CIDs), which are a form of compulsory process similar to a subpoena,” by preapproving the use of compulsory process “in investigations relating to AI, while retaining the Commission’s authority to determine when CIDs are issued.” The resolution will be in effect for 10 years and was approved unanimously by all three currently serving commissioners. The Commission, under Chair Lina Khan, has used this technique in other industries.

This action continues a growing drumbeat encouraging the FTC to use its existing authority in the AI space. In public statements and business guidance posts, the FTC has asserted that the increasing integration of AI into products and services will bring benefits to consumers but could also inflict harm on them. For instance, the FTC has asserted that while AI could allow disabled artists to enter the entertainment industry, companies could also use AI to deceive or discriminate.

Given the perceived risk posed by AI to consumer welfare, four leading consumer protection agencies released a joint statement in April 2023 vowing to “vigorously use our collective authorities to protect individuals’ rights, regardless of whether legal violations occur through traditional means or advanced technologies.”

In October 2023, the Biden administration provided explicit encouragement to the FTC to further its regulatory activities in the realm of AI. The Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence encourages the FTC to consider “whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act…to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.”

Significantly, the November 2023 omnibus resolution gives the Commission a springboard to intensify its investigations of AI companies for potentially illegal conduct or unfair methods of competition, moving the FTC closer to its stated goal of becoming a leading AI regulator. The FTC has already utilized CIDs in connection with an investigation of ChatGPT developer OpenAI.

Bipartisan Group of Commerce Committee Senators Introduce Wide-Ranging AI Bill

On November 15, 2023, Senators Amy Klobuchar (D-MN), John Thune (R-SD), Roger Wicker (R-MS), John Hickenlooper (D-CO), Shelley Moore Capito (R-WV), and Ben Ray Luján (D-NM), all members of the Senate Committee on Commerce, Science, and Transportation, introduced the Artificial Intelligence (AI) Research, Innovation, and Accountability Act (“AIRIA Act”).

The AIRIA Act seeks to both regulate certain harmful uses of AI and encourage AI research and innovation.

Regulating Harmful Uses of AI

To address some of the harmful uses of AI technologies, the AIRIA Act would establish certain transparency requirements and create new AI-related standards by:

  • Mandating that internet platforms of a certain size utilizing generative AI systems provide “clear and conspicuous” notice to users.
  • Establishing that entities deploying certain high-impact artificial intelligence systems must submit annual reports describing “the design and safety plans for the artificial intelligence system” to the Secretary of Commerce.
  • Directing the National Institute of Standards and Technology (“NIST”) to create recommendations to federal agencies for risk management of high-impact artificial intelligence systems.
  • Establishing an Artificial Intelligence Certification Advisory Committee that would create standards for the testing, evaluation, validation, and verification of certain “critical-impact artificial intelligence systems,” including systems that collect biometric data, manage critical infrastructure, or are involved in criminal justice.

Encouraging AI Innovation and Research

The AIRIA Act would also support AI research and innovation by:

  • Directing the Commerce Department to carry out research to facilitate the development of means to reliably detect AI-generated content.
  • Directing NIST to create standards for the detection of anomalous behavior on the part of AI systems.
  • Directing the Comptroller General to conduct a study on barriers to the usage of AI in government.
  • Establishing a working group “relating to responsible education efforts for artificial intelligence systems.” This working group would identify education programs “that may be voluntarily employed by industry” to inform consumers about artificial intelligence systems.