Michael Bahar, Rachel M Reid, Mary Jane Wilson-Bilik and Ronald Zdrojeski, Eversheds Sutherland

This is an extract from the 2024 Edition of GIR's The Americas Investigations Review.

This is an Insight article, written by a selected partner as part of GIR's co-published content.

In summary

In the absence of clear AI-specific legislation or regulation in the United States, companies should neither heedlessly charge ahead nor timidly wait for greater clarity. Rather, as regulators use existing authorities and private litigants use old laws to bring suits centred on the newest technologies, companies should strongly consider establishing a written internal AI self-governance framework. This framework would institutionalise a focus on accountability, accuracy, fairness, security and other principles while developing or integrating AI tools, and it should include clear contracting guidelines. Through effective governance and responsible practices, companies can leverage AI’s potential while greatly mitigating class action and regulatory risks.

Discussion points

  • State of AI legislation in the United States and abroad
  • Governance and mitigating regulatory and litigation risk
  • Pillars of an AI self-governance programme

Referenced in this article

  • Electronic Communications Privacy Act
  • Computer Fraud and Abuse Act
  • California Invasion of Privacy Act
  • California Unfair Competition Law
  • Illinois Biometric Information Privacy Act
  • Illinois Consumer Fraud and Deceptive Business Practices Act
  • SAFE Innovation Framework for AI Policy
  • EU Artificial Intelligence Act

As companies of all sizes across industries begin to accept, if not embrace, the new world of artificial intelligence (AI), in-house legal departments and operational risk management personnel should be aware of the potential risks associated with using this awe-inspiring technology. Indeed, US regulators are already strongly signalling their intention to regulate first and allow space for innovation later. And government regulators are not alone; private litigants have begun to file class action lawsuits against companies deploying AI technologies.

The best way to insulate against these risks is neither to charge ahead in the belief that the absence of legislation means the absence of boundaries, nor to timidly wait for definitive guidance and limits. Rather, it is through an effective self-governance framework – akin to a Mayflower Compact for AI. Acting now to establish an enterprise-wide self-governance framework will put companies on course to successfully navigate and harness this exciting technology, while avoiding its shoals.

From ebullience to trepidation

The sight of new land is exciting, but as the shoreline comes into view, so do the potential challenges.

In March 2023, New York Times columnist Thomas Friedman wrote about AI in terms of a ‘Promethean Moment’, akin to when fire came down from Mount Olympus to humankind on Earth with its ‘awe-inspiring’ potential to ‘solve seemingly impossible problems’. Two months later, however, Friedman began to adopt a different mythological analogy, referring to generative AI as ‘Pandora’s Box’, Zeus’ punishment to mortals and to the god who stole fire for them. If we approach generative AI just as ‘heedlessly’ as we did Web 2.0 technologies, Friedman wrote, ‘Oh, baby, we are going to break things faster, harder and deeper than anyone can imagine.’

Two days later, President Biden issued a statement that similarly emphasised AI’s risks over its rewards and the government’s determination to regulate at once: ‘AI is one of the most powerful technologies of our time, but in order to seize the opportunities it presents, we must first mitigate its risks.’

Key US federal regulators also made clear their commitment to investigate and regulate AI in a joint statement on 25 April 2023, which concluded with their ‘pledge’ to ‘vigorously use our collective authorities to protect individuals’ rights regardless of whether legal violations occur through traditional means or advanced technologies’.

Private litigants are gearing up as well, with landmark class actions filed in the last week of June 2023 alleging privacy violations, intellectual property infringement, illegal wiretapping, unfair competition and a slew of other claims against OpenAI, the maker of ChatGPT, and other defendants. At the core of the plaintiffs’ arguments is the alleged ‘rush’ to market ‘without implementing proper safeguards or controls to ensure that they would not produce or support harmful or malicious content and conduct that could further violate the law, infringe rights, and endanger lives’. The existing laws alleged to have been violated include the Electronic Communications Privacy Act,[1] the Computer Fraud and Abuse Act,[2] the California Invasion of Privacy Act,[3] the California Unfair Competition Law,[4] the Illinois Biometric Information Privacy Act[5] and the Illinois Consumer Fraud and Deceptive Business Practices Act.[6]

Importantly, US federal legislation to set AI guardrails is now on the horizon. On 22 June 2023, the powerful Senate Majority Leader, Charles Schumer, proposed a new bipartisan framework, the SAFE Innovation Framework for AI Policy, to encourage AI innovation while advancing ‘security, accountability, foundational values and explainability’. In autumn 2023, Schumer will convene a series of AI Insight Forums to solicit input on legislative proposals from industry, consumers and researchers, addressing topics such as privacy, intellectual property, workforce and national security, as well as the importance of AI innovation. Schumer has already formed a bipartisan leadership group and instructed committee chairs to identify areas where they can work on AI legislation in a bipartisan fashion. Schumer’s goal is to pass legislation in a matter of months, not years.

In addition, governing bodies, regulatory authorities, lawmakers and litigants across the United States are keeping a close eye on legal developments abroad. The EU Artificial Intelligence Act, which is poised to become the world’s first comprehensive AI law, would regulate the application of AI using a risk-based approach, similar to cybersecurity regulation. AI systems applied to activities that pose minimal risk (to individuals, communities, the environment, etc) would essentially be unregulated, while those AI systems applied to limited or high-risk activities would be subject to increasing levels of regulation. The use of AI systems in ways that pose ‘unacceptable risk’ – those systems considered to be a threat to people – would be banned (with limited exceptions).

Outside of the EU, the United Kingdom has published a white paper titled ‘A pro-innovation approach to AI regulation’. The proposed UK approach relies on collaboration between the government, regulators and business, and lays out a flexible framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.

While the legislative framework for AI governance in other parts of the globe remains in various stages of development, there is a growing degree of regulatory convergence around the underlying principles of AI, especially the OECD’s 2019 AI Principles, and the need for companies to develop internal controls to identify and mitigate AI’s risks. Regulators – and many companies – also recognise that existing authorities, including, if not especially, privacy laws, already provide strong grounds for enforcement activity.

We discuss the existing authorities in the United States further below, but in short, whether companies are on the leading or following edge of this revolutionary technology, sound self-governance and risk management processes will be key to maximising success and to mitigating the impending enforcement and litigation risks.

Governance and mitigating regulatory and litigation risk

Legal and risk management departments witnessing this rapidly evolving risk profile should take steps now to formalise their approach to self-governance through the adoption of a comprehensive AI programme tailored to their company and its unique risk profile. Adopting detailed principles and policies for AI development, use and deployment, instituting internal guardrails and oversight processes, and providing robust training, will better position companies to comply with regulatory developments and to mitigate new legal and operational sources of risk.

There is not yet any AI-specific legislation in the United States, but regulators are poised to strike, increasing the urgency of self-governance. For example, the Federal Trade Commission (FTC) issued a report evaluating the use and impact of AI in combatting online harms identified by Congress. The report outlined significant concerns that AI tools can be inaccurate, biased and discriminatory by design, and can incentivise reliance on increasingly invasive forms of commercial surveillance.

The FTC also warned market participants that violations of the FTC Act could result from the use of automated tools that have discriminatory impacts, the assertion of claims about AI that are not substantiated, or the deployment of AI technology before appropriate steps are taken to assess and mitigate risks. Finally, the FTC has required firms to destroy algorithms or other work products that were trained on data that should not have been collected.

In addition, the Consumer Financial Protection Bureau, which regulates financial institutions, published a circular confirming that federal consumer financial laws and adverse action requirements apply regardless of the technology used. The circular also made clear that the fact that the technology used to make a credit decision is too complex, opaque or new is not a defence for violating these laws. Notably, the circular includes the following statement:

Creditors who use complex algorithms, including artificial intelligence or machine learning, in any aspect of their credit decisions must still provide a notice that discloses the specific principal reasons for taking an adverse action. Whether a creditor is using a sophisticated machine learning algorithm or more conventional methods to evaluate an application, the legal requirement is the same: Creditors must be able to provide applicants against whom adverse action is taken with an accurate statement of reasons.[7] The statement of reasons ‘must be specific and indicate the principal reason(s) for the adverse action.’[8]
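
To make the adverse action requirement concrete, consider how specific principal reasons might be derived from a model’s output. The sketch below is illustrative only: the feature names, contribution values and reason descriptions are hypothetical, and it is not a methodology prescribed by the CFPB. It simply shows one common approach – ranking the per-applicant feature contributions that pushed a credit decision towards denial and translating the top drivers into plain-language reasons for the notice.

# Hypothetical mapping from model features to plain-language reason statements.
REASON_DESCRIPTIONS = {
    'credit_utilisation': 'Proportion of revolving balances to credit limits is too high',
    'payment_history': 'Number of recent delinquent payments',
    'credit_age': 'Length of credit history is too short',
    'recent_inquiries': 'Number of recent credit inquiries',
}

def principal_reasons(feature_contributions: dict[str, float], top_n: int = 4) -> list[str]:
    """Return plain-language reasons for the features that pushed the
    decision most strongly towards denial (positive contribution = adverse)."""
    adverse = [(name, value) for name, value in feature_contributions.items() if value > 0]
    adverse.sort(key=lambda item: item[1], reverse=True)
    return [REASON_DESCRIPTIONS.get(name, name) for name, _ in adverse[:top_n]]

# Hypothetical per-applicant attribution scores from the model's explanation method.
contributions = {
    'credit_utilisation': 0.42,
    'payment_history': 0.31,
    'credit_age': 0.12,
    'recent_inquiries': 0.05,
    'income': -0.20,   # favourable factor, not cited as a reason
}
for reason in principal_reasons(contributions):
    print(reason)

Whatever approach is used, the point of the CFPB circular is that the creditor, not the vendor or the model, remains responsible for the accuracy and specificity of the reasons given.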

The Department of Justice (DOJ) filed suit against a social media company, alleging that, through its design and its use of AI for ad-targeting categories based on user demographics or other characteristics, the company had ‘intentionally discriminated on the basis of race, color, religion, sex, disability, familial status, and national origin’. Significantly for the DOJ, the company’s apparent intent was at issue, which means that regulators will look to a company’s external and internal statements to assess whether its use of AI is inappropriate. The case was eventually settled, and the settlement included requirements to stop using certain AI tools and to engage an independent third party to assess new AI systems for bias and discrimination.

In a separate case, the Department of Justice’s Civil Rights Division filed a statement of interest in federal court explaining that the Fair Housing Act applies to algorithm-based tenant screening services. Commenting on that filing, the Assistant Attorney General of the Justice Department’s Civil Rights Division, Kristen Clarke, stated that ‘housing providers and tenant screening companies that use algorithms and data to screen tenants are not absolved from liability when their practices disproportionately deny people of color access to fair housing opportunities.’ Clarke went on to state that ‘this filing demonstrates the Justice Department’s commitment to ensuring that the Fair Housing Act is appropriately applied in cases involving algorithms and tenant screening software.’

The Equal Employment Opportunity Commission (EEOC), in addition to its enforcement activities on employment discrimination related to AI and automated systems, issued a technical assistance document explaining how the Americans with Disabilities Act (ADA) applies to the use of software, algorithms and AI to make employment-related decisions about job applicants and employees. In this document, the EEOC explained that the most common ways an employer’s use of algorithmic decision-making tools could violate the ADA are by:

  • failing to ‘provide a “reasonable accommodation” that is necessary for a job applicant or employee to be rated fairly and accurately by the algorithm’;
  • relying on ‘an algorithmic decision-making tool that intentionally or unintentionally “screens out” an individual with a disability, even though that individual is able to do the job with a reasonable accommodation’; and
  • adopting ‘an algorithmic decision-making tool for use with its job applicants or employees that violates ADA’s restrictions on disability-related inquiries and medical examinations’.

In New York City, Local Law 144 now threatens civil penalties of between US$500 and US$1,500 per day for employers located in New York City, or with candidates or employees in the city, that use automated employment decision tools to evaluate job candidates or employees for employment decisions if those tools have not been audited for bias or if notice and website disclosure requirements are not met. On 6 April 2023, the final rules promulgated pursuant to Local Law 144 were adopted, with enforcement beginning on 5 July 2023.
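
Such a bias audit typically involves comparing how an automated tool selects candidates across demographic categories. The sketch below is purely illustrative – the category labels, counts and output are hypothetical, and the final rules, not this example, define the required methodology – but it shows the core arithmetic of a selection-rate and impact-ratio calculation.

def selection_rates(results: dict[str, tuple[int, int]]) -> dict[str, float]:
    """results maps each demographic category to (selected, total_candidates)."""
    return {category: selected / total
            for category, (selected, total) in results.items() if total > 0}

def impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each category's selection rate divided by the highest selection rate."""
    highest = max(rates.values())
    return {category: rate / highest for category, rate in rates.items()}

# Hypothetical audit data: (candidates selected, total candidates) per category.
hypothetical_results = {
    'category_a': (120, 400),   # 30.0% selected
    'category_b': (45, 200),    # 22.5% selected
    'category_c': (18, 100),    # 18.0% selected
}
rates = selection_rates(hypothetical_results)
for category, ratio in impact_ratios(rates).items():
    print(f'{category}: selection rate {rates[category]:.1%}, impact ratio {ratio:.2f}')

A markedly low impact ratio for any category is the kind of result that should trigger closer scrutiny of the tool before it is relied on for employment decisions.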

Self-governance helps avoid these risks by requiring active and sustained systems and processes to monitor, manage and mitigate various sources of risk throughout the AI lifecycle. Importantly, while ethical pronouncements and principled statements of values are an important first step, standing up accountability mechanisms and structuring hierarchies of decision-making, review and oversight are pivotal to cultivating a true culture of compliance and risk management around the use of AI.

Whether currently using machine learning algorithms or looking to implement natural language processing or another generative AI system, companies should strive to align their AI self-governance programme with their current and possible future uses of both predictive and generative AI technologies.

An AI self-governance programme should include, at a minimum, four primary pillars: governance; assessment and monitoring; privacy and data security; and third-party contracts.

Pillar 1: AI governance

Given the complexity of AI systems and their associated risks, governance should start at the highest level within an organisation, such as the board of directors or a committee thereof. The board or responsible committee should designate the senior leader at the company with accountability for the AI programme, and the senior leader should then assign responsibilities to the appropriate functional leaders. A set of robust policies and procedures should be drafted and implemented, including the company’s processes for designing and developing AI systems, conducting AI risk assessments, cataloguing AI systems, evaluating data sets used for AI and contracting for third-party AI systems. AI governance documents should also clearly set forth decision-making authority for AI systems, including which decisions are reserved for senior management and the board of directors.
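
Cataloguing AI systems is often the most concrete of these steps. As a minimal sketch, assuming a simple internal inventory, a catalogue entry might capture the accountable executive, the functional owner, the risk tier, the data sets used and whether decisions about the system are reserved for senior management or the board; the fields and values below are illustrative assumptions, not a prescribed standard.

from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    name: str
    business_purpose: str
    accountable_executive: str              # senior leader accountable for the system
    functional_owner: str                   # function responsible day to day
    risk_tier: str                          # e.g., 'minimal', 'limited', 'high'
    data_sets: list[str] = field(default_factory=list)
    third_party_vendor: Optional[str] = None
    last_risk_assessment: Optional[date] = None
    board_approval_required: bool = False   # decision reserved for senior management or the board

inventory = [
    AISystemRecord(
        name='resume_screening_tool',        # hypothetical system
        business_purpose='Rank inbound job applications',
        accountable_executive='Chief Risk Officer',
        functional_owner='Human Resources',
        risk_tier='high',
        data_sets=['applicant_tracking_system'],
        third_party_vendor='ExampleVendor Inc (hypothetical)',
        last_risk_assessment=date(2023, 6, 1),
        board_approval_required=True,
    ),
]

# Systems that have never been assessed can then be surfaced for review.
never_assessed = [record.name for record in inventory if record.last_risk_assessment is None]
print(never_assessed)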

Pillar 2: assessment and monitoring of AI systems

Prior to developing or deploying any AI systems, companies should adopt standards and principles that will guide how AI systems will be assessed and monitored, including how the company will identify and mitigate potential risks and liabilities.

These AI assessment principles could include:

  • reliability and accuracy: AI systems should perform consistently and provide accurate results across varying conditions;
  • safety: AI systems should prioritise human safety and avoid causing harm. Any risk associated with the use of AI systems should be mitigated to the greatest extent possible;
  • transparency and explainability: AI systems should be transparent in their operations, and their decisions should be explainable and interpretable to users; and
  • fairness: AI systems should operate in a fair manner, avoiding biases that could lead to discriminatory outcomes. This includes ensuring that AI-assisted decision-making is equitable and does not disadvantage any individual or group.

The decision to deploy and integrate an AI system into a business function should be made thoughtfully, based on the specific needs of the organisation and with an understanding of the requirements for ongoing testing and monitoring. Both the algorithmic model itself and the data inputs should be assessed regularly for potential non-compliance with the principles adopted by the company, as well as with existing laws, such as those regarding privacy, intellectual property, discrimination and consumer protection.
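
Ongoing monitoring can be made routine with lightweight checks run on each assessment cycle. The sketch below assumes the company has set its own accuracy floor and maximum accuracy gap across groups; the thresholds, group labels and outcome data are hypothetical, not regulatory requirements.

def accuracy(outcomes: list[tuple[int, int]]) -> float:
    """outcomes is a list of (predicted, actual) label pairs."""
    correct = sum(1 for predicted, actual in outcomes if predicted == actual)
    return correct / len(outcomes)

def monitor(outcomes_by_group: dict[str, list[tuple[int, int]]],
            min_accuracy: float = 0.90,
            max_accuracy_gap: float = 0.05) -> list[str]:
    """Return alerts when accuracy drops below the floor or diverges across groups."""
    alerts = []
    per_group = {group: accuracy(outcomes) for group, outcomes in outcomes_by_group.items()}
    for group, acc in per_group.items():
        if acc < min_accuracy:
            alerts.append(f'Accuracy for {group} is {acc:.1%}, below the {min_accuracy:.0%} floor')
    gap = max(per_group.values()) - min(per_group.values())
    if gap > max_accuracy_gap:
        alerts.append(f'Accuracy gap across groups is {gap:.1%}, above the {max_accuracy_gap:.0%} limit')
    return alerts

# Hypothetical (predicted, actual) outcomes for two groups in one assessment cycle.
sample = {
    'group_a': [(1, 1), (0, 0), (1, 1), (1, 0)],
    'group_b': [(1, 1), (0, 1), (0, 1), (0, 0)],
}
for alert in monitor(sample):
    print(alert)

The alerts themselves are less important than who receives them and what decisions they trigger, which is where the governance pillar comes back in.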

For companies actively bringing AI-driven tools to the market, testing throughout the product development lifecycle should prioritise verification, modification and transparent reporting of results at various stages of training the model. Companies that can confidently label and document the safety, security, reliability and validity of their models, and demonstrate their commitment to responsibly built AI solutions, will promote greater trust with regulators, consumers and users alike.

As new tools and products enter the AI market, companies should continuously adapt their governance frameworks, re-evaluate risk profiles, manage potential liabilities and align their oversight systems with best practices. As looming regulations come into effect, companies that proactively implement accountability structures and responsibly manage their AI product integrations and offerings will be able to navigate new compliance demands efficiently and effectively.

Pillar 3: privacy and data security

Data – and, principally, personal data – is the lifeblood of AI systems. Generative AI systems, in particular, rely on vast amounts of data to learn and make decisions, and companies should consider privacy law compliance and the impact on individual privacy in the context of AI. The complex patchwork of existing privacy laws in the United States regulates how personal data is collected, used, processed and shared in any context, including throughout an AI system’s lifecycle, as well as the rights of the individuals to whom personal data relates. Performing a data privacy impact assessment on all data collected by or shared with an AI system can help companies evaluate each AI system’s compliance with applicable privacy laws. In addition, companies should implement and maintain reasonable administrative, physical and technical safeguards to protect all personal data collected by and shared with an AI system, as required by applicable data protection laws and regulations. Security considerations around AI should also include data loss prevention and the protection of proprietary technology and confidential information.

All data used by an AI system to fulfil its purposes or to improve or advance the AI system’s capabilities must be lawfully acquired, which typically requires the informed consent of the data subject. Providing clear, comprehensive and transparent disclosures about the potential uses of data and obtaining informed, opt-in consent from the data subjects will help shield companies from liability and build trust with consumers. Imposing internal restrictions on the use of certain types of data – such as sensitive personal data – in connection with an AI system can further protect against potential privacy violations.
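
One way to operationalise these restrictions is to screen data before it ever reaches an AI system. The following sketch assumes hypothetical internal policy fields, such as an 'opt_in_consent' flag and a list of restricted sensitive categories; the field names and categories are illustrative, not legal requirements.

RESTRICTED_CATEGORIES = {'health', 'biometric', 'precise_geolocation'}   # illustrative internal policy

def eligible_for_training(record: dict) -> bool:
    """Admit a record to an AI training set only if the data subject gave
    opt-in consent and the record contains no restricted sensitive data."""
    if not record.get('opt_in_consent', False):
        return False
    if RESTRICTED_CATEGORIES & set(record.get('data_categories', [])):
        return False
    return True

# Hypothetical records: only the first passes both the consent and sensitivity checks.
records = [
    {'id': 1, 'opt_in_consent': True, 'data_categories': ['contact', 'purchase_history']},
    {'id': 2, 'opt_in_consent': False, 'data_categories': ['contact']},
    {'id': 3, 'opt_in_consent': True, 'data_categories': ['health']},
]
training_set = [record for record in records if eligible_for_training(record)]
print([record['id'] for record in training_set])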

Pillar 4: allocating risk and responsibility in third-party contracts

AI systems are often marketed as bespoke solutions for companies. Therefore, it is important for both developers and licensors of AI systems, as well as those companies acquiring third-party AI systems, to assign responsibilities and allocate liability in a written contract.

As the legal and regulatory landscape specific to AI continues to evolve quickly, establishing clear roles and responsibilities of the parties – including with respect to compliance with future laws and regulations – can help avoid disputes and incentivise both sides to engage in responsible AI development and adoption from the outset. The contract should include, at a minimum, provisions addressing the following:

  • restrictions on any external data sets and other inputs used with the AI system, including those that may be restricted due to privacy or intellectual property law considerations;
  • requirements around transparency and explainability of the AI system;
  • security and resiliency standards both for the AI system and for any integrated or inter-connected systems and technology;
  • responsibility for compliance with applicable privacy and data protection laws;
  • ownership rights in the inputs and outputs of the AI system and any restrictions on use of the same;
  • ownership of intellectual property rights in the AI system and liability for any infringement of third-party intellectual property rights as a result of the use or operation of the AI system;
  • rights and obligations of the parties with respect to changes in law; and
  • responsibility for ongoing testing and monitoring of the AI system, including testing and monitoring for fairness and accuracy, transparency and explainability, security and safety, and potential bias or discrimination.

Contract language should also take into account the ongoing development of new international AI standards that will underpin an evolving assurance infrastructure, such as ISO/IEC 42005 on AI system impact assessments, as well as any licensing requirements that may be put in place for highly capable foundation models, including those used in generative AI, and their cloud providers. Special provisions will be required for any contracts involving ‘frontier’ models – models whose architectures or combinations of scale and capabilities differ from the average model, that could pose unique risks and that are less well understood by the research community. Consideration should also be given to requiring vendors to certify their alignment with recognised frameworks for AI risk assessment, such as the National Institute of Standards and Technology’s AI Risk Management Framework.

Parties on both sides of an AI system transaction should also consider insurance coverage to protect against potential losses arising from the use of the AI system.

Conclusion

The shoreline of the new world of AI is enticing yet craggy; the wilderness is not as ungoverned as it may appear. Despite the absence of AI-specific legislation, regulators and private litigants are hyper-focused on testing whether a company has an appropriate AI governance programme, one that can adequately explain the AI system, its inputs and its outputs, its risks as well as its opportunities.

Therefore, companies should resist any temptation to race to beat the law, or to wait on the sidelines for bright-line clarity, and instead implement a reasonable, thoughtful and responsible AI self-governance programme.

Ultimately, regulators and private litigants will be looking for accountability the same way they do with privacy and cybersecurity. Who, in any particular organisation, ‘owns’ AI, and who is charged with ensuring that AI systems are operating responsibly, not just efficiently?

Indeed, efficiency cannot be AI’s sole goal. Rather, accuracy, fairness, reliability, predictability, explainability, security and resiliency should all share the stage, and regulators will expect clear guidelines and guardrails on these points, with humans responsible for each.

With a self-governance AI programme that translates principles into practice, companies can use effective governance, assessments, privacy and data security, and third-party contracts to safeguard against the legal risks of AI, while pursuing new opportunities to integrate and innovate with this powerful technology ethically, responsibly and profitably.

*The authors would like to acknowledge associate Chris Bloomfield and summer associate Jeremy Bloomstone for their contributions to the article.
