Arbitral institutions and judicial bodies in the United States are rapidly embracing the benefits that generative artificial intelligence (GenAI) promises to bring to dispute resolution.
In the past year, US arbitral institutions, national and state bar associations, and judicial bodies have been prolific in publishing guidance aimed at helping users, lawyers, arbitrators and judges make the most of GenAI. As instances of the misuse of GenAI in legal proceedings proliferate – including hallucinations of fake authorities in legal briefs, the threat of falsified deepfake evidence, and allegations that arbitrators have outsourced award drafting to AI – the guidance also aims to promote the responsible use of AI tools.
Perhaps reflecting the rapidly changing AI landscape, the guidance generally avoids prescribing rigid detailed rules and instead adopts a broad principles-based approach, typically aligned to the existing professional and ethical obligations of practitioners and decision-makers.
We explore these guidelines below.
The AAA-ICDR guidance
In March 2025, the AAA-ICDR issued Guidance on Arbitrators’ Use of AI Tools that encourages arbitrators to adopt AI technology while adhering to their professional obligations under the AAA’s Code of Ethics for Arbitrators in Commercial Disputes (Code of Ethics) and the Code of Professional Responsibility for Arbitrators of Labor-Management Disputes. The guidance applies to all decision-making functions of the arbitrator, not only the preparation and drafting of the final award.
The key considerations in the Guidance include:
- Accuracy and reliability of information: Arbitrators should critically evaluate and verify the outputs of AI tools to ensure they meet the required standards of accuracy and reliability, including by cross-referencing those outputs against primary sources.
- Maintaining fairness and due process: Arbitrators should ensure their use of AI tools enhances the arbitration process without compromising fairness and due process. This guidance is in line with Canon I of the Code of Ethics.
- Independent decision-making: AI tools should support, not replace, the arbitrator’s judgment and expertise. Decisions should reflect the arbitrator’s independent evaluation and reasoning. This guidance is in line with Canon V of the Code of Ethics.
- Transparency with parties: Arbitrators should disclose their use of GenAI tools when such use materially impacts the arbitration process or the reasoning underlying their decisions.
In line with Canon IV of the Code of Ethics, the guidance also emphasizes the importance of confidentiality and data protection, urging arbitrators to use secure tools and platforms and to avoid using tools that do not guarantee data protection for confidential information.
Importantly, the guidance urges arbitrators to stay current with AI innovations and their relevance to arbitral proceedings, and to develop proficiency with AI tools.
The Guidance is one of a number of recent AI innovations adopted by the AAA-ICDR, including:
- AAAi Panelist Search: an internal GenAI-powered tool that assists case managers in selecting arbitrators and mediators by “min[ing] the comprehensive AAA-ICDR Roster to identify the most suitable matches for arbitration and mediation cases.”
- Clearbrief Partnership: aims to provide AAA panelists with AI-driven tools for document summarization, drafting assistance, and automated fact-checking. These tools integrate with other platforms and legal research systems, streamlining the drafting of arbitration awards and enhancing accuracy.
- ClauseBuilder AI (Beta): an AI-powered tool that aids in drafting arbitration and mediation clauses.
- AAAiLab: an initiative serving as a hub for innovation by focusing on the development and testing of AI applications for “Better Dispute Resolution Through AI.”
The American Bar Association’s Formal Opinion 512
In July 2024, the Standing Committee on Ethics and Professional Responsibility of the American Bar Association (ABA) released Formal Opinion 512, its first formal guidance addressing the use of GenAI by lawyers. While the ABA’s formal opinions are not binding, they carry significant advisory weight on matters of interest to a broad range of attorneys.
The Opinion emphasizes that lawyers must adhere to the same core professional duties when engaging with GenAI tools, whether for legal research, drafting, or data analysis:
- Competence (Model Rule 1.1): Lawyers must understand the capabilities and limitations of the tools they use. This includes recognizing the risks of “hallucinations,” bias, and factual inaccuracies. Lawyers are expected to verify AI-generated content before relying on it. Notably, technological proficiency is not optional but a component of ethical competence under Model Rule 1.1, requiring lawyers to stay informed as these tools evolve.
- Confidentiality (Model Rule 1.6): To avoid unauthorized disclosure, lawyers must assess a tool’s data handling policies and obtain informed consent from clients before exposing sensitive information. Informed consent must be specific and cannot be satisfied by generic boilerplate language in engagement letters.
- Communications (Model Rule 1.4): Lawyers must communicate clearly with clients when GenAI use affects the means by which the client’s objectives are to be accomplished.
- Supervision (Model Rules 5.1 and 5.3): Lawyers must supervise the use of GenAI by junior attorneys and non-lawyer staff. This includes ensuring that any AI-assisted work product is properly reviewed before use or submission.
- Candor (Model Rules 3.1, 3.3 and 8.4(c)): Lawyers have a duty of candor to tribunals, which prohibits submitting false or misleading information, e.g., hallucinated authorities or deepfake evidence. Lawyers must verify AI-generated content.
- Reasonable fees (Model Rule 1.5): This Rule requires a lawyer’s fees and expenses to be reasonable and sets out criteria for evaluating reasonableness. Lawyers’ fees must reflect actual time spent, and lawyers should not charge clients for their own learning curve with new technology or for general overhead related to embedded AI tools.
The ABA also underscores the importance of informed consent when using GenAI tools that may process client data. Lawyers should assess the risks associated with inputting confidential information into AI systems, especially those that are self-learning or cloud-based, and obtain explicit client consent when necessary.
The SVAMC Guidelines
In April 2024, the Silicon Valley Arbitration and Mediation Center (SVAMC) published its Guidelines on the Use of Artificial Intelligence in International Arbitration with the purpose of offering a “set of best practices on the use of AI in international arbitration.” They were the first of their kind and were intended to assist participants in arbitrations with navigating the potential applications and risks of AI.
The SVAMC Guidelines define AI as “computer systems that perform tasks commonly associated with human cognition, such as understanding natural language, recognizing complex semantic patterns, and generating human-like outputs.” This definition is broad enough to encompass both GenAI (which is capable of generating new human-like content using learned data patterns) and non-generative or traditional AI (which focuses on processing and analyzing data to provide predictions or insights).
The SVAMC Guidelines are framed as “guiding principles to all participants in [the] arbitration proceeding” and only apply to the extent the parties have so agreed and/or following a decision by an arbitral tribunal or institution to adopt them. To that end, they also provide a model clause for inclusion in procedural orders.
The SVAMC Guidelines are organized in three parts:
- Guidelines 1 to 3 for all participants in arbitrations who must understand the AI tool’s uses, limitations, and risks, safeguard confidentiality, and make disclosure decisions on a case-by-case basis.
- Guidelines 4 and 5 for parties and party representatives who must use AI competently, verify its output, and ensure it does not compromise the integrity of proceedings or evidence, for example by generating false evidence.
- Guidelines 6 and 7 for arbitrators who must not delegate decision-making to AI, must independently analyze facts, law, and evidence, and respect due process when using AI-generated information.
Developments in the US courts
AI-Related Standing Orders
At the time of writing, more than 30 standing orders relating to AI have been issued by US federal judges. These standing orders lay down rules for the use of AI in specific federal courts.
Typically, these standing orders emphasize the responsibility of the parties to ensure the accuracy of AI-generated content in legal documents, comply with all relevant standards and ethical obligations, and disclose the use of AI in their filings. Some orders also address confidentiality concerns, requiring parties to certify that AI use has not led to unauthorized disclosure of confidential information. A few orders prohibit the use of AI in preparing court filings, with sanctions for violations.
As Judge Brantley Starr—a District Judge for the Northern District of Texas and one of the first judges to issue such orders—has indicated, these orders were intended to educate and raise awareness until a point in time is reached when certification may not be needed if all practitioners “know about AI and bias and hallucination, and know what [they] should use it for and what we shouldn’t.” Judge Starr, among others, has since revoked his standing order, suggesting that some federal judges are satisfied that this point of awareness has been reached.
Proposal to Revise the Federal Rules of Evidence
The Federal Judicial Conference’s Advisory Committee on Evidence Rules is also considering revisions to the Federal Rules of Evidence (FRE) to address the use and potential risks of AI-generated evidence in litigation, most notably through a new Rule 707.
This new rule would require that machine-generated outputs meet the same standards as human expert testimony under the existing Rule 702, meaning the proponent must demonstrate that the AI methods are based on reliable principles and methods, applied consistently, and produce reproducible results. On June 10, 2025, following the recommendation of the Advisory Committee on Evidence Rules, the Committee on Rules of Practice and Procedure – the top judicial rulemaking body – voted to publish the proposal for public comment.
Additionally, the Committee is considering—but not yet voting on—an amendment to Rule 901 to address deepfake evidence, which would require proponents to prove the authenticity of contested AI-generated media if there is a reasonable basis to believe it was fabricated.
The AI Task Force and Governance Framework
On May 15, 2025, the federal judiciary announced the establishment of an AI task force to explore policies and guidelines for AI use in courts. Aiming to complete its work by the end of 2026, the task force will determine the need to amend or establish policies on the judiciary’s use of AI.
Courts and judges have also been testing AI tools to assist with administrative tasks such as checking appeal timeliness and preparing for procedural conferences by using publicly available filings to generate summaries and timelines, but they caution against using AI for legal argument evaluation or decision-making.
Comment
The use of GenAI in litigation and arbitration has become widespread. In what appears to be a shift away from the initial wariness of the potential risks and pitfalls of using GenAI in legal proceedings, the recent guidance from US arbitral institutions and judicial bodies emphasizes the considerable efficiencies and benefits that can be gained from the responsible use of these tools. It is notable that industry and regulatory bodies are increasingly viewing the competent use of GenAI tools as a key competency for both lawyers and judges or arbitrators.
Of course, risks remain. Human intervention remains essential to address the challenges of confidentiality, accuracy, and the outsourcing of judicial decision-making. Individual human accountability remains at the center of efforts to regulate the use of GenAI. In most instances, existing ethical frameworks remain applicable and merely require adaptation to the emerging and expanding use of AI in legal practice. Where additional guidance is needed, a broad principles-based approach appears to be the policy tool of choice for rule makers.
As the landscape of GenAI continues to evolve rapidly, so too will the guidance accompanying it. US arbitral institutions and judicial bodies are clearly embracing the change.
