As artificial intelligence (“AI”) continues to revolutionize industries around the world, its impact on international arbitration has become an inevitable and crucial subject of discussion.
The Silicon Valley Arbitration and Mediation Center (“SVAMC”), a vanguard in technology-related dispute resolution, recently released draft guidelines to navigate this complex intersection.
Dated August 31, 2023, these draft guidelines demonstrate not only SVAMC’s responsiveness to technological advancements but also its leadership in shaping the ethical and legal frameworks that will govern the use of AI in arbitration (the “Draft Guidelines”). The Draft Guidelines were opened for public consultation, emphasizing SVAMC’s commitment to broad-based stakeholder input.
The Draft Guidelines are intended to serve as a blueprint for the coherent incorporation of AI into arbitration frameworks globally. They address core principles, including transparency, accountability, and fairness, with the aim of fostering an efficient and equitable arbitration ecosystem.
Summary of the Guidelines
Structured into four parts, the Draft Guidelines include examples of both compliant and non-compliant uses of AI in arbitration, along with a model clause for incorporation into procedural orders.
The introductory section sets out the scope of the Draft Guidelines, explains how AI is to be understood for their purposes, and confirms that they do not override any applicable mandatory rules of law.
Chapter 1 sets out general guidelines applicable to all participants in international arbitration:
- Guideline 1(1) discusses understanding the uses, limitations, and risks of AI applications;
- Guideline 1(2) discusses safeguarding confidentiality; and
- Guideline 1(3) discusses disclosure and protection of records.
Notably, Guideline 1(3) proposes two alternative disclosure mechanisms. Option A makes disclosure dependent on the function for which the AI tool is used and other relevant facts. Option B, by contrast, mandates disclosure when AI materially influences the arbitration, particularly in the preparation of submissions, expert opinions, and other key documents.
Option A: “… depending on the function for which such tool is used and other relevant facts.”
Option B: disclosures must be made when a party “uses AI tools in the preparation of submissions, expert opinions or other documents that are materially relied upon (including evidence and demonstratives)” and when “the use of such AI tools could have an [impact / a material impact] on the proceedings and/or their outcome.”
Chapter 2 provides guidelines directed at the parties and their representatives:
- Guideline 2(4) discusses the duty of competence or diligence in the use of AI; and
- Guideline 2(5) discusses respect for the integrity of the proceedings and evidence.
Chapter 3 outlines the guidelines for arbitrators:
- Guideline 3(6) discusses the non-delegation of decision-making responsibilities; and
- Guideline 3(7) discusses respect for due process.
Examples of Compliant and Non-Compliant Uses of AI in Arbitrations
Page 20 of the Draft Guidelines sets out practical examples of compliant and non-compliant AI uses under each guideline, while stressing that the appropriateness of any given use of AI in arbitration must be determined on a case-by-case basis.
For Guideline 1(1), a compliant use would be:
“Using a specialised AI tool to conduct research on potential arbitrators or experts for a case, being mindful of the AI tools’ limitations and evaluating the results accordingly.”
Whereas a non-compliant use under Guideline 1(1) would be:
“Using AI tools to select arbitrators or experts for a case without human input and without assessing the AI tool’s selection critically and independently or controlling for biases and other limitations.”
For arbitrators, a compliant use of AI under Guideline 3(6) would be:
“Using an AI tool capable of providing accurate summaries and citations to create a first draft of the procedural history of a case, or generate timelines of key facts, and then double-checking accuracy of the AI tools’ output with underlying sources and making other appropriate edits.”
Whereas a non-compliant use under Guideline 3(6) would be:
“Using an AI tool to provide an assessment of the parties’ submissions of evidence and incorporate such output into a decision without conducting an independent analysis of the facts, the law and the evidence to make sure it reflects the arbitrator’s personal and independent judgement.”
The following is the model clause for inclusion in procedural orders, as set forth in the Draft Guidelines:
“The Tribunal and the parties agree that the Silicon Valley Arbitration and Mediation Center’s Guidelines on the Use of Artificial Intelligence in International Arbitration (SVAMC AI Guidelines) shall apply as a reference framework to all participants in this arbitration proceeding.”
Navigating the Delicate Equilibrium
The Draft Guidelines draw a fine line between compliant and non-compliant AI applications in the arbitration context. Once finalized and implemented, parties and arbitrators alike will need to exercise considerable care in their use of AI, erring on the side of disclosure rather than restraint.
In sum, the Draft Guidelines provide a solid foundation for a comprehensive regulatory framework aligning AI and arbitration. As AI tools and arbitral institutions continue to mature in tandem, similar guidelines can be expected to emerge across other jurisdictions.