The Federal Court just released a notice to the parties and the profession regarding “The Use of Artificial Intelligence in Court Proceedings”. It will require all parties, lawyers and interveners to include a Declaration in all documents prepared for the purpose of litigation and submitted to the Court that contain content created or generated by AI. The Declaration must state “Artificial intelligence (AI) was used to generate content in this document”. While there are good reasons for courts to be concerned about “hallucinations” in court documents, there is no cogent reason for a declaration where a human has been properly in the loop to verify the generated content. The requirement is overly broad and could actually impede the use of innovative legal tools.

The Notice explains the new disclosure requirement as follows:

The Court expects parties to proceedings before the Court to inform it, and each other, if they have used artificial intelligence to create or generate new content in preparing a document filed with the Court. If any such content has been included in a document submitted to the Court by or on behalf of a party or a third-party participant (“intervener”), the first paragraph of the text in that document must disclose that AI has been used to create or generate that content.

This Notice requires counsel, parties, and interveners in legal proceedings at the Federal Court to make a Declaration for AI-generated content (the “Declaration”), and to consider certain principles (the “Principles”) when using AI to prepare documentation filed with the Court. The Court offers below an explanation of why the Declaration and Principles are in the interests of justice, the specific type of AI to which this Notice applies, and how the Court will update its approach to the use of AI at the Court in the future.

According to the Notice, the purpose of the new disclosure requirement is to address concerns about the fabrication of legal authorities and the unreliability of generative AI as a source. The Notice states:

The Court recognizes that emerging technologies often bring both opportunities and challenges. Significant concerns have recently been raised regarding the use of AI in Court proceedings, including in relation to “deepfakes,” the potential fabrication of legal authorities through AI, and the use of generative decision-making tools by government officials. It is incumbent on the Court and its principal stakeholders to take steps to address such concerns.

Further, the Court understands that there are both ethical and access to justice issues regarding a lawyer’s use of AI when their client may not be familiar with AI and its various applications. Before using AI in a proceeding, the Court encourages counsel to consider providing traditional, human services to clients if there is reason to believe a client may not be familiar with, or may not wish to use, AI.

The following principles are intended to guide the use of AI in documents submitted to the Court:

Caution: The Court urges caution when using legal references or analysis created or generated by AI, in documents submitted to the Court. When referring to jurisprudence, statutes, policies, or commentaries in documents submitted to the Court, it is crucial to use only well-recognized and reliable sources. These include official court websites, commonly referenced commercial publishers, or trusted public services such as CanLII.

There is no question that lawyers have a duty to ensure that sources ultimately relied on are trustworthy. However, there is no reason that AI tools cannot be used as a starting point for research, or even to draft portions of court filings, as long as the content is properly verified. In particular:

  • Many generative AI tools provide references and links to sources relied on. These can be checked and verified by lawyers. While the tools are still in development, over time they will become increasingly accurate, especially when the tools come from reputable legal publishers.
  • A legal filing can contain many different parts. The Notice would appear to apply to any part, however small, and even if the part has been verified and edited by the lawyer or other author to be accurate. It is not expressly limited to the generation of entire documents or substantial parts of court documents.

The Court rightfully counsels lawyers that there should always be a “Human in the loop” stating:

“Human in the loop”: To ensure accuracy and trustworthiness, it is essential to check documents and material generated by AI. The Court urges verification of any AI-created content in these documents. This kind of verification aligns with the standards generally required within the legal profession.

A problem with the Notice is that the Declaration appears to be required even where a human is in the loop and has properly verified the AI-generated content. A Declaration should not be necessary in these circumstances. In fact, requiring the Declaration in these circumstances could be misleading, and could impede the adoption of efficient tools that reduce the cost and improve the delivery of legal services.

Another challenge with the Notice is that it applies to all court documents prepared for the purposes of litigation. It will catch not only memoranda of law, which it is clearly designed to catch, but also other documents such as witness statements, affidavits and expert reports. The Declaration would apparently be required for any portion of such a document, even if verified by the author. Leaving aside this potential overbreadth, the requirement will oblige lawyers to diligently verify that no part of any such document has been generated, even partially, using a generative AI tool, and it seems to put lawyers offside the rule if they do not obtain full disclosure from the authors.

The Court has stated that it is open to “update this guidance periodically as the Court’s understanding of AI evolves”. I respectfully submit it is time for an update before the ink on this Notice dries.

This blog post was first published by Barry Sookman on his blog.