Key Notes:

  • Communications with public AI tools may not be privileged.
  • AI platform terms matter and should be carefully reviewed.
  • Attorney direction may be critical to protect confidentiality.

In a case of first impression with significant implications for anyone who uses public generative AI tools in connection with legal matters, Judge Rakoff of the Southern District of New York issued an opinion on February 17, 2026, in United States v. Bradley Heppner, No. 25 Cr. 503 (JSR), holding that the defendant’s communications with a publicly available AI platform were not protected by the attorney-client privilege or the work product doctrine. The ruling should serve as a wake-up call for anyone who has ever input information into a public AI tool: sensitive information shared with public AI platforms is not confidential, and privilege claims over AI-assisted work may fail.

Background

In Heppner, a federal securities fraud case, the defendant had used Claude by Anthropic, a third-party public generative AI tool, after it became clear he was the target of an investigation, inputting prompts about the government’s investigation, his potential legal exposure, and the defense strategies and arguments he anticipated. When FBI agents executed a search warrant at Heppner's residence in connection with his November 2025 arrest, they seized numerous documents and electronic devices, including 31 documents memorializing communications between Heppner and the generative AI tool.

Heppner subsequently asserted privilege over these AI-generated documents, arguing that: (1) he had input information learned from counsel into Claude; (2) he created the documents for the purpose of speaking with counsel to obtain legal advice; and (3) he subsequently shared the contents with counsel. Defense counsel conceded, however, that they had not directed Heppner to use Claude.

The Court’s Decision

Judge Rakoff noted that the attorney-client privilege applies to communications: (1) between a client and attorney; (2) that are intended to be and actually kept confidential; and (3) made for the purpose of obtaining or providing legal advice. The Court found that Heppner's AI communications failed at least two, if not all three, of these elements:

Claude is not an attorney. In the absence of an attorney-client relationship, a discussion of legal issues between two non-attorneys is not protected. For a communication to be treated as privileged, there must be the required “trusting human relationship” with a licensed professional who owes fiduciary duties and is subject to discipline - a relationship that cannot exist between a user and an AI tool.

The communications were not confidential and there was no reasonable expectation of privacy. Anthropic's privacy policy, to which users consent, provides that Anthropic collects data on users' inputs and Claude's outputs, uses such data to train Claude, and reserves the right to disclose such data to third parties, including governmental regulatory authorities, even without a subpoena. This policy clearly puts users on notice that communications with Claude are not confidential. And even if Heppner’s inputs were originally privileged, any privilege was waived when he shared them with Claude, just as it would have been had he shared them with any other third party.

The communications were not made to obtain legal advice. Although Heppner maintained that he communicated with Claude for the “express purpose of talking to counsel,” he did not do so at the suggestion or direction of counsel. If counsel had directed Heppner to use Claude, it is arguable that Claude “functioned in a manner akin to a highly trained professional” acting as the lawyer’s agent, with privilege protections in place. But because Heppner acted of his own volition, the relevant question was whether he intended to obtain advice from Claude itself, not whether he later shared the material with his counsel. Moreover, when prompted to give legal advice, Claude itself responds that it is “not a lawyer” and recommends that users “consult with a qualified attorney.”

The work product doctrine does not apply either. Because Heppner used Claude of his own accord and the communications did not disclose counsel’s strategy, the work product doctrine’s qualified protection for materials prepared by or at the behest of counsel in anticipation of litigation or for trial did not apply.

Practical Applications

Exercise caution when using public AI tools, as your communications may be discoverable, and do not assume that AI-assisted legal preparation is privileged. An understanding of the AI tool and its terms of use is essential - policies that permit data collection, model training, and third-party disclosure may defeat any claim of confidentiality. And without direction from counsel, client communications with AI platforms are unlikely to qualify for work product protection.