The Heppner Ruling and the Fragility of AI Privilege

The meteoric rise of generative artificial intelligence (Gen AI) has exposed a systemic vulnerability in the corporate legal shield. Deciding a "question of first impression," United States v. Heppner (2026) is the first ruling to explicitly deny privilege to AI-generated documents. Its significance lies in the clear signal that the mere involvement of a client and a legal topic does not invoke the protections of privilege.

Heppner illustrates that, in a new era of digital discovery, the efficiency of AI-assisted research offers no refuge from the rigorous requirements of legal privilege. The core tension lies in the judiciary’s "technology-neutral" stance. Courts are not carving out "AI exceptions" to established protections; rather, they are applying centuries-old principles to modern prompts. 

Case Background: United States v. Heppner

The matter arose in the Southern District of New York (SDNY) following a grand jury subpoena issued to Bradley Heppner, an executive under investigation for financial misconduct. In preparing his defense strategy, Heppner utilized Anthropic’s Claude chatbot to synthesize legal arguments and research defensive positions. Heppner acted on his own initiative, feeding the AI information he had learned from his defense counsel to generate reports which he then transmitted back to his attorneys. When the FBI executed a search warrant at Heppner’s residence, they seized electronic devices containing these memorialized exchanges.

Why Privilege Failed: The Three-Part Test

In denying the defendant's claims, Judge Rakoff applied settled, technology-neutral principles rather than creating an AI-specific exception. The court's analysis demonstrates that an executive who communicates with a consumer-grade, third-party AI platform has no greater expectation of confidentiality than one speaking to a third party in the public square.

  1. The Absence of a Fiduciary Relationship. The attorney-client privilege is predicated upon a "trusting human relationship" involving a licensed professional who owes fiduciary duties to the client and is subject to professional discipline. An AI platform, regardless of its sophistication, is not an attorney. The court held that the discussion of legal issues between two non-attorneys (the user and the AI platform) is fundamentally unprotected.

  2. The Waiver of Confidentiality. Confidentiality was destroyed at the outset by the terms of service governing the platform. Anthropic’s consumer privacy policy explicitly stated that user inputs could be retained for model training and disclosed to "governmental regulatory authorities." By agreeing to these terms, Heppner surrendered any reasonable expectation of privacy, rendering the communications discoverable.

  3. The Purpose of the Communication. The court found that the communications were not made for the purpose of obtaining legal advice from a qualified source. Because Claude expressly disclaims providing legal advice or recommendations (a fact the government confirmed by prompting the AI platform itself), the user cannot claim they were seeking professional counsel from the software.

Furthermore, the work product doctrine failed to attach because the documents did not reflect the "mental processes" of an attorney. Under Second Circuit precedent, protection is reserved for materials prepared by or at the behest of counsel. Because Heppner acted of his own volition without attorney direction, the AI was not an extension of the lawyer’s mind, and the resulting research remained unprotected. 

The Enterprise Distinction and the Kovel Doctrine

There is a material legal gulf between consumer-grade chatbots and enterprise-grade AI deployments. Under the Kovel doctrine, United States v. Kovel, 296 F.2d 918 (2d Cir. 1961), privilege may extend to third-party agents (like translators or accountants) who assist counsel in rendering legal advice. While Heppner was a loss for the defendant, the ruling suggests that enterprise tools—properly structured under attorney supervision and bound by robust confidentiality agreements—may still qualify for protection.

However, a dangerous "wrinkle" exists: if a person inputs pre-existing privileged advice from their lawyer into an AI platform that retains or discloses consumer input, that act may waive privilege over the original communication from the lawyer. This could allow the government to subpoena the attorney's underlying notes and files that were "fed" into the AI platform.

Consumer vs. Enterprise AI Protection

Feature | Consumer AI (Heppner Context) | Enterprise AI (Counsel-Directed)
Data Training | Inputs used to train models by default. | Explicit contractual "no-training" provisions.
Confidentiality | Broad disclosure rights to authorities. | Signed Data Processing Agreements and strict confidentiality.
Supervision | Client-led; independent initiative. | Directed and controlled by legal counsel.
Work Product Basis | Unprotected independent research. | Attorney mental impressions.
Fiduciary Status | Expressly disclaimed. | Structured as a Kovel agent of counsel.

Practical Guidance: Dos and Don’ts for the AI Era

To mitigate the risks of "discovery exposure," organizations must treat AI integration with the same rigor as any other high-stakes legal workflow.

WHAT CLIENTS SHOULD DO:

  • Utilize Enterprise Accounts: Exclusively use enterprise-tier accounts governed by signed Data Processing Agreements with "no-training" covenants.

  • Ensure Attorney Direction: All AI-assisted research must be explicitly directed by counsel to support work product claims and reflect the attorney's mental impressions.

  • Implement Upjohn-Style Notices:¹ Deploy internal notices stating the AI is for company business, that the company (not the individual) holds the privilege, and that employees must not use it for personal matters.

  • Enforce Strict Retention Policies: Adopt automated retention schedules that discard AI chats after a short period (e.g., 21 days) unless a litigation hold is in place, reducing the "discovery surface area."

WHAT CLIENTS MUST AVOID:

  • Consumer-Grade AI Platforms: Forbid the use of free or "Pro" consumer accounts for any sensitive matter. These tools are the primary target for government discovery.

  • Inputting Privileged Information: Never "test" or "analyze" pre-existing attorney advice in an AI tool unless that tool has been vetted for confidentiality and the process is directed by counsel.

  • The "Privilege Gap": Be aware that separately represented executives and employees cannot use company-provisioned AI platforms for their personal defense; the company holds the privilege, leaving the executive’s personal defense documents exposed.

  • Independent Research: Do not allow non-lawyers to perform unsupervised legal analysis using AI, as this creates a permanent, discoverable record of the company's "mental impressions" without the shield of privilege.

The Bottom Line

The Heppner ruling, although surely not the last of its kind, is a definitive warning that the legal landscape has shifted. While Gen AI is not fundamentally incompatible with privilege, protection depends entirely on how the use is structured, supervised, and documented. Clients who treat AI as a private confidant, without applying rigor to its use, risk not only the discovery of their research but the total waiver of their legal privilege.


____________________

¹ Upjohn Co. v. United States, 449 U.S. 383 (1981).
