Your organization exists so the community it serves can govern itself. Three weeks ago, a federal court confirmed that consumer AI tools put that self-governance at risk in ways most leaders haven’t fully processed yet.
This article is not legal advice. It’s a leadership conversation about what a court opinion reveals about the environment your organization operates in, what it means for the people you lead, and what options are emerging for organizations ready to respond. It ends with two questions about how your organization is navigating this shift.
What the Court Said
On February 10, 2026, Judge Jed Rakoff of the Southern District of New York ruled that documents a defendant created using a consumer AI tool were neither protected by attorney-client privilege nor shielded as work product. A week later, he issued a written opinion calling the question a matter of “first impression” nationwide; it was the first written federal court opinion directly addressing AI-generated materials and legal privilege.

The defendant, Bradley Heppner, a financial services executive facing federal fraud charges, had used the consumer version of Anthropic’s Claude to analyze his legal situation. He generated 31 documents and shared them with his lawyers. When the FBI seized those documents, his attorneys claimed privilege. The court disagreed on every count. As Venable LLP’s detailed analysis explains, the court applied traditional, technology-neutral privilege principles rather than creating new AI-specific law.
This isn’t a ruling about misusing AI. It’s a ruling about how the tools actually work versus how they feel when you’re using them.
The Gap Nobody Designed
Every consumer AI platform creates the same experience: a private conversation with an intelligent partner. You type a question. You get a thoughtful answer. The interface feels like a confidential exchange.
The reality is different. In August 2025, Anthropic updated its consumer terms to use conversations for model training by default on its Free, Pro, and Max plans, with data retained for up to five years. Users can opt out of training, but opting out doesn’t eliminate the platform’s right to disclose data in response to legal process. Only commercial-tier agreements, covering Enterprise, Government, Education, and API access, exclude user data from training and operate under separate contractual terms with stronger confidentiality protections. OpenAI draws a similar line between consumer and enterprise customers.
The court’s written opinion made the structural point explicit: sharing information with a consumer AI tool is sharing it with a third party whose policies permit further disclosure.
What’s Actually Happening Inside Organizations
A 2025 global study by KPMG and the University of Melbourne surveyed over 48,000 people across 47 countries and found that 57% of employees admitted to hiding their AI use from managers and colleagues. McKinsey’s 2025 research found that C-suite leaders estimate only 4% of employees use generative AI for at least 30% of their daily work; employees self-report the number at three times that rate.
But AI adoption in organizations isn’t limited to staff using ChatGPT to draft emails. Some organizations are already operating department-level AI systems: camera networks with integrated analytics, case management platforms with AI-driven decision support, predictive tools embedded in infrastructure. The governance challenge looks different depending on whether your organization is managing consumer tool adoption, enterprise AI systems, or both, and many leaders aren’t sure which reality they’re operating in.
When organizational policy on AI is unclear, people don’t stop using it. They stop talking about it. Employees who draft reports, analyze data, or work through problems with AI tools without organizational awareness create a category of risk most leaders haven’t had to manage before. In organizations subject to public records laws, that risk carries particular weight. Anyone who has watched a routine email become a public records request understands how thin the line is between internal communication and public disclosure. Consumer AI conversations sit on the other side of that line entirely.
The Deeper Question: Who Controls Your Information?
For leaders in local government, the privacy concern is real, but it points to something more fundamental. Consumer AI platforms operate on terms set by the platform provider. Those terms can change. Data practices serve the provider’s business model. Your organization never negotiated those terms, and in most cases your employees agreed to them individually without organizational review.
That’s not just a privacy question. It’s a dependency question. And for organizations built on self-governance, the dependency is growing.
The Heppner ruling didn’t create this tension. It made it visible. When your organization’s information flows through a consumer platform, the platform’s policies determine what happens to it. Not yours.
The regulatory landscape reflects the same recognition. In January 2025, the HHS Office for Civil Rights proposed an update to the HIPAA Security Rule, the first major revision in over a decade, which would require covered entities to include AI tools in their risk analysis. At the state level, 19 states have now passed legislation specifically addressing public sector AI use. The number of AI-related bills introduced across state legislatures doubled from roughly 600 in 2024 to 1,200 in 2025.
What Leaders Are Starting to Consider
Attorneys will handle the legal implications of the Heppner ruling. Many government attorneys have been flagging these structural concerns for some time, and the organizations that listened now have a federal court opinion supporting the caution their counsel advised. Leaders who insisted on clear frameworks before moving forward had their judgment validated.
But the question this ruling surfaces isn’t primarily legal. It’s about what your organization controls and what it doesn’t.
Organizations are beginning to respond in different ways. Some are moving to enterprise-tier AI agreements that provide contractual confidentiality protections consumer tools cannot. Others are developing organizational AI governance frameworks that create structure for responsible use across departments. And some are exploring locally hosted AI infrastructure, systems that run on hardware the organization controls, where no third-party privacy policy governs what happens to organizational information. That path removes the dependency question entirely.
No single approach is right for every organization. The right path depends on what AI adoption actually looks like in your organization today, what you need to protect, and the capacity you have to act. But the range of options is wider than most leaders realize, and the conversation is worth having before the next ruling, regulation, or records request forces it.
The organizations that will navigate this well are the ones where leaders have built conditions for honest conversation about what’s actually happening, and where the systems reinforcing those conditions are built to endure as the landscape continues to shift.
Two Questions About Your Organization and AI
Your answers help us understand what organizations like yours are navigating. Two questions, completely anonymous unless you choose to share your email.
Related Reading
- AI Governance for Local Government: A framework for the governance decisions AI keeps requiring
- The AI Anxiety Gap: What the Research Shows: The perception gap between leaders and employees on AI adoption
- The Question AI Can’t Answer: AI processes information. It can’t tell you what matters.
About Rob Duncan
Rob Duncan is the founder of Imagine That Performance, where he works alongside city managers, county administrators, and government leaders through Think Tank peer advisory groups, workshops, and executive coaching. His work focuses on helping organizations build the conditions where people can see clearly, decide deliberately, and sustain what matters most.
What Does Your AI Governance Gap Look Like?
The Heppner ruling answered one question about AI and privilege. The governance questions underneath it require a different kind of conversation: what data is leaving your network, who’s using AI without visibility, and whether you could answer a FOIA request about AI usage today. See how we’re helping government leaders work through AI governance.
Rob Duncan is not an attorney and this article does not constitute legal advice. Organizations should consult qualified legal counsel regarding the implications of the Heppner ruling for their specific circumstances.
