
This issue may contain affiliate links. It is general information, not legal, tax, or professional advice.

The short version

  • A federal judge just ruled that a defendant’s self-directed use of consumer Claude was not privileged, even though he later shared the AI outputs with his lawyers.

  • ABA Formal Opinion 512 does not ban generative AI. It applies existing duties of competence, confidentiality, supervision, candor, and reasonable fees to AI use.

  • The fix is a two-layer AI stack: general-purpose AI for non-confidential work, and a domain-specific or controlled environment for client-sensitive work.

A federal judge just showed how easily unsupervised use of public AI can put attorney-client privilege at risk.

In United States v. Heppner, 25-cr-00503-JSR, a defendant named Bradley Heppner used the consumer version of Claude, Anthropic’s AI chatbot, to work through factual and legal issues related to the government’s investigation. Judge Jed Rakoff accepted that Heppner had incorporated information conveyed by counsel, intended to share the resulting AI-generated documents with counsel, and did in fact share those documents with counsel, but he still rejected both the attorney-client privilege and work-product claims.

On February 10, 2026, Judge Rakoff rejected Heppner’s claims from the bench. A written opinion followed on February 17.

If you run a law firm, or any firm that handles confidential client information, and your people are using public AI tools for client work, this case matters.

I’m Bob Gonsalves, and this is the first issue of The Small-Firm AI Playbook.

For more than 30 years, I’ve made a living doing the kind of work small professional firms depend on every day: research, writing, briefing, positioning, client education, competitive analysis, and turning messy information into something useful. Much of that work is valuable. Much of it is also unbillable.

AI changed that for me. Now I save tens of hours weekly that used to disappear into searching, summarizing, comparing, outlining, drafting, revising, and repurposing.

That is the promise I care about for small professional firms. Not hype. Not magic. Not replacing judgment. Just fewer wasted hours, better first drafts, faster research, clearer decisions, and more time for the work clients actually pay you to do.

But there is a line.

The same tools that are useful for public research, internal planning, marketing drafts, and general synthesis can create serious problems when confidential client information goes into the wrong environment.

That is why this first issue starts with Heppner.

Every Tuesday, I’ll break down one real workflow that eats your time, show where AI can save hours and dollars, and explain which tool decisions are worth your attention.

Plain language. No hype. One workflow. One decision. The risk. The math.

Can Public AI Use Waive Attorney-Client Privilege? What U.S. v. Heppner Held

Yes, it can. Heppner shows that self-directed use of a public consumer AI tool, outside counsel’s direction and without verified confidentiality protections, does not qualify for privilege or work-product protection.

Debevoise & Plimpton’s analysis breaks the ruling into three practical points.[1]

First, Claude was not acting as a lawyer.

Attorney-client privilege protects communications between a client and counsel made for the purpose of legal advice. Judge Rakoff reasoned that because Claude is not an attorney, communications between Heppner and Claude could not satisfy that basic requirement. He also noted that recognized privileges require a “trusting human relationship” with a licensed professional who owes fiduciary duties and is subject to discipline.

Second, Heppner was not acting at counsel’s direction.

Even if Heppner used Claude with the intention of later speaking to his lawyers, Rakoff found that was not enough. Heppner acted on his own, not under counsel’s instruction.

Third, sharing the outputs with lawyers later did not fix the problem.

The court rejected both the attorney-client privilege and work-product claims even though Heppner later shared the AI-generated documents with counsel.

The court did not say all AI use destroys privilege. Rakoff specifically noted that if counsel had directed Heppner to use Claude, and if confidentiality had been present, Claude “might arguably” have functioned like a professional agent under the Kovel doctrine.

That distinction matters.

The issue is not AI itself. The issue is unsupervised use of a public model, outside counsel direction, without verified confidentiality protection.

When confidential client information goes into a public consumer AI tool outside counsel direction and without verified confidentiality protections, you may be creating documents a court will not treat as privileged.

If the applicable terms allow retention, review, or disclosure, the confidentiality risk is live. In the wrong matter, that can become a malpractice, sanctions, or client-trust problem.

Does ABA Formal Opinion 512 Ban Lawyers From Using AI?

No. Opinion 512 does not ban generative AI. It tells lawyers to apply existing duties when they use it.

I use AI every day. It is now part of how I research, draft, compare vendors, monitor developments, outline decisions, and turn one piece of work into multiple useful assets. For the right work, it is one of the best small-firm productivity tools I have ever used.

But productivity does not erase professional duty.

That is the part many small firms are missing. The ethical rules for lawyers using AI already exist. The enforcement is real.

On July 29, 2024, the ABA Standing Committee on Ethics and Professional Responsibility issued Formal Opinion 512, “Generative Artificial Intelligence Tools,” to guide lawyers on their ethical obligations. The opinion directs lawyers who use generative AI to consider their existing duties, including competence, confidentiality, communication with clients, supervision, candor to tribunals, and reasonable fees.

Three duties matter especially here.

Competence, Rule 1.1.

You have to understand the tools you use well enough to evaluate their risks. “I didn’t know the AI made that up” will not help you.

Law360 reported in April 2026 that Judge Kai N. Scott ordered Pennsylvania attorney Raja G. Rajan to pay a $5,000 penalty and complete additional AI and legal ethics CLE after finding that he used AI-created court documents with made-up citations in Bunce v. Visual Technology Innovations. According to Law360, this was Rajan’s second AI-citation sanction in that case. The prior sanction was $2,500.

If you use the tool, you own the output.

Confidentiality, Rule 1.6.

You cannot expose client information through tools that lack adequate protections. That is why Heppner matters. Not because AI is banned, but because confidentiality has to be verified, not assumed.

Supervision, Rule 5.3.

If you have managerial authority, you are responsible for supervising the use of nonlawyer assistance. Formal Opinion 512 identifies supervisory responsibilities as one of the duties lawyers must consider when using generative AI.

Why Heppner Matters Beyond Law Firms

I am leading with law because that is where the sanctions and case law are most visible right now. But the underlying problem crosses every professional services vertical.

CPA firms have confidentiality obligations under professional and tax rules. Financial advisors operate under their own regulatory regime. Therapists have HIPAA. Marketing and PR firms handle proprietary strategy, embargoed product launches, and NDA-protected material all the time.

The question is the same in every one of those fields.

Which workflows can safely run through a consumer AI tool?

Which ones need something built for confidentiality, auditability, access control, and tighter contractual protection?

That is the real takeaway from Heppner.

Not: stop using AI.

Stop assuming public AI is a safe container for sensitive professional work.

What Is the Two-Layer AI Stack for Small Firms?

A two-layer AI stack uses general-purpose AI for non-confidential work and a domain-specific or controlled AI environment for client-sensitive work. The job is matching the tool to the risk.

Here’s the framework I use in my own work, and the one I’ll return to throughout this newsletter.

The question is not whether AI is useful. It is. The question is what kind of work belongs in what kind of tool.

| Dimension | Layer 1: General-Purpose AI | Layer 2: Domain-Specific or Controlled AI |
| --- | --- | --- |
| Examples | Perplexity, ChatGPT, Claude | CoCounsel, Westlaw AI, Spellbook, Clio Duo |
| Best for | Public research, marketing drafts, internal checklists, hypothetical outlines | Privileged work, client records, regulated data |
| Public pricing range | ~$20–$200 per user / month | ~$49–$1,000+ per user / month |
| Confidentiality protections | Limited; depends on consumer or business terms | Contractual, auditable, role-based |
| Suitable for client data? | Generally no | Yes, with proper review and consent |

Layer 1: General-purpose AI

Tools like Perplexity, ChatGPT, and Claude are powerful, flexible, and often the right tool when confidential client information is not involved. That covers a lot of ground: researching general legal standards, reading public case law, tracking regulatory developments, drafting marketing copy, building internal checklists, and outlining documents with hypothetical facts instead of real client facts.

I run Perplexity Max as the base layer of my own workflow. For non-confidential research and synthesis, it replaces hours of searching, tab-hopping, reading, copying, pasting, and note consolidation.

Comet, Perplexity’s browser tool, handles the kind of monitoring and repetitive research that used to eat staff time. It can compare vendors, pull data into usable formats, track competitors, and run multi-step research while I do something else.

That layer is a big part of how my firm saves thousands a month.

Layer 2: Domain-specific or controlled AI

The moment confidential client information enters the picture, the tool decision changes. Names, case facts, financial records, medical information, privileged communications, strategy memos, draft advice, client records, internal matter notes — that work belongs in a tool environment built for it.

These tools cost more.

That is the point.

Public reporting on CoCounsel pricing varies from roughly $225 to $500 per user per month for the core product, with bundled Westlaw and Practical Law configurations reported well above $1,000 per seat. Thomson Reuters does not publish list pricing.

Clio publishes its pricing at $49, $89, $119, and $149 per user per month by plan, before add-ons like Grow, Draft, or Manage AI. Spellbook offers a 7-day trial and routes firms into seat-based pricing through a sales process.

You are paying for three things general-purpose AI does not give you.

Proprietary data. Platforms like Westlaw and LexisNexis sit on paid databases that open-web tools cannot fully reach.

System-of-record integration. These tools tie into billing, documents, matter management, workflows, and permissions inside your firm. Not just a chat window.

Tighter controls. Auditability, access controls, contractual data protections, and confidentiality terms built for regulated work. The specifics vary by vendor, so read the terms before you sign.

How Much Time and Money Can a Small Firm Save With AI?

A solo attorney billing $300 an hour who recovers 5 hours a week across both layers is looking at roughly $6,500 a month in recovered capacity.

A 5-attorney firm recovering 4 hours per attorney per week is looking at roughly $26,000.

That is arithmetic, not marketing.
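The arithmetic above can be sketched in a few lines. One assumption is mine: roughly 4.33 billable weeks per month (52 / 12); the rates and hours are the ones from the examples.

```python
# Rough recovered-capacity math, assuming ~4.33 weeks per month (52 / 12).
WEEKS_PER_MONTH = 52 / 12

def monthly_recovered_capacity(rate_per_hour, hours_per_week, attorneys=1):
    """Dollar value of hours recovered per month across a firm."""
    return rate_per_hour * hours_per_week * WEEKS_PER_MONTH * attorneys

solo = monthly_recovered_capacity(300, 5)               # solo, 5 hrs/week
firm = monthly_recovered_capacity(300, 4, attorneys=5)  # 5 attorneys, 4 hrs each

print(f"Solo: ~${solo:,.0f}/month")  # ~$6,500
print(f"Firm: ~${firm:,.0f}/month")  # ~$26,000
```

Adjust the rate and hours to your own numbers; the point is that even conservative inputs produce a meaningful monthly figure.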

Now compare that with the cost of getting it wrong.

Rajan’s two AI-citation sanctions in Bunce v. Visual Technology Innovations total $7,500. That does not count reputational damage, remediation time, or client trust you do not get back.[4][5]

In some workflows, the right domain-specific tools are not just productivity software.

They are risk control.

How Should a Small Firm Audit AI Use This Week?

Start with a 30-minute audit, then sort each use by risk, then write a one-page policy.

Every issue ends with something you can do before your next client meeting.

This week, your job is to draw the line before the next prompt gets written.

If your firm already uses AI, start with a 30-minute usage audit. Look at every AI tool your firm touched in the last 30 days. That includes ChatGPT, Claude, Perplexity, Copilot, Gemini, legal research tools, drafting tools, transcription tools, meeting-note tools, browser agents, plug-ins, and anything built into your practice-management software.

For each tool, answer five questions.

Who used it?

What task was it used for?

What information went in?

Did any client, matter, financial, medical, legal, or proprietary information go in?

Is that tool approved for that kind of work?
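If it helps to keep the audit consistent across tools, the five questions can be captured as one record per tool. This is only a sketch; the field names and the `needs_review` rule are my own, not part of any standard.

```python
from dataclasses import dataclass

@dataclass
class ToolAuditEntry:
    """One row of the 30-day usage audit; fields mirror the five questions."""
    tool: str
    users: list              # who used it?
    task: str                # what task was it used for?
    inputs: str              # what information went in?
    sensitive_data: bool     # did client/matter/financial/medical data go in?
    approved_for_task: bool  # is the tool approved for that kind of work?

    def needs_review(self) -> bool:
        # Flag any tool where sensitive data went in without approval.
        return self.sensitive_data and not self.approved_for_task

entry = ToolAuditEntry(
    tool="ChatGPT",
    users=["associate A"],
    task="drafting a marketing post",
    inputs="public statutes only",
    sensitive_data=False,
    approved_for_task=True,
)
print(entry.needs_review())  # False
```

A spreadsheet with the same six columns works just as well; the structure is what matters, not the tooling.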

If your firm is not officially using AI, do not skip this step.

Ask the same question a different way: where could AI already be entering the workflow without anyone calling it an AI project?

Check Word, Outlook, Google Workspace, Zoom, Teams, Adobe, your phone’s transcription tools, intake software, research platforms, CRM systems, bookkeeping tools, and practice-management software. Many firms are closer to AI use than they think because AI features are being added to tools they already pay for.

If you still find no AI use, that is good news. It means you can set the rules before bad habits form.

Sort future AI use into three buckets.

🟢 Green. Public or non-confidential work. General research, marketing drafts, internal checklists, public case law, vendor comparisons, social posts, and document outlines using hypothetical facts.

🟡 Yellow. Internal firm work that is not client-confidential but still should not be casually exposed. Strategy notes, pricing, hiring, internal financial planning, unpublished marketing plans, and competitive research.

🔴 Red. Client-sensitive work. Names, case facts, tax records, financial statements, medical information, therapy notes, privileged communications, settlement strategy, draft advice, NDA material, and anything subject to a protective order.
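If you want the sort to be mechanical rather than ad hoc, the three buckets can be expressed as a small lookup. The bucket names mirror the list above; the task labels are illustrative examples I chose, not an exhaustive policy.

```python
# Illustrative green/yellow/red sort; task labels are examples only.
RISK_BUCKETS = {
    "green": {   # public or non-confidential work
        "general research", "marketing draft", "internal checklist",
        "public case law", "vendor comparison", "hypothetical outline",
    },
    "yellow": {  # internal firm work, not client-confidential
        "strategy notes", "pricing", "hiring",
        "internal financial planning", "competitive research",
    },
    "red": {     # client-sensitive work
        "client names", "case facts", "tax records", "medical information",
        "privileged communications", "settlement strategy", "nda material",
    },
}

def bucket_for(task: str) -> str:
    """Return the bucket for a task, defaulting to red when unrecognized."""
    for bucket, tasks in RISK_BUCKETS.items():
        if task.lower() in tasks:
            return bucket
    return "red"  # unknown work defaults to the most restrictive bucket

print(bucket_for("Marketing draft"))  # green
print(bucket_for("case facts"))       # red
```

The design choice worth copying is the default: anything the policy does not recognize falls into the most restrictive bucket, so new kinds of work get reviewed before they get convenient.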

Green work can usually live in your approved general-purpose AI layer.

Yellow work needs judgment. Some of it may be fine in a paid tool with the right settings and terms. Some of it belongs in a controlled environment.

Red work does not go into a public AI tool unless the platform has been approved for that use, the terms have been reviewed, the data protections are adequate, and any required client consent has been handled.

That last point matters. ABA Formal Opinion 512 says lawyers using generative AI must still satisfy their existing duties, including competence, confidentiality, communication with clients, supervision, candor, and reasonable fees. The opinion also tells lawyers to understand the capabilities and limits of the tools they use, avoid uncritical reliance on AI-generated content, and preserve professional judgment.

Then write a one-page firm AI policy.

It only needs five sections.

  1. Approved tools for public or non-confidential work.

  2. Approved tools for confidential or client-sensitive work.

  3. Information that may never be entered into public AI tools.

  4. Required human review before AI-assisted work leaves the firm.

  5. Who approves new AI tools before anyone uses them.

Do not make this complicated. A policy your team follows is better than a perfect policy nobody reads.

Then pick your two layers.

One approved general-purpose AI tool for non-confidential work.

One domain-specific or controlled environment for sensitive work.

Match the tool to the risk, not the convenience.

Do not stop using AI.

And if you have not started, do not wait for a crisis to make your first AI decision.

The goal is not to avoid AI.

The goal is to use it with discipline.

Frequently Asked Questions

Does using ChatGPT or Claude waive attorney-client privilege?

It can. In United States v. Heppner, a federal judge held that a defendant’s self-directed use of consumer Claude was not privileged, even though the user had received information from counsel and later shared the AI outputs with counsel.

Did ABA Formal Opinion 512 ban generative AI for lawyers?

No. Formal Opinion 512, issued July 29, 2024, did not ban generative AI. It applied existing duties of competence, confidentiality, communication, candor to tribunals, supervision, and reasonable fees to lawyers’ AI use.

What happens if a lawyer files AI-generated fake citations?

Courts are sanctioning lawyers who file AI-hallucinated citations. Pennsylvania attorney Raja G. Rajan was sanctioned $5,000 in April 2026 after a second incident of AI-generated fake citations in Bunce v. Visual Technology Innovations, on top of a prior $2,500 sanction.

What is the two-layer AI stack?

A two-layer AI stack uses general-purpose AI for non-confidential work and a domain-specific or controlled AI environment for client-sensitive work. The job is matching the tool to the risk, not the convenience.

Should small CPA, therapy, financial, or PR firms care about Heppner?

Yes. The privilege ruling is law-specific, but the lesson is broader. Public AI tools are not automatically a safe container for confidential or regulated information, regardless of the profession.

What’s coming

Next Tuesday, I’m staying in the legal segment.

I’m not going to do expensive side-by-side product testing. Most small firms do not have the budget to buy subscriptions just to compare them, and neither do I. This newsletter will focus on what is more useful: decision frameworks, workflow breakdowns, risk boundaries, and buying logic.

Next week’s question: how should a small law firm decide which research, drafting, and intake tasks belong in a general-purpose AI tool and which ones need a domain-specific legal platform?

Two weeks from now, I’ll move into accounting. The question there is just as practical: which AI tasks are safe and useful during close season, and which ones create more risk than they are worth?

That is the rhythm.

One firm type. One workflow problem. One clear decision. The risk. The math. Every Tuesday.

If you run a small professional services firm and you are trying to figure out AI without wading through hype, this is the newsletter. I use these tools myself. I know how much time they can save. I also know that the wrong shortcut can create a bigger problem than the one you were trying to solve.

See you next Tuesday.

— Bob

About the author. Bob Gonsalves publishes The Small-Firm AI Playbook for ICBM Media, Inc. He has spent more than 30 years in research, writing, and content strategy for small professional firms and now writes weekly about how 1-to-20-person firms can use AI to recover hours and reduce risk.

The Small-Firm AI Playbook is published every Tuesday by ICBM Media, Inc. for owners and partners of 1-to-20-person professional services firms. Each issue breaks down one AI workflow, one tool decision, the risk, and the math behind using AI to save time without putting client trust at risk.

Disclosures

Affiliate links: Some issues may include affiliate links. If you click and purchase, ICBM Media may earn a commission at no additional cost to you. We only feature tools we have evaluated or that small-firm operators we trust are actively using. Commissions never determine our recommendations or coverage.

Not a law firm: The Small Firm AI Playbook is published by ICBM Media, Inc. for informational and educational purposes only. We are not a law firm, accounting firm, or licensed advisor, and nothing here is legal, tax, or professional advice. Reading this newsletter does not create an attorney-client, accountant-client, or consultant-client relationship. Always consult a qualified professional licensed in your jurisdiction before acting on anything you read here. This newsletter is not attorney advertising.

AI use: We use large language models (including ChatGPT, Claude, and Perplexity) to assist with research, graphic design, search engine optimization, and more. Each issue is written, edited, and verified by a human.
