
Last updated: May 2026
Authority base: ABA Model Rules of Professional Conduct and ABA Formal Opinion 512, with examples drawn from how small firms in states like Texas have started putting those duties into practice.
Methodology: This started as an AI-assisted research draft. A human edited it, fact-checked it, and shaped it for small-firm reality.

Scope & limits. This is a framework for US small firms — roughly 2 to 15 lawyers — using AI in everyday practice. It's informational, not legal advice. Ethics rules and enforcement vary by state, and AI tools change every quarter, so run the final language by your own counsel and your malpractice carrier before you sign it.

If you run a small firm, you already know two things at once. AI is going to change how you do legal work. And the ABA has made it clear that "we'll figure it out later" is not an answer the bar will accept — Formal Opinion 512 spelled out exactly which duties already apply the moment you let a model touch a client matter.

Most AI policy templates floating around right now are written for AmLaw 200 firms with a GC, a CISO, and a vendor management team. That is not your firm. You need something a solo or a small partnership can actually sign on a Tuesday and live by on Wednesday.

That's what this is.

Opinion 512 in plain English

Opinion 512 didn't invent new rules. It reminded you the old ones still apply, even with a chatbot in the middle. You still have to be competent, protect confidences, supervise your people, tell the truth to courts, charge reasonable fees, and communicate clearly. AI doesn't change any of that. It just gives you new ways to blow those duties if you paste a client file into the wrong tool and hit enter.

Here's what each duty looks like once AI is in the room:

  • Competence. You need a working grasp of any AI tool you use on client work. Not a PhD — but enough to know where it lies, where it leaks, and where it breaks. If you can't explain in one sentence what the tool does and how it can fail, you shouldn't be using it on a file yet.

  • Confidentiality. If a tool trains on your inputs or stores them somewhere you can't account for, client information doesn't go in. Period. Free consumer chatbots are not your friend here.

  • Supervision. If anyone in your shop — associate, of counsel, paralegal, virtual assistant — is touching client work with AI, you own that work. You decide what's allowed, how it's used, and how the output gets checked before it leaves the building.

  • Candor to the tribunal. Every citation, every quote, every statutory reference gets verified in a real source before it goes near a brief. No one gets to say "the AI did it."

  • Reasonable fees. You can bill for the time you actually spent, including supervising and revising AI work. You cannot bill for time you didn't spend because the model did it in four seconds.

  • Communication. If AI is going to materially shape how you handle a matter or expose confidential information to a third party, the client hears about it up front — not in a footnote on the bill.

The two-layer stack

The easiest way to stay out of trouble is to stop thinking about "AI" as one thing. Think in two layers.

Layer one: Approved Tools. A short, named list of tools the firm has vetted — data practices, training behavior, security, billing implications. These are the only tools that ever touch Client Information or firm work product.

Layer two: Everything else. Your people can experiment with general consumer AI tools for non-client tasks — learning, brainstorming a blog post, drafting an internal memo with no client facts in it. The hard line is simple: no client information, ever, in a tool that isn't on the approved list.

That's it. That one rule prevents most of the nightmares.

The Policy Template

Copy this into Word, fill in the brackets, run it past your malpractice carrier, and adopt it.

1. Purpose

This Policy sets the rules for how [Firm Name] uses generative AI tools in client work and firm operations. It keeps the firm aligned with the ABA Model Rules and the duties spelled out in ABA Formal Opinion 512.

2. Scope

This Policy applies to every partner, associate, of counsel, contract lawyer, paralegal, assistant, intern, and vendor who does work for the Firm or its clients.

3. Definitions

  • GenAI Tool — any system that generates text, images, audio, code, or analysis in response to a prompt. If it writes, summarizes, or "analyzes" when you type, it counts.

  • Approved Tool — a GenAI Tool the Firm has formally vetted and added to the Approved Tools List.

  • Self-Learning Tool — any GenAI Tool that trains on user inputs by default and cannot be configured to stop.

  • Client Information — anything relating to the representation of a client, whether or not it's labeled confidential. If in doubt, treat it as Client Information.

4. Core Principles

  1. Lawyers practice law here. Not AI. If your name is on the signature block, you own every word above it.

  2. Confidentiality first. No Client Information enters a Self-Learning Tool. Ever.

  3. Verify before you rely. Every fact, citation, quote, and statute produced or touched by a GenAI Tool gets checked against an authoritative source before it goes to a client or a court.

  4. Approved Tools only. Client Information and firm work product only flow through tools on the Approved List.

  5. Fair billing. Clients pay for time actually worked. AI speed isn't billed. Learning curve isn't billed.

  6. Honest disclosure. When a client asks how AI is used in their matter, the answer is straight. When AI materially affects the basis or reasonableness of a fee, it shows up in the engagement letter.

5. Permitted Uses (with Approved Tools)

Approved Tools may be used for:

  • Legal research, including initial issue spotting, case-law summarization, and getting oriented in an unfamiliar area — followed by a human checking every citation in a real database.

  • First drafts of internal memos, transactional documents, and client letters — followed by a lawyer who actually reads, edits, and owns the final version.

  • Document review, summarization, and timelines — followed by spot-checks against the source documents.

  • Transcription and meeting summaries — followed by a comparison against the audio or official transcript before anything is relied on.

  • Administrative work: marketing copy, internal training, scheduling logic, knowledge management.

6. Prohibited Uses

You may not:

  • Paste Client Information into any tool that isn't on the Approved List.

  • Use any Self-Learning Tool for any task involving Client Information or firm work product.

  • Submit AI-generated citations, quotations, or statutory references to a tribunal or a client without verifying them in a primary source yourself.

  • Let AI make a final, unreviewed decision on legal strategy, settlement authority, or client communications.

  • Use AI to generate or alter evidence, fabricate facts, or impersonate any person.

  • Use AI to evaluate or screen prospective clients in a way that introduces bias the firm wouldn't accept from a human.

7. Approved Tools List

The Approved Tools List is maintained by [AI Lead / Managing Partner] and reviewed at least every six months. A tool gets added only after someone has checked:

  • What the vendor does with inputs and whether training can be turned off.

  • Where the data sits and who can access it.

  • Whether the contract gives the firm reasonable confidentiality and security protections.

  • What the tool actually costs to use, including any per-seat or per-matter charges that might flow through to clients.

8. Supervision

Every matter has a Responsible Lawyer. The Responsible Lawyer owns any AI-assisted work product on that matter. Non-lawyer staff may use Approved Tools, but a lawyer reviews any output before it touches a client, a filing, or a third party.

9. Confidentiality and Security

Client Information goes only into Approved Tools, configured the way the firm requires. Training, history retention, and data-sharing settings are turned off or scoped down by default. If a tool's terms change in a way that affects confidentiality, it comes off the Approved List until it's re-vetted.

10. Verification

Before any AI-assisted work product leaves the firm:

  • Every legal citation is opened and read in a primary source.

  • Every quoted passage is matched against the original.

  • Every factual claim about the record is checked against the record.

  • Every statutory reference is confirmed in the current version of the statute.

"The model said so" is not verification.

11. Candor to the Tribunal

If a brief, motion, or filing was meaningfully drafted with AI assistance and your jurisdiction or judge requires disclosure, you disclose. When in doubt about a specific judge's standing order, check before filing, not after.

12. Billing and Fees

The firm bills for lawyer and staff time actually spent, including time spent prompting, supervising, and revising AI-assisted work — as long as the total fee is reasonable for the work performed and the result obtained.

The firm does not:

  • Bill for time the lawyer didn't spend.

  • Charge clients for the general overhead of firm AI tools.

  • Pass through specialized AI tool costs without saying so up front in the engagement letter or a written amendment.

If using AI changes the basis or reasonableness of a fee on a matter, the client hears about it before the work is done.

13. Client Communication and Consent

Boilerplate consent in your engagement letter is not enough on its own. If AI is going to be used in a way that materially shapes how a matter is handled, or in a way that exposes Client Information to a third-party provider, that's a real conversation with the client, documented in writing.

For example, a firm operating under State Bar of Texas guidance might describe its AI use in the engagement letter, name the categories of tools involved, and confirm the client's consent in writing — a pattern that translates cleanly to most other states.

14. Training

Every person covered by this Policy completes initial AI training before using any GenAI Tool on firm work, and a short refresher annually. Training covers the six duties, the Approved Tools List, confidentiality settings, verification, and the prohibited uses above.

15. Incidents

If you suspect Client Information went into a non-Approved Tool, or that an AI-assisted error reached a client or a tribunal, you report it to [AI Lead / Managing Partner] the same day. No blame for raising it. Real consequences for hiding it.

16. Governance and Review

This Policy is reviewed at least annually, and any time:

  • A new ABA opinion, state bar opinion, or court order materially changes the landscape.

  • The firm adopts a meaningfully new category of AI tool.

  • An incident reveals a gap the current Policy doesn't address.

Signed:
[Managing Partner]
[AI Lead]
Effective date:

Five-day rollout

You don't need a six-month implementation project. You need a week.

  • Day 1 — Decide. Managing partner and one other lawyer read this Policy, mark it up for your firm, and pick your AI Lead.

  • Day 2 — Pick your tools. Build the first Approved Tools List. Two or three tools is plenty to start.

  • Day 3 — Configure. Turn off training. Turn off history where appropriate. Lock down sharing settings. Write down what you did.

  • Day 4 — Train. A 45-minute all-hands. Walk through the six duties, the two-layer stack, the Approved List, and the prohibited uses.
