Your AI Drafts Like a First-Year Associate — RPC 5.3 Says You're Still Responsible for Every Word
6 min read · April 19, 2026

RPC 5.3 · AI Supervision · New Jersey Legal Ethics

There's a fiction embedded in how most solo attorneys use AI drafting tools: that because the software is sophisticated, it doesn't require supervision. That it's more like a calculator than a contractor. That when a motion comes out clean and confident, it probably is clean and confident.

New Jersey's RPC 5.3 says otherwise — and the rule doesn't care how impressive your vendor's demo looked.

What RPC 5.3 Actually Requires of You

RPC 5.3 governs the supervision of non-lawyer assistants. Under the rule, a New Jersey attorney must make "reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance" that non-lawyer work is compatible with the lawyer's professional obligations. If a non-lawyer's conduct involves a violation, the supervising attorney is responsible if they ordered it, ratified it, or failed to take "reasonable remedial action" once they knew about it.

Courts and ethics bodies have increasingly made clear that AI tools operating in a legal workflow occupy this same functional space. They produce legal work product. They operate under your license. And unlike a paralegal, you cannot coach them, discipline them, or read their body language when something feels off.

That asymmetry — sophisticated output, zero professional accountability on the tool's end — is exactly what makes RPC 5.3 compliance harder in practice, and more important.

The Supervision Problem You Can't Outsource

A junior associate who misreads a case will eventually betray their confusion in a conversation. An AI tool will not. It will cite Patel v. Meridian Health System with complete syntactic confidence whether that case exists or not. It will state that New Jersey's statute of limitations for legal malpractice is six years (correct) or two years (incorrect) in the same assured tone. There are no tells.

This is the core challenge of supervising AI: the normal human signals that trigger a supervising attorney's intervention — hesitation, inconsistency, questions, pushback — are entirely absent. The output looks finished because the tool is incapable of looking uncertain.

"Reasonable efforts" under RPC 5.3, in this context, cannot mean simply reviewing the document before it goes out. It has to mean something more systematic.

What a Concrete Supervisory Protocol Actually Looks Like

Here's what RPC 5.3-compliant supervision of an AI drafting tool looks like in a NJ solo or small firm context — broken into a practical framework:

At the task level (every AI-assisted document):

  • Verify every case citation independently. Do not spot-check. Verify every one. Tools like Westlaw, Fastcase, or even a targeted Google Scholar search take minutes and catch hallucinated citations before they reach a judge.
  • Check every statutory reference against the current NJ statute, not the AI's recitation of it. Amendments happen. AI training data has a cutoff.
  • Read AI-generated argument sections for logical coherence, not just surface grammar. A paragraph can be grammatically flawless and legally backwards.
  • Flag hedged or vague language ("courts have generally held," "it is often the case that") as a red flag requiring sourcing. These constructions frequently mask the absence of actual authority.
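If you want to make the task-level pass more mechanical, a short triage script can pull every apparent case citation and hedge phrase out of a draft so nothing gets skipped. This is an illustrative sketch only — the regex and the phrase list are examples, not a complete catalog, and every item it surfaces (and anything it misses) still requires your own verification in Westlaw or Fastcase:

```python
import re

# Rough pattern for "Party v. Party" case names. Illustrative, not
# exhaustive -- it will miss abbreviated party names and can never
# substitute for reading the draft yourself.
CASE_CITATION = re.compile(
    r"\b([A-Z][A-Za-z']+(?:\s+[A-Z][A-Za-z']+)*"
    r"\s+v\.\s+"
    r"[A-Z][A-Za-z']+(?:\s+[A-Z][A-Za-z']+)*)"
)

# Example red-flag constructions; extend with patterns from your own log.
HEDGE_PHRASES = [
    "courts have generally held",
    "it is often the case that",
    "it is well settled",
]

def triage(draft: str) -> dict:
    """Build a manual-verification worklist from an AI-generated draft."""
    citations = sorted(set(CASE_CITATION.findall(draft)))
    lowered = draft.lower()
    hedges = [p for p in HEDGE_PHRASES if p in lowered]
    return {"verify_citations": citations, "source_hedges": hedges}
```

Run against a draft, the output is a checklist, not a verdict: every citation on the list gets looked up, and every hedge phrase either gets an authority attached or gets cut.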

At the matter level (before filing or sending):

  • Run the completed document against the applicable RPC checklist for that matter type. Did the AI inadvertently include advice that strays into areas outside your engagement scope?
  • Confirm that client-specific facts were accurately incorporated, not paraphrased into something subtly incorrect.

At the firm level (weekly cadence):

This is the layer most solos skip — and it's where RPC 5.3's "reasonable efforts" language for firm-wide measures lives.

Set aside 20–30 minutes at the end of each week to review one or two AI-assisted work products from that week in detail — not for the client deliverable, but for the tool's performance. Ask yourself: Where did it hedge? Where did it over-reach? Where did it get the facts right but the framing wrong? Keep a running log. Over time, you'll build a pattern map of where your specific tool underperforms in your specific practice area. That log is also your documentation that you're engaging in active supervision — which matters if a grievance ever lands on the ethics board's desk.
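The running log does not need special software. Even a plain CSV file appended from a few lines of code gives you both the pattern map and the documentation trail. A minimal sketch — the filename and field names here are placeholders to adapt to your own practice:

```python
import csv
from datetime import date
from pathlib import Path

# Placeholder filename -- keep this file wherever your firm's
# records-retention practice requires.
LOG = Path("ai_supervision_log.csv")
FIELDS = ["date", "matter", "tool", "issue_type", "notes", "fix"]

def log_review(matter, tool, issue_type, notes, fix):
    """Append one weekly-review finding; writes a header row on first use."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "matter": matter,
            "tool": tool,
            "issue_type": issue_type,
            "notes": notes,
            "fix": fix,
        })
```

Ten entries in, you can sort by `issue_type` and see exactly where your tool underperforms — and the file itself is evidence of active supervision.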

Red Flags in AI Output That Should Stop You Cold

  • Invented citations: Any case you cannot independently verify in Westlaw or Fastcase.
  • Jurisdiction bleed: Federal rules cited in a state-court brief, or another state's procedural standard applied to NJ practice.
  • Overconfident absolutes: Language like "New Jersey courts uniformly hold" or "there is no exception" — these almost always warrant verification.
  • Factual paraphrasing: AI tools sometimes subtly restate your client's facts in ways that shift legal significance. Read every factual recitation against your actual file.
  • Missing defenses or counterarguments: AI optimizes for the argument you prompted. It will not volunteer that opposing counsel has a strong response unless you specifically ask.

The Coaching Gap Is Your Responsibility to Fill

A first-year associate who writes a bad brief gets a conversation, a marked-up redline, and a second chance. An AI tool gets none of that. It will make the same error next time unless you change how you prompt it, constrain it, or structure your review process around its known weaknesses.

That's not a limitation of the tool. That's the professional obligation RPC 5.3 has always described: the lawyer sets the standards, monitors the work, and corrects the course. The fact that your non-lawyer assistant runs on a server doesn't change the analysis.

Build the protocol. Run the weekly review. Document both. Your license — and your client — are depending on it.


Adam Elias is the founder of Elias Advisory LLC, helping solo attorneys and small law firms adopt AI responsibly, secure client data, and operate more efficiently. Questions about your firm's AI supervision practices? Get in touch.
