
Supervising a Non-Lawyer Who Uses AI Is Already Hard — NJ RPC 5.3 Makes It a Disciplinary Issue

NJ RPC 5.3 · Non-Lawyer Supervision · Law Firm AI Policy

Most NJ solo attorneys have accepted, at least informally, that their paralegals and legal assistants are using AI tools on the job. Maybe it's ChatGPT to draft a demand letter. Maybe it's a legal research tool to pull case citations. Maybe it's something as mundane as an AI-powered email auto-complete.

None of that is inherently problematic. The problem is what happens when an attorney treats that AI use as the non-lawyer's responsibility — and not their own.

Under NJ RPC 5.3, a supervising attorney has an affirmative duty to ensure that the conduct of non-lawyer assistants is compatible with the attorney's own professional obligations. That language has always applied to paralegals drafting documents or making client calls. It now applies, without question, to AI-assisted work product coming out of your support staff.

The rule doesn't care that your paralegal clicked "Generate" and not you.


What RPC 5.3 Actually Demands

NJ RPC 5.3 has three operative layers that solo and small-firm attorneys often collapse into one:

  1. Partners and supervising attorneys must make reasonable efforts to establish policies and procedures that give reasonable assurance that non-lawyers' conduct is compatible with the attorney's professional obligations.
  2. The supervising attorney is responsible for a non-lawyer's conduct if they ordered it, ratified it, or — critically — knew about it and failed to take reasonable remedial action.
  3. In a solo practice, there's no "partner" above you to catch a staff failure. You are the firm. The buck stops at your bar number.

That third point is where NJ's small-firm reality becomes acute. When a large firm partner supervises five associates and a team of paralegals, there are layers of review. When you're a solo with one paralegal and a full caseload, every task that paralegal touches flows directly back to you — and directly into the hands of clients and courts.


The Specific Risks AI Introduces to Non-Lawyer Supervision

AI doesn't just introduce new tools into your paralegal's workflow. It introduces new failure modes you may never see unless you design your supervision to catch them:

Hallucinated citations in research memos. A paralegal using an AI legal research tool may not recognize that a citation looks plausible but refers to a case that doesn't exist. If that memo feeds into a motion you sign, your duty of candor to the tribunal under RPC 3.3 is implicated, and the supervision failure under 5.3 is what let it get that far.

Client-facing AI drafts with incorrect facts. AI tools that draft correspondence from a case file can misattribute facts, invent procedural history, or use the wrong client name entirely if the material fed to them was poorly organized. Your paralegal may not catch it because the output reads professionally. The client will catch it, or worse, opposing counsel will.

Confidential data routed through consumer AI tools. A paralegal using free-tier ChatGPT to summarize a deposition transcript may not understand that doing so sends client communications to a third-party system with no data processing agreement in place. Your RPC 1.6 exposure is real. Your 5.3 exposure is real. And you may never know it happened unless you built a policy that addresses it.


Building a 5.3-Compliant Supervision Structure for AI Use

You don't need a technology committee. You need three things:

A written AI use policy for your non-lawyer staff. It doesn't have to be long. It needs to specify: which tools are approved, what client data may or may not be input into those tools, what review steps are required before AI-generated content is sent externally, and what the paralegal must flag to you before acting. Even a one-page memo kept in your firm file is evidence of reasonable oversight.

A mandatory review checkpoint for AI-assisted work product. "Just send it over and I'll glance at it" is not a checkpoint. Specifically: any AI-generated draft that will go to a client, opposing counsel, or a court must pass through your hands for substantive review — not a quick signature. Document that you reviewed it.

A standing conversation about AI tools, not a one-time talk. The AI tool landscape changes faster than any policy you write. Make it a regular part of how you work with support staff: what tools they're encountering, what they're being tempted to use, what output surprised them. That conversation is both your supervisory due diligence and your early warning system.


The Discipline Exposure You Should Take Seriously

The NJ Office of Attorney Ethics does not need to find bad intent to sustain a 5.3 violation. It needs to find that your supervision was inadequate given the circumstances. As AI becomes a standard part of paralegal workflows — and it already is — "I didn't know she was using it that way" is going to look less and less like a defense.

The attorneys who are safest aren't the ones who ban AI in their offices. They're the ones who built a supervision structure specific enough to govern it.

That structure doesn't take long to build. But it does require that you build it before the problem surfaces — not after.
