What Happens When Your NJ Paralegal Uses ChatGPT Without Telling You
6 min read · April 21, 2026


Tags: NJ RPC 5.3 · law firm AI policy · non-lawyer AI supervision

You didn't authorize it. You didn't know about it. But last Tuesday, while you were in a deposition, your paralegal fed a client's divorce financial disclosures into ChatGPT to draft a settlement summary — because it was faster, and no one told her not to.

This is not a hypothetical. It is the single most common AI compliance failure point I see inside NJ solo and small firm practices right now. Not rogue attorneys making bold choices. Support staff making quiet, efficient ones.

And under New Jersey's RPCs, that's your problem.


The Supervision Gap Nobody Talks About

RPC 5.3 is unambiguous: attorneys are responsible for ensuring that non-lawyer staff comply with the Rules of Professional Conduct. RPC 5.1 extends similar accountability upward — supervising attorneys and firm partners bear responsibility for the conduct of lawyers they oversee. Together, these rules create a chain of accountability that runs directly through you, regardless of who actually touched the keyboard.

The practical implication is blunt: if your paralegal used a non-approved AI tool on a client matter and you had no policy prohibiting it, you cannot disclaim responsibility. The absence of a rule is itself a supervisory failure.

What makes this particularly acute in 2025 is the proliferation of consumer-grade AI tools (ChatGPT, Google Gemini, Microsoft Copilot in its free tier) that are one browser tab away from every person on your staff. These tools are not bound by a Business Associate Agreement with your firm. They are not subject to your data retention controls. And in many default configurations, the text your staff enters is used to train the underlying model.

Your client's confidential financial data just became a training signal. RPC 1.6 doesn't care that your paralegal meant well.


What "Reasonable Supervision" Actually Looks Like in 2025

Guidance from the NJSBA and the broader ABA materials on AI competence has made clear that "reasonable measures" under RPC 5.3 must now specifically account for AI tool usage by non-lawyer staff. A supervision framework that was adequate in 2021, when the staff AI risk was limited to maybe a Grammarly subscription, is not adequate today.

Here is what a defensible RPC 5.3 posture requires at a minimum:

1. An explicit, written AI use policy for staff — separate from your attorney policy. Your firm AI policy should have a non-lawyer annex. It should name specific approved tools, prohibit unapproved tools by category (consumer LLMs, free-tier AI assistants, browser-based AI autocomplete), and explain why the prohibition exists. Staff who understand the reason — client confidentiality, data sovereignty — are more likely to internalize it than staff who only see a list of forbidden apps.

2. Onboarding acknowledgment and annual re-attestation. Staff should sign that they have read and understood the AI use policy. This isn't bureaucratic theater — it creates a record that the supervisory duty was discharged, and it prompts an annual conversation as the AI tool landscape shifts.

3. A clear escalation path for "can I use this?" questions. One of the most underrated failure modes in small firms is the absence of a designated decision-maker for tool questions. If your paralegal isn't sure whether she can use a new AI drafting tool, whom does she ask? If the honest answer is "I guess I'll just try it," your supervision framework has a hole in it.

4. Output review protocols for AI-assisted work product. Under RPC 5.3, supervision isn't just about preventing unauthorized tool use — it's about reviewing what non-lawyers produce. If a paralegal drafts a motion summary using an approved AI tool, that output needs attorney review before it goes anywhere near a client file or a court submission. Build this into your workflow explicitly, not as an assumption.


The Harder Conversation: Are Your Staff Actually Afraid to Ask?

Here's the uncomfortable truth I've observed in practice: in many small NJ firms, support staff use unauthorized AI tools not because they're indifferent to the rules, but because the culture signals that asking questions about efficiency tools is unwelcome. Attorneys who are already stretched thin don't want to hear "can I use this app?" They want the work done.

That dynamic is a compliance liability. Effective RPC 5.3 supervision requires a firm culture where staff feel safe surfacing tool questions — and where the attorney's answer is a real evaluation, not reflexive dismissal.

The firms that get this right tend to hold a brief, recurring "AI check-in" — 15 minutes, monthly — where staff can raise tools they've encountered, workflows that feel inefficient, and questions about what's approved. It's a low-overhead way to stay ahead of the gap between what your staff is using and what you've authorized.


A Practical Starting Point

If you don't have a non-lawyer AI use policy in place, draft one this week. It doesn't need to be long. It needs to:

  • Define "AI tools" broadly (including autocomplete, browser extensions, and free-tier LLMs)
  • Designate which tools are approved for which task types
  • Prohibit inputting client identifying information into any unapproved tool
  • Specify who approves new tools and how quickly requests will be answered
  • Require attorney review of any AI-assisted work product before use

RPC 5.3 has always required you to supervise your staff. AI just raised the stakes of what unsupervised looks like.
