When an AI Hallucination Ends Up in a NJ Court Filing, Who Answers to the Tribunal?
By now, most NJ attorneys have heard about the cautionary tales — lawyers sanctioned in federal courts for submitting briefs packed with fabricated case citations generated by ChatGPT. What's easy to miss, sitting safely behind those headlines, is how close the same disaster lurks for any NJ solo or small-firm attorney who uses a generative AI tool to assist with legal research or brief drafting without a deliberate verification workflow in place.
This isn't a distant federal problem. It's an RPC 3.3 problem. And in New Jersey, that duty is unambiguous.
What RPC 3.3 Actually Says — and Why AI Makes It Harder
New Jersey's RPC 3.3(a)(1) prohibits a lawyer from making a false statement of fact or law to a tribunal. Subsection (a)(3) goes further: a lawyer who has offered material evidence and later learns it was false must take reasonable remedial measures, including disclosure to the tribunal.
The rule doesn't care why the false statement got there. It doesn't matter that a large language model confidently hallucinated a citation to Rivera v. State, 289 N.J. Super. 412 (App. Div. 2019) — a case that simply does not exist. The moment your signature goes on that brief, you own every word of it. The AI is not a co-counsel. It is not admitted to the New Jersey bar. It has no license to lose.
What makes this particularly treacherous is the confidence problem: AI-generated hallucinations rarely look like guesses. They look like real citations — complete with reporter volume, page number, court, and year. They are formatted precisely the way a citation should be. That surface credibility is what catches attorneys off guard.
The NJ Exposure Is Real and Growing
New Jersey courts have not yet produced a widely publicized sanctions order stemming directly from AI hallucinations, but the conditions for one are fully in place. NJ attorneys are using AI tools at increasing rates for brief writing, motion practice, and legal research. The New Jersey Judiciary has not issued specific standing orders governing AI use in filings the way some federal districts have — which some practitioners interpret as a green light. It is not.
The absence of a specific AI disclosure rule in NJ state courts does not suspend RPC 3.3. If anything, it shifts the entire burden back to the attorney's own professional judgment and verification process.
Building a Hallucination-Detection Workflow That Actually Works
Here is a practical, four-step verification protocol that any NJ solo attorney can implement today — regardless of which AI tool they use:
1. Never cite from AI output directly. Treat every case citation generated by an AI tool as an unverified lead, not a source. Your rule: no citation enters a draft brief until you have personally pulled the full text of the decision from Westlaw, Lexis, Fastcase, or the NJ Courts' own free opinion archive.
2. Verify the proposition, not just the existence. Many attorneys make the mistake of confirming that a case exists and stopping there. That's not enough. Open the opinion. Confirm that it actually stands for the proposition the AI used it to support. AI tools frequently attach real case citations to the wrong propositions: the case exists, but it says something different from, or even the opposite of, what the AI claimed.
3. Run a citator check on every case. Overruled precedent in a NJ brief is its own RPC 3.3 problem. Confirm via KeyCite or Shepard's that the case remains good law before it goes into your filing.
4. Build a verification log. For every brief that involved AI-assisted research, maintain a simple internal log: case name, citation, where you verified it, date verified, and which proposition it supports. This takes minutes and creates a defensible record if a court or grievance committee ever asks how you supervised the AI's output.
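For attorneys comfortable with a small script, the verification log in step 4 can be kept as a simple CSV file that grows one row per verified citation. The sketch below is one possible implementation, not a prescribed tool; the field names, file name, and the example case entry are all placeholders to adapt to your own practice.

```python
import csv
from datetime import date
from pathlib import Path

# Placeholder column names -- adjust to match your own record-keeping.
FIELDS = ["case_name", "citation", "verified_in", "date_verified", "proposition"]

def log_verification(log_path, case_name, citation, verified_in, proposition):
    """Append one verified citation to a CSV log.

    Writes a header row first if the log file does not exist yet,
    and stamps each entry with today's date.
    """
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "case_name": case_name,
            "citation": citation,
            "verified_in": verified_in,
            "date_verified": date.today().isoformat(),
            "proposition": proposition,
        })

# Example entry -- the case and citation here are illustrative placeholders.
log_verification(
    "verification_log.csv",
    "Smith v. Jones",
    "123 N.J. 456 (1990)",
    "Westlaw (full text pulled; KeyCite run)",
    "Standard for summary judgment",
)
```

Because the log is plain CSV, it opens in Excel and can be produced as-is if a court or grievance committee ever asks how you supervised the AI's output.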
The Supervisory Dimension
There's also a supervision problem that compounds the candor problem when you delegate the AI research task to a paralegal or junior staff member and then sign the resulting draft without reviewing its citations independently. Your signature on the brief is your certification. The fact that a non-lawyer used the tool upstream doesn't create a buffer; under RPC 5.3, it creates a supervision failure layered on top of the RPC 3.3 candor problem.
If a staff member is using AI for legal research in your NJ practice, that verification workflow above needs to be their workflow too — documented, trained, and spot-checked by you.
The Practical Bottom Line
AI research tools are genuinely useful for issue spotting, generating initial outlines, and drafting argument structure. They are not reliable citators, and they were not built to be. Using them as if they were, without a rigorous human verification step between AI output and court filing, is a direct path to an RPC 3.3 violation, potential sanctions, and a grievance.
Build the habit now, before you're reading your own name in a sanctions order. A 15-minute citation verification routine is a small price for keeping your New Jersey bar license intact.