Inside a NJ Solo Practice That Let AI Draft Its Discovery Responses — What Went Right, What Nearly Didn't
6 min read · May 6, 2026


AI Discovery Drafting · NJ RPC 3.3 · Solo Attorney Workflow

Picture this: a solo family law attorney in Bergen County — call her Elena — starts using an AI drafting tool to generate first-pass responses to interrogatories. She saves four hours on a case that would have eaten most of her Thursday. Then her paralegal flags something odd in one of the responses. The AI had accurately pulled a legal standard — from a New Jersey Appellate Division case that was reversed on appeal six months prior.

The citation wasn't hallucinated. It was real, properly formatted, and dead wrong on the law.

Elena caught it. But the near-miss opened a broader conversation about what it actually means to run a responsible AI-assisted discovery workflow in New Jersey — not just for competence under RPC 1.1, but for candor to the tribunal under RPC 3.3, which very few practitioners are factoring into their AI review protocols.


Why Discovery Is Both the Best and Most Dangerous Place to Start With AI

Discovery drafting is, on the surface, a perfect candidate for AI assistance. The work is repetitive, formulaic, and time-consuming — exactly the kind of task where AI earns its keep in a solo or small firm budget. Interrogatory responses, requests for production, boilerplate objections: these are high-effort, lower-judgment tasks that consume attorney hours without adding proportional value.

But discovery documents aren't internal drafts. They get filed. They get signed under Rule 4:17-1 certifications. In New Jersey, your signature on interrogatory answers carries an implicit representation that the responses are accurate to the best of the client's knowledge — and that the legal objections you've interposed are grounded in actual authority.

That's where RPC 3.3 enters in a way most attorneys overlook. When an AI-generated objection cites a case or rule to support a privilege or relevance objection, that citation goes out under your name. If it's wrong — outdated, reversed, misapplied — you've potentially made a false statement of law to the tribunal. Not intentionally, but the standard under RPC 3.3(a)(1) doesn't require intent for it to be a problem.


The Three Workflow Gaps Elena Almost Missed

Walking through her process in detail surfaced three gaps. They're worth naming because they're likely present in most small firms experimenting with AI in discovery.

1. No citation verification step. Elena's workflow included a human review of the AI's prose — tone, accuracy of facts, whether the client's answers were complete. What it didn't include was a dedicated pass to verify every legal citation the AI dropped into objection templates. She assumed the AI's legal knowledge was current. It wasn't, on that case. A simple Fastcase or Westlaw KeyCite check, built into the review checklist as a mandatory step, would have caught it in sixty seconds.

2. The AI was trained on her old templates. She had uploaded previous discovery responses to "train" the tool's style. The problem: those old templates included objection language she had stopped using because a local court had frowned on it in a discovery conference. The AI faithfully reproduced it. Garbage in, garbage out — except with a law license attached.

3. No version control on AI outputs. When a dispute arose with opposing counsel over whether a particular objection had been asserted in the original responses, Elena had difficulty reconstructing which version of the AI-generated draft had actually been sent. A basic naming convention — smith_rog_responses_v3_human-reviewed_final.docx — and a simple log of what was AI-generated versus attorney-edited would have eliminated the confusion entirely.
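The logging half of that fix doesn't require practice-management software; a short script is enough. Here's a minimal sketch in Python of an append-only draft log keyed by file hash, so you can later prove which version went out and whether it was AI-generated or attorney-edited. The file names, column names, and `source` labels are illustrative assumptions, not a description of any tool Elena actually used:

```python
# Minimal provenance log for discovery drafts: one CSV row per outgoing
# version, recording the file name, a SHA-256 hash of its contents, who
# produced it ("ai-generated" vs. "attorney-edited"), and a UTC timestamp.
# The hash lets you later confirm exactly which bytes were served.
import csv
import hashlib
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("discovery_draft_log.csv")

def log_draft(path: str, source: str, note: str = "") -> str:
    """Append one row describing the draft at `path`; return its SHA-256 hex digest."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    write_header = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp_utc", "file", "sha256", "source", "note"])
        writer.writerow(
            [datetime.now(timezone.utc).isoformat(), path, digest, source, note]
        )
    return digest

# Usage: log each stage as it happens, e.g.
# log_draft("smith_rog_responses_v1.docx", "ai-generated", "first pass from tool")
# log_draft("smith_rog_responses_v3_human-reviewed_final.docx",
#           "attorney-edited", "version actually served")
```

An append-only log plus a content hash answers the opposing-counsel dispute in seconds: hash the copy they received and match it against a row.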


What a Defensible AI Discovery Workflow Actually Looks Like

For NJ solo attorneys using AI in discovery, here's the minimum viable protocol to stay defensible under RPC 1.1 (competence), RPC 3.3 (candor), and the New Jersey Supreme Court's broader expectations around attorney supervision of technology:

  • Generate, don't submit. Treat every AI output as a first draft requiring attorney editing, not a near-final requiring a proofread.
  • Mandatory citation audit. Every case, statute, or rule cited in an AI-generated objection or legal argument gets KeyCited or Shepardized before it leaves the office. No exceptions.
  • Scrub your training data. If you're feeding old templates into an AI tool, review those templates first. Don't let stale practice habits get amplified at scale.
  • Version and log. Keep a simple log noting which portions of a document were AI-assisted. This protects you in disciplinary proceedings and in meet-and-confer disputes.
  • Client fact verification. For interrogatory answers specifically, the AI can draft — but the client must review and verify every factual assertion before the certification is signed.

The Bigger Lesson

Elena's near-miss wasn't a failure of AI. It was a failure of workflow design. The tool did what it was built to do. The gap was in how the output was integrated into a process that still had attorney-grade accountability standards attached to it.

That's the real competence question NJ solo attorneys need to sit with: it's not whether you're using AI in discovery, it's whether your process around that use would hold up if the New Jersey Office of Attorney Ethics came asking. Build the workflow first. Then let the AI loose inside it.
