RPC 1.1 Demands AI Competence. Are New Jersey Lawyers Ready?
6 minute read · April 14, 2026

Legal AI · New Jersey Law · Legal Ethics

The Two-Word Question Paralyzing NJ Law Firms: “What If?”

Walk into any bar association event from Cape May to High Point, and the conversation eventually turns to AI. The excitement is palpable. But for solo and small firm lawyers in the Garden State, it's quickly followed by a wave of anxiety, best summarized by a single question: “What if?”

What if I use AI to draft a brief and it hallucinates a case? What if I input client details into a chatbot and breach confidentiality? What if my ethical duties under the New Jersey Rules of Professional Conduct (RPCs) prevent me from using these powerful tools at all?

This hesitation is understandable, but it's becoming professionally hazardous. The reality is that our ethical rules don't forbid AI; they demand we engage with it competently and securely. For the modern NJ attorney, understanding AI is no longer an elective—it's a core component of your duties of competence (RPC 1.1) and confidentiality (RPC 1.6).

RPC 1.1: Competence is More Than Just Knowing the Law

New Jersey’s RPC 1.1 on Competence requires lawyers to provide clients with “the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” Crucially, the official comment clarifies that lawyers must “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.”

For years, “relevant technology” meant email security and e-discovery software. Today, it unequivocally includes artificial intelligence.

Fulfilling this duty in the age of AI means more than just having a ChatGPT account. It means understanding the fundamental limitations of the technology you use.

  • The Hallucination Risk: Generative AI models can, and do, invent facts, statutes, and case law with absolute confidence. Using an AI-generated legal argument without independent, manual verification is not just sloppy—it’s a direct path to a competence violation and potential sanctions.
  • The Verification Imperative: Competent use of AI requires a strict “human-in-the-loop” protocol. Treat the AI as a first-year associate: it can produce a rough draft, but every assertion, every citation, and every legal conclusion must be rigorously checked against a reliable source like Lexis, Westlaw, or official court records. Blindly trusting the output is professional malpractice waiting to happen.

Actionable Step: Implement a simple, firm-wide policy: AI can be used for brainstorming, summarizing, and initial drafting, but all outputs must be independently verified by a qualified attorney before being included in any work product.

RPC 1.6: Your Client's Secrets in a Public Machine

Even more perilous is the risk to client confidentiality under RPC 1.6. When you type a query into a free, public AI tool, you are sending that data to a third-party server. In most cases, you are also granting that company a license to use your data to train its future models.

Uploading a client’s sensitive business contract for summarization or inputting confidential settlement details to draft a demand letter is functionally equivalent to discussing the case in a crowded coffee shop. The data leaves your control, attorney-client privilege is likely waived, and you have committed a serious ethical breach.

Actionable Step: Forbid the use of any public, consumer-grade AI tool for any task involving non-public client information. Instead, invest in a legal-specific, enterprise-grade AI platform. When vetting these vendors, ask them one critical question: “What is your data retention policy?” The only acceptable answer is “zero-retention,” meaning they do not store your prompts or use your firm’s data for any other purpose.

The Path Forward: From Paralysis to Practical Application

Ethical AI adoption isn't about avoidance; it's about establishing guardrails. For a solo or small firm in New Jersey, the path forward is clear and manageable:

  1. Educate: Take a CLE or webinar on the basics of Large Language Models. Understanding what the tool is doing is the first step toward using it competently.
  2. Establish Policy: Draft a one-page AI Usage Policy. Specify approved, secure tools and expressly prohibit the use of public models for client work. This demonstrates diligence.
  3. Vet Your Vendors: Before you buy, read the terms of service. Ensure the vendor has a zero-retention policy and robust security protocols. Put their data-handling promises in writing.

By reframing the AI conversation from “What if?” to “Here’s how,” New Jersey’s small firms can do more than just avoid ethical pitfalls. They can leverage this technology to operate more efficiently, deliver better client service, and build a more resilient practice for the future.
