A Ticking Clock for NJ Lawyers: Is Your Firm's AI Use Violating Client Confidentiality?
6 min read · April 9, 2026


AI · Ethics · New Jersey

The Two-Minute Task That Could Cost Your NJ Practice Everything

It’s a common scene in small law firms across New Jersey. An associate, pressed for time, needs to rephrase a complex clause from a client's contract for an internal memo. They copy the text, paste it into a free, public AI chatbot, and ask it to “simplify this.” Two minutes later, they have a perfect summary. The task is done. The risk, however, has just begun.

This seemingly harmless act of seeking efficiency could place your firm in direct violation of the New Jersey Rules of Professional Conduct (RPCs), specifically the bedrock principle of client confidentiality outlined in RPC 1.6. For solo and small firm practitioners in our state, the rapid adoption of generative AI isn't just a technological shift; it's a new and complex ethical minefield. The question is no longer if you will use AI, but how you will govern its use to protect your clients, your reputation, and your license.

Deconstructing the Breach: RPC 1.6 in the Age of AI

New Jersey’s RPC 1.6(a) is unequivocal: “A lawyer shall not reveal information relating to representation of a client unless the client gives informed consent, the disclosure is impliedly authorized in order to carry out the representation, or the disclosure is permitted by paragraph (b).”

When your staff—or you—input any client-related information into a public AI model like the free versions of ChatGPT, Claude, or Gemini, you are, by definition, revealing it to a third party. The terms of service for these public tools often grant the provider the right to use your inputs to train their models. That client data, even if superficially anonymized, is no longer under your control. It is being processed on external servers and potentially absorbed into the model's training data.

This isn't a hypothetical risk. It’s a data transfer. And without explicit, informed client consent, it’s a clear breach of your duty of confidentiality. The convenience gained is dwarfed by the professional liability created.

The Ripple Effect: How One Mistake Cascades Through the RPCs

The danger doesn't stop with confidentiality. A failure to manage AI properly implicates several other critical RPCs for New Jersey lawyers:

  • RPC 1.1 (Competence): The duty of technological competence is no longer optional. Using a tool without understanding its data privacy implications is arguably a failure to provide competent representation in the modern legal environment.
  • RPC 5.1 & 5.3 (Supervisory Duties): As a partner or supervising attorney, you are responsible for the conduct of your associates, paralegals, and staff. If a junior associate commits an ethical breach using AI, the responsibility ultimately lies with the firm's leadership for failing to implement adequate policies and training. An “I didn’t know they were using it” defense will not suffice.
  • RPC 1.4 (Communication): Do your clients understand that their sensitive legal matters might be processed by a third-party AI? A strong argument can be made that using such tools for substantive work requires transparent communication and, in many cases, informed consent, which should be documented in your engagement letter.

A Practical Governance Plan for the NJ Small Firm

Avoiding this ethical pitfall doesn't mean abandoning AI's transformative potential. It means adopting it with intention and control. Here is a straightforward, actionable framework for your firm:

  1. Declare an Immediate Moratorium: Pause all use of public, consumer-grade AI tools for any task involving client information. This is your most critical first step to stop potential data leaks.

  2. Draft a Clear AI Use Policy: Create a simple, one-page document that outlines what tools are forbidden, which are approved, and the guiding principles for AI use. It should state clearly that no confidential client information is to be entered into any public AI system.

  3. Invest in Secure, Private AI: The solution isn't to ban AI, but to use the right version of it. Shift your focus to enterprise-grade, private AI solutions. Look for platforms that offer a data processing agreement (and, where a matter involves protected health information, a Business Associate Agreement) and guarantee “zero data retention”—meaning your prompts and data are never used for model training. Tools like Microsoft Copilot (with commercial data protection) or dedicated legal AI platforms built on secure, private instances of large language models are designed for this purpose.

  4. Train Your Entire Team: An AI policy is useless if it sits in a folder. Conduct a mandatory training session explaining the “why” behind the rules. Use concrete examples to illustrate the risks of RPC violations and demonstrate the proper workflow for using the firm's approved, secure tools.

Don’t wait to become a cautionary tale for the New Jersey Office of Attorney Ethics. The path forward requires proactive governance, not fearful avoidance. By establishing clear policies, investing in secure technology, and fostering a culture of digital diligence, your firm can leverage the power of AI to thrive, secure in the knowledge that your most fundamental ethical duties are being upheld.
