The NJ Supreme Court Just Drew a Line on AI. Here's How Your Firm Stays On the Right Side.
A Clear Message from Trenton
On January 29, 2024, the New Jersey Supreme Court issued a notice to the bar that should be on the desk of every solo and small firm practitioner in the state. It wasn’t a ban or a sweeping new rule, but something far more significant: a clear and direct reminder of our existing ethical obligations in the age of Artificial Intelligence. The Court’s message can be distilled to a single, critical point: AI is a powerful tool, but you, the attorney, are unequivocally responsible for its use and its output.
For the small firm in Morristown or the solo practitioner in Cherry Hill, this isn't a signal to fear technology. It's a roadmap. The Court has clarified the ethical guardrails, and for firms that navigate them correctly, this guidance provides a powerful opportunity to leverage AI for a distinct competitive advantage.
Mapping the Notice to Your Ethical Duties
The Court’s notice doesn’t exist in a vacuum. It directly implicates several core New Jersey Rules of Professional Conduct (RPCs) that govern our daily practice. Let's break down the most critical connections.
1. RPC 1.1: Competence
Competence is no longer just about knowing the law; it includes understanding the technology used to practice it. The Court’s notice emphasizes that lawyers must have a “sufficient understanding of the AI technology being used.”
- The Risk: Using a generative AI tool without understanding its limitations, potential for “hallucinations” (fabricated information), or the data it was trained on is a direct breach of this duty. Submitting a brief with a fabricated case citation, a now-infamous AI error, is a textbook example of incompetence.
- The Strategy: You don’t need to be a data scientist, but you do need to perform due diligence. Before adopting an AI tool, ask vendors the hard questions: Is this a closed system? What is the data source? What are its known limitations? Competence means choosing the right tool for the right task and understanding its operational boundaries.
2. RPC 1.6: Confidentiality of Information
This is perhaps the most immediate and severe risk for law firms. The Court explicitly warns against inputting “any confidential or privileged information… into a public AI platform.”
- The Risk: Using a free, public version of a tool like ChatGPT for client-related work is akin to discussing a case in a crowded coffee shop. You risk waiving privilege and exposing sensitive client data to be used for the AI's future training. The damage to your client and your firm's reputation could be irreparable.
- The Strategy: Your firm must have a clear policy: no public AI tools for any client work. Period. Instead, invest in enterprise-grade, legal-specific AI platforms that offer private, secure environments. These tools use your firm's data within a secure, isolated instance and contractually guarantee that your information remains confidential.
3. RPC 5.1 & 5.3: Responsibilities of Partners and Supervisory Lawyers
The Court reminds us that attorneys are responsible for ensuring their firm acts in compliance with the RPCs. This supervisory duty extends to both human staff and the technological tools they use.
- The Risk: If a paralegal or junior associate uses a public AI tool improperly, the supervising partner is on the hook. Ignorance is not a defense.
- The Strategy: Treat AI as you would a new legal assistant. It needs to be supervised. Every piece of AI-generated content—whether a research memo, a draft email, or a document summary—must be independently verified by a qualified attorney. Implement a “verify, then trust” workflow. Your professional judgment is the final and most important input.
Your Action Plan: Three Steps to Take This Week
Moving from awareness to action is crucial. Here are three practical steps your New Jersey firm can take right now to align with the Court's guidance:
1. Draft a Simple AI Use Policy (AUP): Create a one-page document that explicitly prohibits the use of public AI platforms for client work. Specify which, if any, vetted, secure AI tools are approved. Make it clear that all AI-assisted work product requires human review and verification. Have every employee read and sign it.
2. Vet Your Vendors: If you are considering an AI tool, your first question shouldn't be about features, but about security. Ask for their data privacy and security policies. Do they have certifications like SOC 2? Where is the data stored? A reputable vendor will have clear, reassuring answers.
3. Start Small and Safe: Begin your AI adoption with low-risk, internal tasks. Use a secure tool to summarize deposition transcripts you already have or to organize your own research files. This builds competence and familiarity in a controlled environment, far from client data or court filings.
The New Jersey Supreme Court's notice isn't a red light; it's a set of directions. By understanding the ethical lines and implementing practical safeguards, solo and small firms can harness the power of AI not just to become more efficient, but to become better, more competent, and more secure fiduciaries for their clients.