The Real Cost of AI Hallucinations in Legal Briefs
In March 2026, an Oregon attorney faced a $109,000 sanction. The reason? An AI-generated legal brief containing citations to cases that didn't exist. Not cases he misremembered. Not cases he found but misunderstood. Cases that were entirely fabricated by an artificial intelligence system.
This wasn't a fluke. It was a preview of where we're heading if AI use in legal practice isn't handled with extreme care.
What Happened in That Oregon Case
The details matter because they're instructive. The attorney used an AI system to research and draft sections of a brief. The system generated case citations and summaries. The attorney, trusting the AI's confidence, submitted the brief without verifying the underlying cases.
The opposing counsel checked the citations. The cases didn't exist. Not only did they not support the attorney's argument—they didn't exist at all. The AI had created them.
The court's response was swift and severe. $109,000 in sanctions. A public reprimand. Potential bar action. For one attorney's failure to verify AI output.
The cost wasn't just financial. It was reputational, professional, and psychological.
Why AI Hallucinations Happen
This is the part that keeps compliance-minded attorneys up at night. Large language models—the AI systems that generate legal research and briefs—are trained to be confident and conversational. They're not trained to say "I don't know" when they don't know.
When a system doesn't have reliable information about a topic, it doesn't raise a flag. It generates plausible-sounding text that fits the pattern of what you'd expect. It hallucinates.
In legal research, this is catastrophic. A hallucinated case citation looks exactly like a real one. It has a case name, a volume number, a reporter cite. It reads like the system knows what it's talking about. But it's completely made up.
The scariest part? The AI is often most confident about the things it's least accurate about. Confidence and correctness aren't correlated. In some cases, they're negatively correlated.
The Ripple Effects
The Oregon case is just one example. We're seeing similar patterns:
Fabricated statutes: AI systems creating citations to laws that don't exist or misquoting existing statutes.
False holdings: Cases described as establishing legal principles they don't actually establish, or attributed to jurisdictions that never decided them.
Invented precedent: Legal arguments built on entire cases that were hallucinated by an AI.
Each of these creates liability for the attorney who relies on it without verification. And each one is preventable.
The Competency and Care Standard
Here's the legal principle that matters: you remain responsible for your work, regardless of the tools you use.
The ethics rules are clear. Under ABA Model Rule 1.1 (and New Jersey's version of the same rule), attorneys have a duty of competence, and Comment 8 to that rule extends it to technology: you need to understand the benefits and risks of the tools you use, including their limitations. These obligations don't disappear because you used AI. If anything, they're heightened.
Under Model Rule 8.4(d), you can't engage in conduct prejudicial to the administration of justice. Submitting false citations is exactly that. Using AI without verification in a way that produces false citations? That's sanctionable.
Courts are treating this seriously because the stakes are serious. A false citation isn't a typo. It's an attempt to convince a court of a legal principle that doesn't exist. It undermines the entire judicial system.
Building a Verification Protocol
So how do you use AI for legal research and briefs without ending up like the Oregon attorney?
Never rely on AI legal citations without verification: This is the golden rule. Every single case citation that comes from an AI system gets checked against primary sources. Every statute reference gets verified. This takes time, but it's non-negotiable.
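One way to make "check every citation" operational is to mechanically pull every reporter-style cite out of a draft into a worklist before review. Here is a minimal Python sketch: the regex is a deliberate simplification of real Bluebook citation formats, and `extract_citations` is a hypothetical helper for building a checklist, not a substitute for reading the cases.

```python
import re

# Simplified pattern for reporter-style citations such as "410 U.S. 113"
# or "123 F.3d 456". Real citation formats vary far more widely.
CITATION_RE = re.compile(
    r"\b(\d{1,3})\s+([A-Z][A-Za-z.0-9]*(?:\s?[A-Za-z.0-9]+)?)\s+(\d{1,4})\b"
)

def extract_citations(draft_text):
    """Return a de-duplicated worklist of citation strings to verify by hand."""
    seen, worklist = set(), []
    for match in CITATION_RE.finditer(draft_text):
        cite = " ".join(match.groups())
        if cite not in seen:
            seen.add(cite)
            worklist.append(cite)
    return worklist

draft = "Plaintiff relies on Roe v. Wade, 410 U.S. 113, and 347 U.S. 483."
print(extract_citations(draft))
```

Every string the script surfaces still has to be looked up in a primary source; the point is only that nothing slips through because a reviewer's eye skipped a paragraph.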
Use AI for research organization, not final analysis: AI excels at summarizing information and organizing cases into categories. Use it for that. But the actual legal analysis—the argumentation, the precedent application—comes from you, with human-verified sources.
Create a verification checklist: Before any brief leaves your office, every AI-assisted section has been checked against primary sources. Make this a documented process. This protects you and shows you have a system.
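A documented process can be as simple as an append-only log recording who verified each citation and against which primary source. A minimal sketch, where the field names and file name are illustrative assumptions, not a prescribed format:

```python
import csv
from datetime import date

def log_verification(path, citation, checked_by, primary_source, verified):
    # Append one row per AI-assisted citation: when it was checked, by whom,
    # against which primary source, and whether it checked out.
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([
            date.today().isoformat(), citation, checked_by,
            primary_source, "verified" if verified else "FAILED",
        ])

log_verification("verification_log.csv", "410 U.S. 113",
                 "J. Smith", "U.S. Reports (official)", True)
```

Beyond catching problems before filing, a log like this is evidence that you have a system if a citation is ever challenged later.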
Know the differences between AI tools: GPT-4 and Claude have different training data and different accuracy profiles. Specialized legal AI systems (such as Lexis+ AI or Westlaw's AI-Assisted Research) have different reliability characteristics than general-purpose systems. Know the tool you're using. Know its limitations.
Spot-check critical arguments: If an argument seems crucial to your case, and it's based on AI research, verify it yourself even if the citation checks out. Read the underlying case. Make sure the AI's characterization of the holding is accurate.
The Solo Practice Reality
You might be thinking: "I don't have time to verify every AI output. I use AI to save time." Fair point.
But here's the reality: the time you save using AI without verification is nothing compared to the time you'll lose dealing with sanctions, ethics complaints, or malpractice claims.
The attorneys who are successfully integrating AI into solo practices are building verification into their process from the start. They've identified specific tasks where AI adds value even with verification time included. They're realistic about which gains are genuine and which are illusory.
For a solo attorney, the smart play is:
- Use AI for legal research organization and initial analysis
- Use AI for summarizing depositions and witness statements
- Use AI for contract language analysis (with your review)
- Use AI for legal writing drafting (with substantial revision and verification)
- Do not use AI as your primary research tool without verification
Used this way, you're faster than you would be without any AI. You're slower than someone recklessly relying on it unchecked. And crucially, you're far safer.
The Bigger Picture
The Oregon $109,000 sanction is becoming a baseline. Courts are signaling that hallucinated citations are serious. Really serious.
Bar associations are already addressing AI more formally through ethics opinions. Some states have issued formal guidance; others are updating their ethics rules. The trend is clear: competent AI use is not negotiable.
The question for practitioners isn't whether to use AI anymore. It's whether to use it competently or incompetently. And that choice has consequences that run from professional reputation to financial liability to bar discipline.
Your AI Research System
If you're going to use AI for legal research—and there's no reason not to—build the verification system first.
Document it. Use it consistently. Update it as tools and your understanding of their limitations improve.
The attorneys who are thriving with AI are the ones who added guardrails, not the ones who threw caution away.
The $109,000 sanction was the cost of not doing that. Learn from it.