
(DailyVantage.com) – AI is now slipping fake law into criminal prosecutions—and judges are starting to catch it, raising hard questions about due process when the government can’t even verify its citations.
Story Snapshot
- No verified case matches the viral claim of a prosecutor being caught “in real time” fabricating a case with AI during live courtroom proceedings.
- Two documented episodes show prosecutors submitting filings containing AI-style “hallucinated” case citations, triggering withdrawals, sanctions, and dismissal fights.
- Nevada County, California, saw a prosecutor withdraw a brief after false citations appeared; defense attorneys allege the office did not fully disclose AI use when questioned.
- Kenosha County, Wisconsin, involved a judge striking a prosecutor’s brief over undisclosed AI use and false citations, with the underlying case dismissed mainly on probable-cause grounds.
Viral “Caught in Real Time” Claims Don’t Match the Verified Record
Online posts and clickbait headlines suggest a dramatic, on-camera moment where a judge catches a prosecutor using AI to “make up” her case on the spot. The available reporting doesn’t support that exact scenario. What does exist is more bureaucratic—and arguably more concerning: prosecutors filing briefs with fabricated case citations that look legitimate until someone checks the cases. Those errors get discovered after submission, often through judicial review or defense challenges.
The distinction matters for accountability. A live courtroom takedown can be clipped, shared, and quickly corrected in public. A bad brief with fake citations can quietly shape rulings if no one catches it, especially in busy trial courts where judges and defense counsel are swamped. The constitutional concern isn’t “embarrassment”—it’s whether defendants face loss of liberty based on arguments grounded in non-existent legal authority.
California: Withdrawn Filing, Disputed Disclosure, and a Petition for Sanctions
In Nevada County, California, reporting describes a prosecutor’s filing in a felony drug case that included AI-generated erroneous citations. The district attorney’s office withdrew the brief after the errors were discovered, and the DA publicly attributed at least one inaccurate citation to AI use. Defense attorneys, including Civil Rights Corps and the public defender, argue the problem extended beyond one case and say the office avoided clear disclosure despite a judge’s questions.
The same reporting lays out a tug-of-war over how to interpret the pattern. The district attorney's side has characterized some mistakes as human error, while the defense side argues the citations bore the "hallmarks" of AI hallucinations and that nondisclosure prevents courts from applying appropriate scrutiny. As of the most recent reporting, the California Supreme Court had not ruled on the sanctions petition, leaving unanswered what discipline, if any, will follow.
Wisconsin: A Judge Strikes the Brief as the Court Enforces AI-Disclosure Rules
In Kenosha County, Wisconsin, a judge sanctioned a prosecutor after a response brief contained hallucinated citations and the AI use went undisclosed despite the court's expectations. The episode is closer to what viewers imagine when they hear "caught in real time," because the judge's reaction played out in the courtroom process and the filing was struck. Even so, the dismissal of the underlying case was described as resting primarily on probable-cause grounds rather than on the AI misuse alone.
The practical takeaway is that courts are beginning to treat undisclosed AI assistance as more than a quirky tech problem. If a local rule or judicial directive requires AI disclosure, ignoring it can trigger immediate consequences: the brief can be struck, credibility can be damaged, and the court may schedule further hearings. That is a major shift from the early “oops” phase of AI mistakes, when judges often responded with warnings and embarrassment rather than procedural penalties.
What This Means for Due Process—and Why Conservatives Should Care
Prosecutors wield the power of the state: charging decisions, plea leverage, and recommendations that can determine whether a citizen keeps his job, his firearm rights, or his freedom. When that power is paired with unverified AI-generated legal claims, the risk isn’t abstract. The immediate harm is the possibility of a judge relying on fake precedent, while the broader harm is erosion of trust in a justice system that is supposed to be rule-bound and evidence-driven.
Watch Prosecutor Get Caught by Judge in Real Time for Using AI to Make Up Her Case https://t.co/4BRUYba8ye
— PJ Media (@PJMedia_com) March 23, 2026
Courts and policymakers now face a narrow but urgent question: will they demand verifiable, human-checked work product from government attorneys the same way they expect it from everyone else? Some jurisdictions are moving toward disclosure rules, but the patchwork approach leaves gaps. The available reporting also shows a limitation: the most sensational “real-time” viral framing is not confirmed by the sources, yet the underlying problem—fabricated citations in criminal cases—is real enough to warrant scrutiny.
Sources:
California Prosecutor Says AI Caused Errors in Criminal Case
Federal Court Rules Client’s AI…
Federal prosecutor resigns after AI errors found in court filings
California Courts Send Clear Message: AI Shortcuts Have Serious Consequences
Copyright 2026, DailyVantage.com