Deepfake Dilemma: AI Evidence Is Showing Up in Court

Judges fear AI-generated deepfakes are eroding trust in the legal system after fake video evidence was caught in a California courtroom.
Key Takeaways:

  • Judges are growing increasingly concerned as AI-generated deepfakes begin appearing as evidence in courtrooms, threatening the integrity of the legal process.
  • In a landmark instance, a California judge dismissed a case after identifying a “deepfake” video submitted as authentic evidence by the plaintiffs.
  • Legal experts and judicial committees are now debating whether existing rules are sufficient to handle AI-generated forgeries, with some states acting faster than federal bodies.
  • The rise of realistic fake audio, video, and documents could erode the foundation of trust upon which courts rely for fact-finding.

A California housing dispute took a startling turn when Judge Victoria Kolakowski noticed something was off with a video submitted as Exhibit 6C. The witness’s voice was monotone, her face was fuzzy, and her expressions twitched and repeated. Her conclusion: the video was an AI-generated “deepfake.”

This case, Mendones v. Cushman & Wakefield, Inc., marks one of the first known instances where a suspected deepfake was submitted as genuine evidence and detected by a judge. Citing the plaintiffs’ use of AI to create fake material, Judge Kolakowski dismissed the case entirely, sending a clear signal about a much larger threat facing the justice system.

A New Era of Forgery

The rapid advancement of generative AI has judges worried that a flood of hyperrealistic fake evidence could overwhelm their courtrooms. NBC News spoke with several judges and legal experts who warned that this technology—capable of producing convincing videos, images, documents, and audio—could undermine the core mission of the courts: finding the truth.

“I think there are a lot of judges in fear that they’re going to make a decision based on something that’s not real,” said Judge Stoney Hiljus of Minnesota’s 10th Judicial District.

The potential for misuse is vast. Judge Scott Schlegel of Louisiana’s Fifth Circuit Court of Appeal described a chilling hypothetical: a person could easily clone their spouse’s voice to create a fake threatening message. “The judge will sign that restraining order. They will sign every single time,” Schlegel warned, highlighting how easily AI could lead to devastating real-world consequences.

The Search for a Legal Shield

In response, a small group of judges is leading the charge to address the threat. A consortium involving the National Center for State Courts and the Thomson Reuters Institute has created a “cheat sheet” to help judges spot deepfakes. It advises them to question the evidence’s origin, determine who had access to it, and look for corroborating proof.

However, updating the federal rules of evidence is proving to be a slow process. A proposal to create new rules specifically for AI was considered in May by the U.S. Judicial Conference’s Advisory Committee on Evidence Rules but was not approved. Committee members argued that “existing standards of authenticity are up to the task,” a decision that some experts find concerning given the fast pace of AI development.

Putting the Onus on Attorneys

While federal rules lag, some states are taking action. Louisiana, for example, passed Act 250, which requires attorneys to use “reasonable diligence” to verify whether the evidence they submit has been generated by AI.

“The courts can’t do it all by themselves,” said Judge Schlegel. “If it doesn’t smell right, you need to do a deeper dive before you offer that evidence into court.”

Technology itself may offer some defense. In the California case, metadata from the deepfake video revealed it was supposedly filmed on an iPhone 6—a device that lacked the technical capabilities needed for what the video depicted. This discrepancy helped confirm the forgery.
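The metadata check described above can be sketched in code. The following is a minimal, hypothetical illustration of cross-referencing a video's claimed recording device against that model's known capabilities; the capability table, function name, and threshold logic are all assumptions for illustration, not the actual forensic method used in the case.

```python
# Hypothetical sketch: flag a video whose properties exceed what its
# claimed recording device can produce (a possible sign of forgery).

# Illustrative table: device model -> maximum video resolution height (pixels).
# (The iPhone 6 tops out at 1080p video; values here are for illustration.)
DEVICE_MAX_VIDEO_HEIGHT = {
    "iPhone 6": 1080,
    "iPhone 15 Pro": 2160,
}

def flag_metadata_mismatch(claimed_device: str, video_height: int) -> bool:
    """Return True if the video's resolution exceeds what the claimed
    device can record -- a red flag warranting closer examination."""
    max_height = DEVICE_MAX_VIDEO_HEIGHT.get(claimed_device)
    if max_height is None:
        return False  # unknown device: no conclusion can be drawn
    return video_height > max_height

# A 4K (2160p) clip whose metadata claims an iPhone 6 is suspect:
print(flag_metadata_mismatch("iPhone 6", 2160))       # True
print(flag_metadata_mismatch("iPhone 15 Pro", 2160))  # False
```

A check like this is only a screening step: a mismatch does not prove forgery, and consistent metadata does not prove authenticity, since metadata itself can be edited.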

As the justice system grapples with this new reality, legal experts suggest a fundamental shift in mindset is necessary. Maura R. Grossman, a research professor and lawyer, put it bluntly: “We’re really moving into a new paradigm. Instead of trust but verify, we should be saying: Don’t trust and verify.”

Image Reference: https://www.nbcnews.com/tech/tech-news/ai-generated-evidence-deepfake-use-law-judges-object-rcna235976