
When AI Outs Itself: The Bizarre Case of Minnesota’s Anti-Deepfake Declaration


In a twist fit for a sci-fi courtroom drama, a federal lawsuit challenging Minnesota’s new law against deepfake election interference has taken an unexpected turn: allegations that an affidavit defending the law may itself be the product of artificial intelligence.

The law, formally titled “Use of Deep Fake Technology to Influence an Election,” seeks to curb the use of AI-generated content to spread election misinformation. Ironically, a key document supporting the legislation has been accused of containing “hallucinated” references — fake citations attributed to academic studies that don’t appear to exist.

The affidavit, submitted by Jeff Hancock, a prominent Stanford University academic and director of its Social Media Lab, was meant to bolster the state’s case; Minnesota Attorney General Keith Ellison had commissioned Hancock to weigh in on the dangers of deepfakes. The document is now at the center of a growing controversy, as plaintiffs in the lawsuit allege it was partially or wholly generated by a tool like ChatGPT.

Nonexistent Citations Raise Red Flags

Two citations in Hancock’s affidavit have sparked particular scrutiny. One refers to a purported 2023 study in the Journal of Information Technology & Politics titled “The Influence of Deepfake Videos on Political Attitudes and Behavior.” But when reporters at the Minnesota Reformer searched for this study, they came up empty. Not only was the study missing from the cited journal, but it also didn’t appear in any academic database.

The second questionable citation, “Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance,” was equally elusive. No record of such a paper exists in any known repository of scholarly work.

This led to accusations from the plaintiffs — Minnesota state representative Mary Franson and conservative YouTuber Christopher Khols, also known as “Mr. Reagan.” Their legal team has questioned the affidavit’s integrity, arguing that it “bears the hallmarks of an AI hallucination.”

“Plaintiffs do not know how this hallucination wound up in Hancock’s declaration, but it calls the entire document into question,” their filing stated. They further criticized the affidavit for lacking a detailed methodology and substantive analytic framework.

The AI Connection

So, what is an “AI hallucination,” and how did it potentially find its way into a legal filing? AI hallucinations occur when large language models (LLMs) like ChatGPT generate convincing but entirely false information. These models are trained on massive datasets; when prompted, they stitch together plausible responses based on patterns in that data. When faced with gaps in their knowledge or overly complex prompts, however, they may fabricate information, including citations and sources.
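
Fabricated references of this sort are, in practice, straightforward to expose, because genuine papers are indexed in public bibliographic databases. The sketch below illustrates the kind of check the Minnesota Reformer’s reporters performed, querying the public Crossref API for the two titles questioned in the affidavit; the script and its matching logic are illustrative assumptions on my part, not anything described in the court filings.

    # Illustrative sketch: look up a cited title in Crossref, a real public
    # index of scholarly works. The two titles are those questioned in the
    # affidavit; the matching heuristic here is an assumption for the example.
    import json
    import urllib.parse
    import urllib.request

    def title_found(title: str, rows: int = 5) -> bool:
        """Return True if Crossref indexes a work matching the full title."""
        query = urllib.parse.urlencode(
            {"query.bibliographic": title, "rows": rows}
        )
        url = f"https://api.crossref.org/works?{query}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            items = json.load(resp)["message"]["items"]
        wanted = title.lower()
        # Crossref returns fuzzy matches; demand the full title verbatim.
        return any(
            wanted in t.lower()
            for item in items
            for t in item.get("title", [])
        )

    questioned = [
        "The Influence of Deepfake Videos on Political Attitudes and Behavior",
        "Deepfakes and the Illusion of Authenticity: Cognitive Processes "
        "Behind Misinformation Acceptance",
    ]
    for cited_title in questioned:
        status = "found" if title_found(cited_title) else "NOT FOUND"
        print(f"{status}: {cited_title}")

A title that turns up nothing in Crossref, in Google Scholar, and in the cited journal’s own archive, as both of these reportedly did, is a strong sign the reference was invented.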

Critics argue that Hancock’s affidavit fits the profile: the citations are polished and plausible-sounding yet unverifiable, exactly the pattern hallucinated references tend to follow.

If confirmed, the affidavit’s AI origins would be particularly ironic given the lawsuit’s very purpose: to address the potential misuse of AI technologies like deepfakes in political contexts.

The Larger Legal Battle

Minnesota’s law, enacted in 2023, is one of the nation’s first attempts to regulate the use of deepfake technology in elections. It criminalizes the dissemination, within 60 days of an election, of AI-generated images, audio, or video designed to deceive voters. Proponents argue that the law is a necessary step to safeguard democracy in an era of increasingly sophisticated digital manipulation.

Opponents, including Franson and Khols, contend the law infringes on free speech and creates a chilling effect on legitimate political expression. Their lawsuit argues that the statute is overly broad and unconstitutionally vague.

The affidavit in question was intended to provide expert insight into the impact of deepfakes on public opinion, drawing on academic research to demonstrate the technology’s potential harm. However, the revelation of fabricated citations undermined its credibility and fueled skepticism about the state’s case.

Where Does This Leave the Case?

Hancock has yet to respond to media inquiries, including The Verge’s. Neither his team nor Attorney General Ellison has explained how the alleged hallucinations entered the affidavit.

The controversy underscores the challenges of navigating AI-related issues in high-stakes settings like litigation. As AI tools become more ubiquitous, their potential for both utility and error grows. What happens when a tool designed to mimic human intelligence inadvertently sabotages its credibility?

Experts warn that the Hancock incident could be a cautionary tale for lawyers, academics, and policymakers alike. “This is a wake-up call for anyone relying on AI in professional or academic contexts,” says Dr. Amelia Banks, a technology ethicist. “If you don’t verify the output, it can come back to haunt you.”

As the lawsuit progresses, the incident raises broader questions about the use of AI in government and legal proceedings. If AI-generated content can inadvertently shape policy, how can regulators ensure that content is accurate? And if AI tools are prone to errors like hallucinations, what safeguards are needed to prevent such mishaps?

In the short term, the controversy could weaken the state’s position in defending its deepfake law. The plaintiffs’ legal team is already pointing to the affidavit as evidence that the arguments supporting the legislation are unreliable.

In the long term, however, the case may underscore the urgency of clearer ethical and procedural guidelines for AI use in sensitive contexts. Hancock’s affidavit, whether wholly AI-written or merely AI-assisted, shows how quickly the technology can become both a tool and a liability.

A Paradox for the AI Age

Ultimately, the Minnesota lawsuit is emblematic of AI’s paradoxical nature: the very technology under scrutiny for potential misuse in elections may have inadvertently sabotaged the legal defense of the law meant to restrain it.

As Minnesota lawmakers and legal experts grapple with this high-tech twist, one thing is certain: the rise of AI is reshaping not just our politics but also our courtrooms. Whether this transformation will lead to greater accountability or more confusion remains an open question that may well define the digital age.
