AI-Generated Text Raises Questions in Minnesota Deepfake Law Case

Bizbooq

November 22, 2024 · 4 min read

A federal lawsuit over Minnesota's "Use of Deep Fake Technology to Influence An Election" law has taken a surprising turn, with an affidavit submitted in support of the law appearing to contain AI-generated text. The development has raised concerns about the influence of AI in legal proceedings and the potential for misinformation to shape policy decisions.

The affidavit in question was submitted by Jeff Hancock, founding director of the Stanford Social Media Lab, at the request of Attorney General Keith Ellison. However, lawyers challenging the law have pointed out that the document cites non-existent sources, including a 2023 study published in the Journal of Information Technology & Politics titled "The Influence of Deepfake Videos on Political Attitudes and Behavior." Despite thorough searches, no record of this study or another cited source, "Deepfakes and the Illusion of Authenticity: Cognitive Processes Behind Misinformation Acceptance," could be found.

The lawyers suggest that the citations bear the hallmarks of an artificial intelligence (AI) "hallucination," fabricated by a large language model such as ChatGPT. This raises questions about how the fictional sources ended up in the affidavit and whether Hancock knew AI had been involved in drafting it. Hancock has not responded to requests for comment, leaving the circumstances unclear.

The implications are significant. If AI-generated text can be presented as factual evidence in a sworn affidavit, it calls the integrity of the fact-finding process into question and opens the door for fabricated sources to shape policy. The episode underscores the need for greater transparency about how AI tools are used in preparing legal documents.

This is not the first time AI-generated text has surfaced in a courtroom. A lawyer was recently sanctioned after using ChatGPT to generate case citations that turned out to be bogus, an incident that prompted courts to scrutinize AI-assisted filings more closely.

The irony is hard to miss. The Minnesota law at issue prohibits the use of deepfake technology to influence elections, a genuine threat to the democratic process that policymakers and legal professionals are right to take seriously. Yet an affidavit defending that law may itself contain fabricated, AI-generated material, undermining the very credibility the state is trying to protect.

As AI tools become more widespread, policymakers, legal professionals, and the public will need clear standards for when and how AI-generated text may be used in legal proceedings, and for disclosing that use. The Minnesota case serves as a warning: without such safeguards, hallucinated sources can slip into the evidentiary record and distort decisions that depend on it.
