Radio Host Sues OpenAI for Libel After ChatGPT Accuses Him of Crime
Georgia radio host Mark Walters is suing OpenAI in a potentially precedent-setting defamation case after its massively popular ChatGPT accused him of embezzlement in The Second Amendment Foundation v. Robert Ferguson. The catch? Walters is not named in that case, nor has he ever worked for the Second Amendment Foundation.
"OpenAI defamed my client and made up outrageous lies about him," Mark Walters' attorney John Monroe told Decrypt, adding that there was no choice but to file the complaint against the AI developer. "[ChatGPT] said [Walters] was the person in the lawsuit and he wasn't."
Documents filed in the Superior Court of Gwinnett County, Georgia, claim that ChatGPT responded to an inquiry from journalist Fred Riehl, who gave the chatbot a URL pointing to the SAF v. Ferguson case and asked for a summary. The chatbot erroneously named Mark Walters as the defendant, the complaint says.
ChatGPT allegedly generated text saying the case "[i]s a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF." The text also claimed that Walters had misappropriated funds for personal expenses.
Riehl reached out to Gottlieb about ChatGPT's response, and Gottlieb said the statement was false, according to the court documents.
Walters is demanding a jury trial, unspecified general and punitive damages, and attorney’s fees.
While lawsuits against AI developers are still new legal territory, Monroe is confident his client will win.
"We wouldn't have brought the case if we didn't think we were going to be successful," he said.
But others are not as confident.
"For most claims of defamation within the United States, you have to prove damages," Cal Evans, in-house counsel for Stonehouse Technology Group, told Decrypt.
"Although the suit references the 'hallucinations,' it is not an individual communicating facts; it is software that correlates and communicates information on the internet," Evans said.
AI hallucinations are instances in which an AI generates untrue output that is not backed by real-world data. Hallucinations can take the form of false content, news, or information about people, events, or facts.
OpenAI includes a disclaimer in the ChatGPT interface that reads, "ChatGPT may produce inaccurate information about people, places, or facts."
"It is possible that [OpenAI] can cite that they are not responsible for the content on their site," Evans said. "The information is taken from the public domain, so already out in the public."
In April, Jonathan Turley, a U.S. criminal defense attorney and law professor, claimed that ChatGPT accused him of committing sexual assault. Worse, the AI made up and cited a Washington Post article to substantiate the claim.
This "hallucination" episode was followed in May, when Steven A. Schwartz, a lawyer in Mata v. Avianca Airlines, admitted to "consulting" the chatbot as a source while conducting research. The problem? The results ChatGPT provided to Schwartz were all fabricated.
"That is the fault of the affiant, in not confirming the sources provided by ChatGPT of the legal opinions it provided," Schwartz wrote in the affidavit submitted to the court.
In May, OpenAI announced a new training approach that the company hopes will curb the chatbot's habit of hallucinating answers.
"Mitigating hallucinations is a critical step towards building aligned AGI," OpenAI said in a post.
OpenAI has not yet responded to Decrypt’s request for comment.