
AI and Defamation

Mar 25, 2024 | Internet Defamation

As AI gains in both popularity and notoriety, the legal implications of the technology have begun to emerge. Users have found that ChatGPT may generate false information in response to queries about well-known individuals. Because of this, some legal experts believe that AI companies should be held responsible for any defamatory content their products produce.

Section 230 of the Communications Decency Act shields internet companies from legal liability for content posted by their users. Many believe, however, that this protection should not extend to AI publishers, because an AI system generates content itself rather than merely hosting material created by third parties.

How Does AI Produce False Content?

AI publishers are powered by 'large language models': tools trained on vast collections of text that learn the structures and patterns of language and use them to produce narrative text in response to a user's query. What has become evident, however, is that this technology can also produce what are known as 'hallucinations': pieces of information that sound factual but have no basis in fact.
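
For readers curious about the mechanics, the sketch below is a highly simplified Python illustration, not any vendor's actual system, of why hallucinations happen: the model chooses each next word based on learned statistical likelihood, and nothing in that process checks the finished sentence against real-world facts. The word probabilities shown are invented purely for demonstration.

```python
# Toy sketch of next-word generation (illustrative only; probabilities invented).
import random

# Hypothetical learned probabilities: given the two preceding words,
# how likely each candidate next word is.
next_word_probs = {
    ("was", "accused"): {"of": 0.9, "by": 0.1},
    ("accused", "of"): {"fraud": 0.4, "embezzlement": 0.35, "negligence": 0.25},
}

def continue_text(words, steps=2):
    """Extend a word sequence by sampling statistically likely next words.

    Nothing here verifies whether the resulting claim is true, which is why
    fluent but false output (a 'hallucination') can appear.
    """
    for _ in range(steps):
        context = tuple(words[-2:])
        candidates = next_word_probs.get(context)
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(continue_text(["was", "accused"]))  # e.g. "was accused of embezzlement"
```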

For example, in 2023, two attorneys used generative AI to prepare legal briefs, later discovering that the AI-generated case law citations were completely fabricated.

Another published 'hallucination' has led to the first case of its kind, litigated in federal court in the Northern District of Georgia. In 2023, radio personality Mark Walters sued OpenAI for defamation. See Walters v. OpenAI, LLC, No. 1:23-cv-03122 (N.D. Ga.). Walters alleges that ChatGPT produced false statements about him, claiming that he had embezzled money from a gun-rights organization. The outcome of the case is yet to be determined.

Does AI Have a Legal Leg to Stand On?

Some believe that AI companies have credible defenses against defamation litigation. These potential defenses include:

1.) ‘Hallucinations’ do not result from human choice and thus cannot meet the “actual malice” or “reckless disregard” standards required for many defamation claims.

2.) Generative AI is experimental by its very nature. AI publishers state that anyone using their services should verify the accuracy of the content before relying on it in any way. With this understanding, defendants can argue that no reasonable person would take AI-generated content as a “statement of fact,” and content that is not a statement of fact cannot be libelous.

3.) AI technologies often provide disclaimers that the information they generate may be inaccurate; users, therefore, must ultimately take responsibility for any content they choose to publish.

4.) AI programs do not publish statements; they only generate content that a user may then choose whether or not to publish.

5.) Any allegedly false information produced by AI technology is likely the product of previously published material that is contained in the model’s training dataset.

Should AI Companies Be Held Responsible for Defamatory Content?

Some argue that if an AI company is alerted to the fact that its program is generating specific false and libelous content and takes no action, it is acting with “reckless disregard for the truth.” Companies may also be liable for negligence if flaws in a product’s design lead it to generate defamatory content that harms an individual.

One thing is clear: upholding defamation law in this new and ever-changing digital landscape won’t be an easy task. Though Section 230 has allowed for innovation and growth in the digital realm, the lack of guardrails around AI-generated content continues to be a problem.

Will AI companies ultimately be held responsible for false and defamatory content they produce, or will they be afforded the same protections that other internet companies are given under Section 230?

Only time will tell.