The Many Lawsuits of ChatGPT, Part I: Inaccurate or Harmful Information

ChatGPT has demonstrated that generative AI technology has the power to revolutionize the way we work, interact, and communicate. But with great power comes not-so-great litigation. The various types of claims pending against OpenAI, the company that developed ChatGPT, range from defamation to copyright infringement to breach of open source software licenses to invasion of privacy.

Why does OpenAI keep getting sued? The easy answer is that OpenAI is an increasingly prominent company with a very popular product bankrolled by a deep-pocketed technology behemoth (i.e., Microsoft). That alone makes it a target for lawsuits. The complex answer (which prompts more complex questions, no pun intended) is that OpenAI’s technology raises some novel legal issues for which there are few, if any, direct precedents.

In this series of articles, we’ll provide an overview of the various types of claims OpenAI faces and the questions raised by them.

When Keeping it Not-So-Real Goes Wrong

A reporter used ChatGPT to research an article he was writing about a lawsuit involving a particular non-profit group. He asked ChatGPT to summarize the complaint, and ChatGPT provided a lengthy response claiming that the lawsuit accused one Mark Walters of embezzling funds while serving as the group’s treasurer.

In reality, Walters had never been employed by the group, had never been accused of embezzlement, and was not named or involved in the lawsuit. In fact, the lawsuit had nothing at all to do with embezzlement.

The reporter asked ChatGPT to reproduce the relevant portion of the complaint that included the allegations against Walters. ChatGPT responded with a completely fabricated paragraph that was nowhere to be found in the actual complaint. The reporter contacted Walters to verify the information provided by ChatGPT. Upon learning of ChatGPT’s false claims from the reporter, Walters sued OpenAI for defamation.

Generatively Unconvincing

To prevail on a defamation claim, a plaintiff must prove that the disparaging statement was “published,” i.e., that the defendant intentionally or negligently communicated it to another person. In this instance, Walters may face an uphill battle in proving that OpenAI acted negligently.

Though it’s unclear how Walters’ name ended up in ChatGPT’s responses, the sheer volume of data needed to train a generative AI model like ChatGPT makes it infeasible for OpenAI to verify every bit of information included in that training data. It’s similarly infeasible for OpenAI to ensure that all of the connections between people, events, and data drawn by ChatGPT are always accurate.

Perhaps for this reason, the ChatGPT chatbot includes a prominent disclaimer informing users that ChatGPT “[m]ay occasionally generate incorrect information” and “[m]ay occasionally produce harmful instructions or biased content.”

Do No Harm… Or At Least Try Not To

However, the Walters case does raise questions about the types of harm that may arise from human reliance on inaccurate or dangerous content provided by ChatGPT. It also invites debate over the responsibility of generative AI companies to minimize or mitigate the occurrence of such content. A disclaimer may not be enough to shield OpenAI from liability.

In the Walters case, ChatGPT created content that disparaged the plaintiff. It also incorrectly claimed that the content was an excerpt from an actual civil complaint. Walters may argue that OpenAI could easily implement a feature that cross-references a ChatGPT-generated excerpt of a document with the actual document to ensure accuracy. Walters may further argue that OpenAI was negligent in failing to implement such a feature. OpenAI may counter that implementing such a feature is indeed infeasible, as LLMs don’t necessarily retain verbatim copies of content found in their training data.
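
To make the feasibility question concrete, here is a minimal sketch of what such a cross-referencing check might look like, assuming the system could retrieve the full text of the cited document, which is precisely the assumption OpenAI might dispute. The function name and matching threshold are purely illustrative and do not describe any actual OpenAI feature; the sketch uses Python’s standard difflib module for approximate matching.

# Hypothetical sketch only: check whether a generated "excerpt" actually appears,
# verbatim or nearly verbatim, in the source document it claims to quote.
# Nothing here reflects an actual OpenAI feature; names and threshold are illustrative.
from difflib import SequenceMatcher


def excerpt_appears_in_source(excerpt: str, source_text: str, threshold: float = 0.9) -> bool:
    """Return True if the excerpt closely matches some passage in source_text."""
    window = len(excerpt)
    step = max(1, window // 4)
    best_ratio = 0.0
    # Slide a window of the excerpt's length across the source and fuzzy-match each slice.
    for start in range(0, max(1, len(source_text) - window + 1), step):
        candidate = source_text[start:start + window]
        ratio = SequenceMatcher(None, excerpt.lower(), candidate.lower()).ratio()
        best_ratio = max(best_ratio, ratio)
        if best_ratio >= threshold:
            return True
    return best_ratio >= threshold


# Usage sketch: flag a quoted passage that cannot be found in the real complaint.
# complaint_text = open("complaint.txt").read()
# if not excerpt_appears_in_source(generated_excerpt, complaint_text):
#     print("Warning: quoted passage not found in the source document.")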

Such arguments implicate the long-standing conundrum of balancing the need to facilitate and encourage innovation with the need to prevent harm. History has shown us that when a new game-changing technology gains widespread adoption, unanticipated problems follow and often become endemic. This leads to the creation and adoption of rules aimed at solving those problems without compromising the technology’s effectiveness.

For example, the rise of social media led to the phenomenon of fake news and viral misinformation. To combat that problem, social media companies began to develop moderation and automated fact-checking tools, which prompted governments to enact laws that regulate how social media companies deploy such tools. Analogous regimes may emerge to limit the occurrence or impact of misinformation in AI-generated content and to sanction companies that don’t comply.

Ultimately, regardless of their outcomes, lawsuits like the Walters case and the publicity they generate will surely influence the debate among regulators, legislators, and courts over how generative AI should be regulated. The extent to which generative AI companies should be held responsible for the content they generate will be a crucial issue in that debate.

Stay tuned for Part II!

Who is Dev Legal?

Sabir Ibrahim

Managing Attorney

During his 18-year career as an attorney and technology entrepreneur, Sabir has advised clients ranging from pre-seed startups to Fortune 50 companies on a variety of issues at the intersection of law and technology. He is a former associate at the law firm of Greenberg Traurig, a former corporate counsel at Amazon, and a former senior counsel at Roku. He also founded and ran a managed IT services provider that served professional services firms in California, Oregon, and Texas.

Sabir is also co-founder of Chinstrap Community, a free resource center on commercial open source software (COSS) for entrepreneurs, investors, developers, attorneys, and others interested in open source software entrepreneurship.

Sabir received his BSE in Computer Science from the University of Michigan College of Engineering. He received his JD from the University of Michigan Law School, where he was an article editor of the Michigan Telecommunications & Technology Law Review.

Sabir is licensed to practice in California and before the United States Patent & Trademark Office (USPTO). He formerly held the Certified Information Privacy Professional (CIPP/US) credential.

What can Dev Legal do for you?

Areas Of Expertise

We aim to advise clients in a manner that minimizes noncompliance risks without compromising operational efficiency or business interests. The areas in which we assist clients, either alone or in collaboration with affiliates, include:

Technology License Agreements

Drafting, reviewing, and negotiating software licenses, SaaS agreements, and other technology contracts.

Open Source Software Matters

License compliance, contribution policies, and open source business strategy.

SaaS Agreements

Subscription agreements, terms of service, and service level agreements for cloud-based services.

Intellectual Property Counseling

Trademark, copyright, and patent strategy for technology companies.

Product Counseling

Legal review of product features, marketing materials, and compliance with regulations.

Terms of Service and Privacy Policies

Creating and updating legal documents for websites and applications.

Assessment of Contractual Requirements

Reviewing obligations and ensuring compliance with complex agreements.

Information Management Policies

Data governance, retention policies, and information security procedures.

Risk Mitigation Strategy

Identifying legal risks and developing strategies to minimize exposure.

Join Our Email Newsletter And Receive Our Free Compliance Explainer

Our one-page Dev Legal Compliance Explainer is an easy-reference guide to understanding compliance concepts for you or your clients. Our email newsletter includes information about news and recent developments in the technology regulatory landscape and is sent approximately once a month.

Contact Us

Get In Touch

Phone

510.255.3766

Mail

PO Box 721
Union City, CA 94587