Authors Are Suing Grammarly—and the Stakes Go Far Beyond One App

Grammarly, an AI-powered writing and editing tool used by millions, is facing a class action lawsuit on behalf of hundreds of authors whose names were used by Grammarly without their consent.

Grammarly launched a feature called “Expert Review” in August 2025, which provided AI-generated editing suggestions purportedly from various high-profile writers, both living and dead, including Stephen King, Neil deGrasse Tyson, Kara Swisher, and Carl Sagan. For example, if a user entered a draft news article, a series of pop-ups would offer writing tips that, at least according to Grammarly’s AI, might be offered by well-known journalists in that field.

When the writers whose names (and presumably work) were used found out, they were outraged and complained to the company, which recently discontinued the feature. None of these writers ever consented to having their names used in this way. And in many cases, the advice attributed to them was not something the real author would ever give, which could cause real harm to their professional reputations.

Investigative reporter Julia Angwin described how the advice attributed to her included a suggestion that the user incorporate an anecdote involving a fictional person. That suggestion, she notes, “is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).”

The Lawsuit

This month, Angwin filed a class action lawsuit in federal court in New York challenging “Grammarly’s misappropriation of the names and identities of hundreds of journalists, authors, writers, and editors to earn profits for Grammarly and its owner, Superhuman.” The suit asserts that Grammarly’s actions violate New York’s and California’s right of publicity laws, which prohibit the use of someone’s name or image for commercial purposes without their consent.

A Pattern of Exploitation

This is a second slap in the face, and it will be disastrous for writers if not stopped. AI companies not only stole authors’ works to create their machines; now they are letting those machines be used to rob authors of much-needed future earnings from the new tools.

More broadly, Grammarly’s misguided effort to monetize authors’ names and reputations is the latest reported attempt by tech companies to exploit the life’s work of creative professionals. Grammarly’s AI features use GPT, which, like all of the major foundational language models, was developed by ingesting vast numbers of authors’ works downloaded from pirate sites without permission. Now tech companies, large and small, are using those illegally and unethically trained models to further squeeze value out of writers’ labor without compensation. As the Guild has been arguing for years, the use of AI models to generate knockoff materials in the style of particular authors poses a severe threat to the writing profession. The injury is compounded exponentially when companies like Grammarly misappropriate not only the authors’ work but also their name and reputation.

Because the major large language models were all trained on millions of books without licenses (and without the restrictions such licenses would naturally impose, such as prohibiting outputs that recognizably include a writer’s works or name), they can be used by third-party businesses to reproduce and exploit authors’ works in ways that infringe their rights and devalue their work. These uses rob authors of income they should be entitled to earn by licensing such uses themselves. For instance, Amazon recently launched “Ask this Book,” an AI feature that lets a reader converse with a book, much like the AI-enhanced books we foresaw becoming a new market and source of earnings for authors and publishers. Amazon can do this because it relies on an underlying AI model that was trained on the books.

To be clear, the Authors Guild is not opposed to innovative uses of AI technology that provide useful tools to the public or help writers find new audiences. Indeed, we were excited about the possibility of new income for writers from licensing their work for new AI-enabled uses. But new businesses are jumping in and usurping those potential income streams without permission or payment, simply because they can: the foundational models already know the writers’ works and can produce the desired outputs without ever involving the author or publisher.

But authors must have the right to share in the benefits of AI-enabled platforms, which could not exist without the books, articles, and other works they have devoted their lives to creating.

What the Law Must Do

The law must ensure that authors have sufficient tools to remedy these violations. While most states have right of publicity laws, the Guild has long supported a federal law to ensure that authors’ remedies do not depend on the state in which they happen to live. And while we support recent legislative proposals like the No Fakes Act to prohibit the use of digital replicas without consent, we have urged Congress to strengthen those approaches by adding express protections against unauthorized uses of authors’ names.

The Authors Guild applauds Angwin and her attorneys for bringing this important action. As Angwin herself made clear, the stakes could not be higher for any writer who has devoted their career to the truth. We stand ready to assist in any way we can.