FAQs on the Authors Guild’s Positions and Advocacy Around Generative AI

1)

The Authors Guild believes that it is inherently unfair to use and incorporate books, articles, and other copyrighted works in the fabric of AI technologies without authors’ consent, compensation, or credit. While generative AI technologies capable of generating text and other content can be useful tools for writers, guardrails around their development and use are urgently needed to protect the writing profession and literary culture. There is a serious risk of market dilution from machine-generated works that can be cheaply mass-produced and will inevitably lower the value of human-authored works. We need to safeguard the incentives that fuel the creation of a rich and diverse literary culture—incentives so vital to our democracy that they are inscribed in the Constitution.

2)

The Authors Guild is specifically lobbying for laws, regulations, and policies regarding:

  • Consent: Require permission for the use of writers’ works in generative AI;
  • Compensation: Compensate authors who wish to allow their works to be used in the “training” of generative AI;
  • Transparency: Create transparency obligations for AI developers to disclose what works they use to “train” their AI;
  • Use in outputs: Require permission and establish compensation for authors when their works are used in outputs, or when their names or identities or titles of their works are used in prompts—whether through adding a new economic right under copyright law or as a sui generis right, and/or through a broad, well-articulated federal right of publicity law;
  • Label AI-generated content: Require authors, publishers, platforms, and marketplaces to label AI-generated works and otherwise identify when a significant portion of a written work has been generated by AI.

In addition to lobbying for these legal and regulatory changes, the Authors Guild strongly opposes efforts to make AI-generated content protectible under existing copyright law or under a new, limited sui generis right. The Authors Guild believes that granting AI-generated content protection under existing copyright rules or under a new right would exacerbate the threat of AI-generated content flooding the markets.

Note: We use the term “train” to refer to AI developers’ use of pre-existing works in developing their AI only because it has become the standard shorthand. That said, we have reservations about the word because it makes the use of works sound like a one-time event and serves to anthropomorphize machines—as if they were simply “reading” or “observing” texts and other works. The reality is that the works are actually used to build the AI and remain part of its fabric. There is no generative AI without the material—mostly in-copyright works—that the AI is “trained” on.

3)

It is not efficient or even practicable for AI companies to seek licenses from each individual author who owns the rights to their works. So, the Authors Guild is proposing to create a collective license whereby a collective management organization (CMO) would license out rights on behalf of authors who choose to register with the CMO, negotiate fees with the AI companies within the limits set by the authors, and then distribute those payments to authors. These licenses could cover past uses of books, articles, and other works in AI systems, as well as future uses. The latter would not be licensed without a specific opt-in from the author or other rights holder.

Collective licensing is an established concept and an effective means of paying creators and publishers where individual licensing would create market inefficiencies. For many years now, the Authors Registry and the Authors Coalition of America have distributed royalties received from foreign collective licenses to U.S. authors.

4)

One or more collective rights management organizations (CMOs) will have to be established, or an existing one—e.g., the Authors Registry or the Copyright Clearance Center (CCC)—will be expanded, to negotiate licenses with AI companies and distribute the amounts collected to writers.

The CMO would demand fair and reasonable compensation for the use of texts whose rights are owned by the writers, likely collected on an annual basis.

5)

The CMO would represent the interests only of “professional” writers and only for the use of books and articles—not, say, for every social media post that anyone has written. Proof of publication or membership in a professional organization for writers, for instance, could make one eligible for representation by the CMO.

6)

Licenses for uses that have already occurred

Once works have been used to “train” AI, they become part of the fabric of the AI and cannot be effectively removed. For the past “training” of what are called the “foundational” models—such as OpenAI’s GPT, Google’s LaMDA and PaLM, and Meta’s LLaMA—the CMO will seek compensation on an annual basis going forward for as long as those foundational models are in use. “Training” AI on pre-existing works is not a one-time event; the works are constantly being used to continue to “train” the AI. As such, payment should continue as long as the AI model is in use. The collective licensing system will be opt-in, and only those works enrolled in the license will be authorized for training and entitled to compensation.

Licenses for future uses

The CMO would also offer licenses to AI companies for text-based works moving forward, as described above, on an opt-in basis—meaning only on behalf of those authors who specifically authorize the CMO to license their works.

7)

We believe that text-generating AI technologies would not exist without the works they were “trained” on, and we are determined to secure compensation that reflects those contributions, not just pennies on the dollar. It is important to bear in mind that, unlike Spotify, the license will be for a subsidiary (not primary) use and that any fee charged to the AI companies will be divided among all participating authors and publishers. The amount should be significant enough that all authors whose works were used feel the benefit. The CMO will consult with experts to value the use of the works in the AI systems and will negotiate rates accordingly.

8)

In addition to compensating writers for the use of their works in “training” AI, AI companies should prevent the use of creators’ names, their writings (or portions thereof), the titles of their works, or characters and detailed plot lines from their works in prompts without the creators’ express permission. And where writers permit such use, they need to be compensated.

The CMO could license those rights and collect and distribute the fees on behalf of the writers who wish to permit the use and be compensated, making it possible for writers to earn additional income.

It would be helpful to create a new economic right, whether under copyright law or as a sui generis right, to ensure that AI companies obtain permissions for these kinds of uses. A well-articulated federal right of publicity law—for which the Authors Guild is lobbying—would also help.

9)

As the collective licensing system will be opt-in, authors who do not want their works licensed can simply decline to opt in.

10)

Once a work is used to “train” AI, it becomes part of the program and for now cannot simply be extracted. So, opting out of training that has already occurred is not really an option, nor are courts or Congress likely to tell the owners of the foundational LLMs that they need to start over. That is why the Authors Guild is seeking compensation for the use of authors’ works in training existing AI. We strongly believe that permission should be obtained first, and we do not want to condone the use that has already occurred, but we also know that we cannot wind back the clock. We want to make sure that authors are paid for the use of their works in existing LLMs.

11)

On September 19, 2023, the Authors Guild and 17 authors filed a class-action suit against OpenAI in the Southern District of New York, alleging copyright infringement on behalf of a class of fiction writers whose works have been used to train GPT.

12)

Many authors recently discovered that their books were used without permission to “train” AI systems. If your books are in the Books3 dataset, the Guild has outlined what you need to know and the actions you can take now to speak out in defense of your rights.

13)

The risks to the writing profession from generative AI technologies require a multi-faceted response. Collective licensing does not address all of these risks, but it does provide a starting point for giving authors control over uses of their works and putting money back in their pockets.

As part of its advocacy, as noted above, the Authors Guild is also asking Congress to:

  • Require AI-generated content to be labeled as such. Labeling would prevent AI-generated content from being passed off as human-written, and consumers have a right to know when what they are reading was machine-generated.
  • Require AI companies to disclose what copyrighted materials they used to “train” their AI.
  • Create a well-articulated federal right of publicity law that would give creators the right to sue for the unauthorized use of their names or other identifying information in prompts and AI outputs.

In addition to our advocacy and lobbying around AI, the Authors Guild is doing the following:

  • Contract clauses: The Authors Guild has released new contract clauses that aim to prevent the use of books in “training” generative AI without an author’s express permission. In addition, the Authors Guild’s new clauses require publishers to get an author’s written consent before using AI-generated book translations, audiobook narration, or cover art. The Guild’s Model Trade Book Contract and Literary Translation Model Contract have been updated to include the new clauses, and the Guild is encouraging authors and agents to ask for their inclusion in contracts with publishers, and urging publishers to adopt the clauses.
  • Educational programs: The Authors Guild is expanding its educational programs not only to cover the issues created by generative AI technologies but also to equip authors with the skills to use these technologies to their advantage. In upcoming webinars, it will show how authors can incorporate generative AI into their writing process effectively and ethically.
  • Best practices for authors and publishers: Some issues created by generative AI can’t be addressed through legislation or regulation—these require a commitment among authors and publishers to use the technology responsibly. It is incumbent on all members of the writing and publishing industry to be thoughtful and transparent about using generative AI. The Authors Guild is working on establishing ethical guidelines and best practices to help authors and publishers navigate the terrain.