April 9, 2024
These best practices cover AI issues authors may encounter in the writing and publishing process, including the use of AI tools to assist writing, the disclosures owed to publishers and readers when AI is used, and more.
We encourage authors to review and adopt these suggested best practices around AI where relevant.
To learn more about the Authors Guild’s policy positions and advocacy efforts around AI and authors’ rights, please see our artificial intelligence FAQs.
Whether or not you use generative AI as a tool in your writing is a personal decision. However, we believe that authors should observe ethical ground rules, given the serious threats that widespread, unaccountable use of generative AI poses to the writing profession. While AI could be beneficial as a tool for authors, the main large language models (LLMs) are, for now, built on infringement, and it is in the professional interest of all writers to commit to maintaining standards that allow all writers to practice their vocation successfully.
Below are our suggestions for using generative AI ethically:
If you use text generated by AI in your manuscript, you need to disclose it to your publisher. Under most publishing contracts, authors represent and warrant that the work submitted will be original to them. The inclusion of more than de minimis AI-generated text in the final manuscript will violate this warranty, as such text is not considered “original” to the author. Similarly, an entirely AI-generated plotline or the wholesale adoption of AI-generated characters may violate this term of the contract. It is important to know that any expressive elements generated by AI that you incorporate in your work are not protected by copyright and need to be disclaimed in the application for copyright registration.
As such, if you contemplate using AI-generated elements in your work (beyond spell-checking, grammar correction, or other minor elements), you should discuss your AI use with your agent, if you have one, as well as your publisher, so they can amend or waive the warranty in your contract or otherwise tailor its terms appropriately. Publishers also need to know if anything in the book is not copyrightable (AI-generated material is not), so they can register it properly and know how best to protect your book. Many publishers are developing specific rules around authors’ use of AI, so ask your editor whether your publisher has any special guidance and carefully review any rules.
If you use AI in your writing, you need to disclose it to your readers. As noted above, the ethical use of AI in writing books and journalism requires disclosure to the reading public whenever substantial portions or elements of a work are AI-generated. For instance, if an appreciable amount of AI-generated text is incorporated in a manuscript with minimal revision, or if AI is used substantially to generate plot and characters (for instance, if you use AI to generate a detailed outline of plot, setting, and characters and follow that outline in your writing), that should be disclosed in some manner in the book or article. The disclosure could be made by crediting the AI as an “author”; in the front matter, introduction, or acknowledgments of a book; at the bottom of an article; or as part of an article’s byline.
We believe that disclosure of substantially AI-generated content is important for the continued vitality of our literary culture, and we are lobbying for legislation that would make disclosures of AI-generated content mandatory.
If you publish a book using KDP, you need to disclose AI use to Amazon. Under current Amazon terms, you need to disclose “AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP.” Amazon defines AI-generated content as “text, images, or translations created by an AI-based tool,” and requires disclosure even if the content was substantially edited. Amazon does not require disclosure when AI is used as a tool to “edit, refine, error-check, or otherwise improve” content that you created. As of the last edit of these guidelines, Amazon does not make these disclosures public, but we hope it will change this policy so that consumers know when they are purchasing an AI-generated book.
Fine-tuning an AI model on your own works and using the fine-tuned model to generate new material (e.g., a new book in a series, or a new book in your own style) arguably raises fewer ethical concerns, since the expression being generated is based on your own work rather than the work of others. That said, fine-tuning is done on top of a foundational large language model that in all likelihood was trained on mass copyright infringement: the foundational LLMs available to the public today were all developed from unauthorized mass copying. That may change soon as new companies enter the field or existing ones start licensing, and the new “Fairly Trained” certification* will allow you to know which LLMs are not infringing; but for now, please consider the harm to the whole ecosystem when using generative AI. The Authors Guild is working toward licensed and controlled use of authors’ work, and once we have LLMs trained wholly on licensed material, this will not be a concern. Further, as an ethical matter, we believe that disclosure of AI use is still warranted when you input your own work to fine-tune an AI in order to create something in your own style.
*The Authors Guild is a supporter of Fairly Trained.
In an earlier version of this best practice, we referred to fine-tuning an AI model as training an AI model, which led to confusion as to whether we were suggesting that authors can train their own LLMs. At this time, it is not feasible for authors to train an LLM themselves, since the computing resources required are enormous. Fine-tuning, by contrast, is the process of further training an existing foundational model on a smaller, specific dataset so that it performs particular functions or produces output in a particular style.
Whether or not an author chooses to use AI to generate cover art or narrate an audiobook is a personal decision. However, authors should be mindful of the impact of generative AI on their peers in the creative industries. Many image and voice AI models are built using unlicensed images and artwork, though there are exceptions, such as Adobe Firefly, which uses licensed images as training data. If you are going to use AI to create cover art or generate an audiobook, it is better to use a program or service built on licensed content rather than one built on copyright infringement.
The Authors Guild represents translators as well as authors, and we are deeply concerned about the impact of AI tools on the profession. We encourage authors to continue to work with and support human translators whenever possible instead of relying on AI tools.
We have drafted a model clause that authors and agents can use in their negotiations that prohibits the use of an author’s work for training AI technologies without the author’s express permission. Many publishers are agreeing to this restriction, and we hope it will become the industry standard.
Keep in mind, however, that this clause is intended to apply only to the use of an author’s work to train AI, not to prohibit publishers from using AI to perform common tasks such as proofing, editing, or generating marketing copy. As expected, publishers are starting to explore AI as a tool in the usual course of their operations, including editorial and marketing uses, so they may not agree to contractual language barring AI use generally. Those internal, operational uses are very different from using the work to train AI that can create similar works, or from licensing the work to an AI company to develop new models: they do not raise the same concern of authors’ works being used to build technologies capable of generating competing works.
We have recommended clauses in which publishers agree not to use AI to translate a book, produce its cover art, or narrate its audiobook without the author’s permission. While we have heard that some publishers are rejecting an outright prohibition on using AI for translations, cover art, and audiobooks, publishers are sometimes granting authors a right of approval over the translator, cover designer, and narrator of their book, which effectively gives authors the ability to reject AI translation and narration.