AI Best Practices for Authors

The Authors Guild recommends the following best practices for authors using artificial intelligence in the writing and publishing process. Topics include using AI tools to assist writing, disclosing AI use to publishers and readers, and more.

To learn more about our policy positions and advocacy efforts around AI and authors’ rights, please see our artificial intelligence FAQs.

Generative AI and the Writing Process

Generative AI is a ubiquitous technology these days, and writers are already using it in various ways as a tool or an aid in the writing process. For instance, some writers use generative AI to research, outline, and brainstorm, or even as a writing partner to generate characters or text to include in their manuscripts.

If writers choose to use generative AI, they should be aware of and observe some ethical ground rules to protect both their own personal and professional interests and the future of their profession. The unauthorized, unrestricted, and uncompensated use of authors' works to train generative AI has created tools that are used to displace professional writers, and it poses a serious risk of flooding markets and diluting the value of human-written work.

For starters, please be aware that, for now, all of the major large language models (LLMs), the generative AI for text, are based on hundreds of thousands of books or more and countless articles stolen from pirate websites. This is the largest mass copyright infringement of authors' works ever, and it was done by some of the richest companies in the world. It is theft, a transfer of wealth from middle-class creators to the coffers of billionaires, and we are fighting against it.

As such, we do not condone any use of unlicensed LLMs in the regular course of writing until the AI companies do the right thing and license the books and journalism they use to train their AI. Licensing is how copyright works: It enables creators to charge money for the use of their work and insist on certain limits and restrictions (such as preventing competing outputs). It is in all of our professional interests to insist on licensing, compensation, and control and to maintain standards that promote a fair marketplace.

We believe that licensing, not theft, will increasingly become the norm as new companies enter the field or existing ones start licensing; and the new "Fairly Trained" certification, which the Authors Guild supports, will allow you to know which LLMs are not infringing. Until then, please consider the harm to the total ecosystem when using generative AI.

Using Generative AI Ethically

Below are our recommended best practices and explanations for using generative AI ethically:

  1. Do not use AI to write for you. Use it only as a tool, a paintbrush for writing. It is your writing, thinking, and voice that make you the writer you are. AI-generated text is not your authorship and not your voice. Even if trained on your own work, AI-generated text is simply a regurgitation of what it was trained on and adds nothing new or original to the world. By definition, it is neither original nor art. When you use AI to generate text that you include in a work, you are not writing; you are prompting. Choosing to be a professional prompter is not the same as being a writer, and the output is not authorship or creative. Use AI to support, not replace, the creative process.
  2. If you do use AI to develop storylines or characters, or to generate text, be sure to rewrite the material in your own voice before adopting it. If you are claiming authorship, then you should be the author of your work.
  3. If you incorporate AI-generated text, characters, or plot in your manuscript, you must disclose it to your publisher, as publishing contracts require authors to represent and warrant that the manuscript is original to the author. AI-generated material is not considered "original" to you, and it is not copyrightable. Inclusion of more than a very minimal amount of AI-generated text in the final manuscript will violate your warranty to the publisher. Similarly, an entirely AI-generated plotline or wholesale adoption of AI-generated characters may violate this term of the contract. It is important to know that any expressive elements generated by AI that you incorporate in your work are not protected by copyright and must be disclaimed in the application for copyright registration; your publisher needs that information to register the copyright correctly. If you contemplate using AI-generated material in your work (other than minor editorial changes resulting from grammar or spell-checking), you should discuss it with your publisher and see if they will waive the warranty.
  4. You should also disclose to readers whether you incorporated any AI-generated content in the book. They have a right to know, as many will feel duped if they are not advised. It is not necessary, though, to disclose the use of generative AI tools such as grammar checkers, or when AI is employed merely as a tool for brainstorming, idea generation, research, or copyediting. But if an appreciable amount of AI-generated text and content is incorporated in a manuscript with minimal revision, that should be disclosed in some manner. The disclosure could be made by including the AI as an "author"; in the front matter, introduction, or acknowledgments of a book; at the bottom of an article; or as part of the byline for an article.
  5. Be mindful of publisher- and platform-specific policies regarding AI use. Many publishers are developing specific rules around authors' use of AI, so you should ask your editor whether your publisher has any special guidance, and carefully review any rules. If you publish a book using KDP, you need to disclose AI use to Amazon. Under current Amazon terms, you must disclose "AI-generated content (text, images, or translations) when you publish a new book or make edits to and republish an existing book through KDP." Amazon defines AI-generated content as "text, images, or translations created by an AI-based tool," and requires disclosure even if the content was substantially edited. Amazon does not require disclosure for "AI-assisted" works, where AI is used as a tool to "edit, refine, error-check, or otherwise improve" content that you created. Amazon is not making these disclosures of AI-generated content public as of the last edit of these guidelines, but we hope it will change this policy in the future.
  6. Use the Authors Guild's Human Authored Certification mark for books that contain no AI-generated text, as a way to let readers know the book was entirely human-written. Readers will appreciate knowing.
  7. Respect the rights of other writers when using generative AI technologies, including copyrights, trademarks, and other rights, and do not use generative AI to copy or mimic the unique styles, voices, or other distinctive attributes of other writers' works in ways that harm those works. (Note: doing so could also give rise to claims of unfair competition or infringement.)
  8. Thoroughly review and fact-check all content generated by AI systems. As of now, you cannot trust the accuracy of any factual information provided by generative AI. All chatbots now available make information up, or "hallucinate"; they are text-completion tools, not information tools. Also check for potential biases in the AI output, whether gender, racial, socioeconomic, or other biases that could perpetuate harmful stereotypes or misinformation.
  9. "Fine-tuning" an AI model on your own work to generate new material (e.g., a new book in a series, or a new book in your own style) arguably raises fewer ethical concerns, since the expression being generated is based on your own work rather than the work of others. (Fine-tuning is the process of further training an existing foundational LLM on a specific dataset so that the resulting model performs specialized functions.) That said, the fine-tuning is done on top of a foundational large language model that in all likelihood was trained and developed on mass copyright infringement. Further, as an ethical matter, we believe that disclosure of AI use is still warranted when you input your own work to fine-tune AI in order to create something in your own style.
  10. Show solidarity with and support professional creators in other fields, including voice actors and narrators, translators, and illustrators, as they also need to protect their professions from generative AI. If you choose to use AI to generate cover art or illustrations, be mindful of the impact of generative AI on your peers in the creative industries. Many image models are built using unlicensed pictures and artwork, though there are exceptions, such as Adobe Firefly, which uses licensed images as training data. Similarly, while many voice models are built on unlicensed recordings, Amazon, Audible, and other audiobook platforms are using licensed digital "voice replicas" of actors, ensuring that the narrators get paid. If you are going to use AI to create cover art or generate an audiobook, it is better to use a program or service that uses licensed content rather than one built on copyright infringement. The Authors Guild represents translators as well as authors, and we are deeply concerned about the impact of AI tools on the profession. We have recommended model contract clauses to prevent publishers from using AI translation, cover design, and narration without the author's approval. We encourage authors to continue to work with and support human translators whenever possible instead of relying on AI tools.
  11. Assert your rights in your contract negotiations with publishers and platforms. We have drafted a model clause that authors and agents can use in their negotiations to prohibit the use of an author's work for training AI technologies without the author's express permission. Many publishers are agreeing to this restriction, and we hope it will become the industry standard. Keep in mind, however, that this clause is intended to apply only to the use of an author's work to train AI, not to prohibit publishers from using AI to perform common tasks such as proofing, editing, or generating marketing copy. As expected, publishers are starting to explore using AI as a tool in the usual course of their operations, including editorial and marketing uses, so they may not agree to contractual language disclaiming AI use generally. Those internal, operational uses are very different from using the work to train AI that can create similar works, or from licensing the work to an AI company to develop new AI models, and they don't raise the same concerns about authors' works being used to create technologies capable of generating competing works.

Revised May 19, 2025