U.S. Copyright Office Releases Part 2 of AI Report: What Authors Should Know
February 6, 2025
In developing a national Artificial Intelligence (AI) Action Plan—a comprehensive strategy to guide AI innovation and regulation in the United States—the Office of Science and Technology Policy (OSTP) and the Networking and Information Technology Research and Development (NITRD) Program requested comments from interested parties on a broad swath of issues related to AI. The Authors Guild submitted formal comments on issues affecting authors in response.
Our comments emphasize that while AI offers valuable benefits and can serve as a tool for creators, the current AI development models pose significant threats to authors and the creative economy unless guardrails are enforced. We focused on three main concerns:
The major AI companies have built their large language models (LLMs) by scraping and ingesting vast quantities of copyrighted works without authorization or compensation. These include hundreds of thousands of books obtained from pirate websites, plus millions of news articles and other professionally written content.
This practice represents a massive transfer of wealth from creators to tech companies. The value of any AI system depends directly on the quality of materials used to train it. As OpenAI CEO Sam Altman recently acknowledged when announcing a new AI model “good at creative writing,” these systems can only generate high-quality output because they’ve been trained on professionally written, copyrighted material.
The harm continues on the output side, where AI systems are now being used to flood markets with machine-generated content that directly competes with the work of human authors.
All this comes at a time when writers are already facing unprecedented economic challenges. Between 2009 and 2018, authors’ median incomes dropped 42 percent. Our most recent survey found that the median writing-related income for full-time authors in 2022 was just over $20,000.
AI companies have attempted to defend their unauthorized copying under the “fair use” doctrine—a claim currently being challenged in nearly 40 copyright infringement lawsuits, including a class action led by the Authors Guild.
Rather than supporting new copyright exceptions or compulsory licenses, we urged the Administration to support voluntary, freely negotiated licensing arrangements between copyright owners and AI companies—a market that already exists and is growing rapidly. Publishers such as Wiley, Oxford University Press, and HarperCollins, as well as news organizations including the Associated Press, NewsCorp, and The Atlantic, have already entered into licensing agreements with AI companies. Entities like Created by Humans have entered the marketplace to make it possible for self-published and trade authors who own their AI training rights to license their works to AI companies at scale.
To further facilitate these voluntary agreements, we recommended legislation in three key areas:
Transparency: Companies that make generative AI models commercially available should be required to disclose what copyrighted works were used in their training datasets and where they obtained them.
Labeling of AI Outputs: Content that is generated or significantly manipulated by AI should be clearly labeled as such, allowing consumers to make informed choices.
Antitrust Exemption: Authors should be permitted to form their own cooperatives to collectively manage and efficiently license their works for AI training, similar to those that exist in the music industry.
We strongly oppose extending copyright or similar rights to AI-generated works. Such protection would undermine the market for human-created works without providing any benefit to society. The tech companies developing AI already have ample incentives through existing patent, trade secret, and copyright protections for their technologies.
As the federal government develops its AI Action Plan, the Authors Guild will continue advocating for policies that respect creators’ rights while allowing for beneficial technological innovation. We believe future development of generative AI systems must be based on mutually beneficial partnerships between technology companies and creative professionals, rather than the exploitative model we’ve seen thus far.