AI model providers notched two significant legal victories in late June 2025, as federal judges in San Francisco sided with Meta and Anthropic in high-profile copyright infringement cases brought by prominent authors. While these rulings represent important tactical wins for AI companies, they fall short of providing the definitive resolution that either side in this contentious battle was seeking. Instead, they signal a subtle but meaningful shift in leverage that could reshape how the broader conflict over AI training and copyright compensation unfolds across courtrooms, corporate boardrooms, and the halls of Congress.
Two Wins, Different Paths to Victory
The rulings came within days of each other, issued by the same San Francisco federal court but taking distinctly different approaches to the core questions at stake. On June 23, U.S. District Judge William Alsup handed Anthropic a mixed but ultimately favorable ruling in a case brought by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. Two days later, U.S. District Judge Vince Chhabria ruled in Meta's favor in a lawsuit brought by a group of well-known writers including comedian Sarah Silverman and acclaimed authors Jacqueline Woodson and Ta-Nehisi Coates.
The Anthropic decision broke new ground by becoming the first federal ruling to directly address the doctrine of fair use in the context of AI training. Judge Alsup found that Anthropic's use of copyrighted books to train its Claude large language model constituted fair use under U.S. copyright law, describing the process as "exceedingly transformative." In a particularly vivid analogy, Alsup wrote that "like any reader aspiring to be a writer, Anthropic's LLMs trained upon works not to race ahead and replicate or supplant them — but to turn a hard corner and create something different."
However, Alsup's ruling was not an unqualified victory for Anthropic. The judge found that the company's practice of downloading and storing more than 7 million pirated books in what he termed a "central library" did constitute copyright infringement and was not protected by fair use. This distinction—between the transformative act of training AI models and the potentially infringing act of acquiring copyrighted materials through piracy—may prove crucial as similar cases work their way through the courts.
The Meta case took a different trajectory entirely. Rather than issuing a broad endorsement of fair use, Judge Chhabria granted summary judgment to Meta on narrow grounds, finding that the 13 authors who sued had "made the wrong arguments" in their legal filings. Importantly, Chhabria was careful to note that his ruling "does not stand for the proposition that Meta's use of copyrighted materials to train its language models is lawful." Instead, he explained, "it stands only for the proposition that these plaintiffs made the wrong arguments and failed to develop a record in support of the right one."
The Leverage Shift: Subtle but Significant
While neither ruling definitively resolves the fundamental questions surrounding AI training and copyright law, together they represent a meaningful shift in the balance of power between AI companies and content creators. This shift manifests in several key ways that extend far beyond the immediate legal implications.
First, the Anthropic ruling provides AI companies with their first judicial endorsement of the fair use defense in the context of training AI models on copyrighted content. The decision, while not binding on other courts, offers a roadmap for how AI companies can frame their training practices in future litigation. The ruling's emphasis on the "transformative" nature of AI training aligns closely with arguments that tech companies have been making in courtrooms and policy debates for months.
The language Judge Alsup used to describe AI training—comparing it to a human reader learning from books to become a writer—provides AI companies with powerful rhetorical ammunition. This framing positions AI training not as wholesale copying or theft, but as a fundamentally creative and educational process that copyright law should encourage rather than restrict. Such language could prove influential as other judges grapple with similar cases and as policymakers consider potential legislative responses.
Second, the Meta ruling, despite its narrow grounds, sends a clear signal to copyright holders that victory in these cases is far from guaranteed. Judge Chhabria's decision to rule for Meta while suggesting that the authors could have prevailed with better legal arguments creates a complex dynamic. On the one hand, it preserves the possibility of future successful challenges to AI training practices. On the other hand, it demonstrates the high bar that copyright holders must clear to succeed in court.
The judge's pointed criticism of the authors' legal strategy, and his suggestion that future plaintiffs could succeed with better arguments, reveal the sophisticated legal maneuvering required to successfully challenge AI companies. That complexity favors defendants with deep pockets and experienced legal teams, and could discourage smaller copyright holders from pursuing litigation.
The Broader Battle Beyond the Courtroom
These legal developments are unfolding against the backdrop of a much larger struggle that extends far beyond individual lawsuits. The question of whether AI companies should compensate copyright holders for using their works in model training has become a flashpoint in debates over innovation policy, creative rights, and the future of human-AI collaboration.
In the marketplace, AI companies are leveraging their legal momentum to accelerate partnerships and licensing deals on terms increasingly favorable to them. The judicial validation of fair use arguments strengthens their negotiating position with publishers, media companies, and other content creators who might previously have demanded higher compensation or more restrictive terms for AI training licenses.
Meanwhile, the lobbying battle in Washington has intensified as both sides seek to shape potential federal legislation. AI companies argue that overly restrictive interpretations of copyright law could hamstring American competitiveness in the global AI race, potentially ceding leadership to countries with more permissive intellectual property regimes. Copyright holders counter that allowing unfettered use of their works without compensation threatens the economic foundation of creative industries and could ultimately diminish the very content that makes AI systems valuable.
The public discourse around these issues has also evolved, with the recent court victories providing AI companies with new talking points about the legality and legitimacy of their training practices. Meta's statement following its legal win exemplifies this approach: "Open-source AI models are powering transformative innovations, productivity and creativity for individuals and companies, and fair use of copyright material is a vital legal framework for building this transformative technology."
The Road Ahead: Uncertainty Amid Strategic Positioning
Despite these recent victories, the fundamental questions at the heart of the AI copyright debate remain unresolved. The Anthropic ruling's distinction between fair use in training and infringement in content acquisition suggests that future cases may turn on increasingly technical questions about how AI companies source their training data. Companies that can demonstrate they obtained copyrighted materials through legitimate means may find themselves on stronger legal ground than those that relied on pirated content.
Because the Meta ruling was decided on such narrow grounds, the core fair use questions in that case remain open, leaving room for future litigation built on different legal arguments. Judge Chhabria's evident reluctance to give AI companies a blanket license to copy protected works suggests that even sympathetic judges may be willing to impose limits on AI training practices under the right circumstances.
Looking forward, the leverage shift created by these rulings is likely to influence settlement negotiations in pending cases, shape the terms of future licensing agreements, and affect the strategic calculations of both AI companies and content creators. While AI companies can point to judicial validation of their fair use arguments, copyright holders can emphasize the courts' recognition that not all practices of AI model providers in accessing and using copyrighted works are automatically protected.
The ultimate resolution of these disputes may well come from Congress rather than the courts, as the complexity and novelty of the issues involved push against the boundaries of existing copyright doctrine. Until then, the battle will continue across multiple fronts, with each side seeking to leverage victories into broader strategic advantages in a conflict that will ultimately determine how the benefits and burdens of the AI revolution are distributed across society.
In its filing in support of its motion for summary judgment, Anthropic emphasized the revolutionary nature of LLM technology, describing Claude as a "radically transformational tool for creators of many kinds—writers, teachers, scientists, businesspeople and more—which enables new expression and innovation to flourish." This reasoning appears to have swayed Judge Alsup, who described AI training as a "spectacularly" transformative process. The question that remains is whether that revolution will be built on a foundation of fair compensation for the creators whose works make it possible, or whether the transformative nature of AI will be taken to justify a new paradigm where such compensation is neither legally required nor economically necessary. The recent court victories suggest that AI companies have gained important ground in that debate, but the war itself is far from over.