Anthropic told a San Francisco federal judge on Friday that it has agreed to pay $1.5 billion to settle a class-action lawsuit from a group of authors who accused the artificial intelligence company of using their books to train its AI chatbot Claude without permission.
In a court filing, Anthropic and the plaintiffs asked US District Judge William Alsup to approve the settlement, after announcing the agreement in August without disclosing its terms or amount.
“If approved, this landmark settlement will be the largest publicly reported copyright recovery in history, larger than any other copyright class action settlement or any individual copyright case litigated to final judgment,” the plaintiffs said in the filing.
The proposed deal marks the first settlement in a string of lawsuits against tech companies including OpenAI, Microsoft and Meta Platforms over their use of copyrighted material to train generative AI systems.
As part of the settlement, Anthropic said it will destroy the downloaded copies of books the authors accused it of pirating. Under the deal, it could still face infringement claims related to material produced by the company’s AI models.
In a statement, Anthropic said the company is “committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery, and solve complex problems.” The agreement does not include an admission of liability.
Writers Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed the class action against Anthropic last year. They argued that the company, which is backed by Amazon and Alphabet, unlawfully used millions of pirated books to teach its AI assistant Claude to respond to human prompts.
The writers’ allegations echoed dozens of other lawsuits brought by authors, news outlets, visual artists and others who say that tech companies stole their work to use in AI training.
The companies have argued their systems make fair use of copyrighted material to create new, transformative content.
Alsup ruled in June that Anthropic made fair use of the authors’ work to train Claude, but found that the company violated their rights by saving more than 7 million pirated books to a “central library” that would not necessarily be used for that purpose.
A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars.
The pivotal fair-use question is still being debated in other AI copyright cases.
Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup’s decision that using copyrighted work without permission to train AI would be illegal in “many circumstances.”