Anthropic's Claude AI training gets US judge approval to use authored books

'Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use' says US judge William Alsup
An undated image. — Dreamstime

In what seems to be a blow to book authors and writers, a federal judge has allowed Anthropic, the renowned AI company, to train its Claude AI chatbot on pirated books without requiring authors' permission.

US District Court Judge William Alsup ruled on Monday that Anthropic's use of millions of pirated books fell under the “fair use” doctrine of the Copyright Act.

“Use of the books at issue to train Claude and its precursors was exceedingly transformative and was a fair use,” Alsup wrote in his ruling.

The ruling is expected to set a legal precedent in the United States and may aid other AI firms facing similar lawsuits.

It should be noted that AI firms often defend their use of copyrighted works against infringement claims by arguing that training AI on huge data sets transforms the original content and fosters innovation.

Many authors, musicians, and artists have sued AI companies for using their works without permission or compensation.

Court documents disclosed that not only did Anthropic download millions of pirated books, but also bought and scanned copyrighted texts to develop a comprehensive digital library.

The judge did rule, however, that Anthropic had no right to use pirated copies to build its central library, and he ordered a trial to determine damages on that aspect of the copyright lawsuit filed by the authors.