Anthropic admits Claude AI chatbot generated false citation in legal battle with music publishers

A lawyer representing Anthropic has acknowledged the company’s Claude AI chatbot generated a false citation that formed part of testimony in its ongoing legal battle with music publishers.

In a court filing in Northern California, the lawyer admitted the citation included “an inaccurate title and inaccurate authors” during the testimony of an expert witness called by Anthropic to support its case.

The admission came after the publishers, including Universal Music Group, Concord, and ABKCO, accused Anthropic's expert witness, Olivia Chen, of citing a non-existent academic article in her testimony.

U.S. Magistrate Judge Susan van Keulen ordered a response from Anthropic by May 15 after the publishers raised the issue. The citation in question was part of a filing submitted on April 30.

In that filing, Chen argued for specific parameters regarding how frequently Claude reproduces copyrighted lyrics, which the publishers claim it did without permission.

Matt Oppenheim, representing the music publishers, confirmed that the cited article did not exist and suggested Chen likely relied on Claude to develop her argument, though he added that he did not believe she intended to deceive the court.

Anthropic’s attorney Sy Damle called the citation a mis-citation rather than a fabrication, and said it was “disappointing” that the plaintiffs had not raised the issue sooner.

The incident is the latest flashpoint in the ongoing battle between copyright owners and tech companies over the use of copyrighted material to train AI systems, and part of a wider trend of legal challenges involving AI-generated content.