The New York Times reported that it has filed a lawsuit against OpenAI and Microsoft, accusing them of copyright infringement: the unauthorized use of the publication's materials to train artificial intelligence models.
The plaintiff noted that it is the first major American news outlet to sue the developers of ChatGPT and other popular AI platforms for copyright infringement involving its materials.
The lawsuit, filed in federal district court in Manhattan, claims that millions of New York Times articles were used to train chatbots that now compete with news organizations and present themselves as a source of reliable information.
The lawsuit does not specify the exact amount of damages. However, it does state that the defendants should be held liable for "billions of dollars in statutory and actual damages" related to the "unlawful copying and use of works of unique value."
The plaintiff also demands the destruction of chatbot models and training datasets that incorporate its copyrighted materials. OpenAI and Microsoft have not commented.
The New York Times noted that in April it tried to settle the dispute out of court: it contacted Microsoft and OpenAI, expressed concern about the use of its intellectual property, and offered commercial and technological remedies, including restrictions on generative AI, but the negotiations yielded no results. According to the lawsuit, users who ask a chatbot about current events may receive an answer based in part on New York Times materials, find it sufficient, and never visit the newspaper's website, costing the publication web traffic that converts into advertising and subscription revenue.
The lawsuit cites several examples in which chatbots provided users with near-verbatim excerpts from New York Times articles available only through a paid subscription.
At the same time, the publication does not reject new technologies: it recently hired an editorial director for AI initiatives, who is tasked with developing protocols for the use of AI in the newsroom and exploring ways to integrate AI into the work of journalists.
The lawsuit also points to the reputational damage caused to The New York Times by chatbot "hallucinations", instances in which an AI system produces inaccurate information. In one case, Microsoft's Bing Chat attributed false information to the newspaper, offering a list of "15 heart-healthy foods", 12 of which were not mentioned in the newspaper's article.