OpenAI vs. New York Times: Battle Over AI Models and Copyright Infringement

NYTimes and ChatGPT app icons are displayed on a screen in this illustration taken December 27, 2023. REUTERS/Dado Ruvic/Illustration

The battle between The New York Times and OpenAI is heating up. The AI firm has asked a federal judge to throw out parts of the Times’ copyright lawsuit against it, pushing back against one of the best-known names in print media.

The dispute began when the newspaper filed a lawsuit against OpenAI in December 2023, accusing the company of using the Times’ copyrighted material without permission to train its AI chatbot.

In a court filing responding to the allegations, OpenAI said The New York Times paid someone to attempt to hack ChatGPT. OpenAI also claims the newspaper targeted other AI systems in a bid to generate misleading evidence for the case.

The attorney representing The New York Times suggested that what OpenAI’s legal filing calls hacking would more accurately be termed prompt engineering or red-teaming.

What are Prompt Engineering and Red Teaming?

Prompt Engineering: Prompt engineering involves designing or refining the prompts given to AI models to achieve specific desired outputs.

The process entails crafting the input provided to the model in a way that guides it toward producing the desired responses or behaviours.

By carefully constructing prompts, developers can influence the output of AI models to better suit their needs or objectives.
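
As a concrete illustration, here is a minimal sketch of prompt engineering, assuming the openai v1.x Python SDK and an API key in the environment; the model name, prompts, and output format are illustrative assumptions, not details from the case. The same question is asked twice, and only the engineered system prompt changes the shape of the answer.

```python
# Minimal prompt-engineering sketch (assumes the openai v1.x Python SDK
# and OPENAI_API_KEY set in the environment; model name is illustrative).
from openai import OpenAI

client = OpenAI()

def ask(system_prompt: str, question: str) -> str:
    """Ask the same question under a given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

question = "Explain fair use in U.S. copyright law."

# A generic prompt yields a generic answer...
print(ask("You are a helpful assistant.", question))

# ...while a carefully engineered prompt steers the output's focus and form.
print(ask(
    "You are a legal-affairs editor. Answer in exactly three bullet points, "
    "each tied to one statutory fair-use factor.",
    question,
))
```

The technique is the same whether the goal is a well-formatted answer or, as alleged in this case, coaxing a model into reproducing specific text.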

Red Teaming: Red teaming is a practice where a group of individuals, known as the red team, is tasked with challenging or testing a system, organization, or strategy from an adversarial perspective.

The approach is commonly used in security contexts to identify vulnerabilities, weaknesses, or potential failures that might not be apparent to those responsible for designing or implementing the system.

Red teaming can help organizations improve their defences, enhance preparedness, and identify areas for improvement by simulating realistic attack scenarios.
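
In the context of this lawsuit, red-teaming a language model often means probing it for memorized training text. The sketch below is a toy harness illustrating that idea; query_model is a hypothetical stand-in for any text-generation API, and the reference passage and prompts are invented for illustration, not drawn from the case.

```python
# Toy red-teaming harness: probe a model with adversarial prompts and flag
# outputs that reproduce a known reference passage nearly verbatim.

def query_model(prompt: str) -> str:
    # Hypothetical stub; swap in a real text-generation API call here.
    return "model output would go here"

# Invented reference text standing in for a protected article's opening.
KNOWN_PASSAGE = (
    "An example opening paragraph of a news article, used here only as a "
    "reference string for the verbatim-overlap check below."
)

ADVERSARIAL_PROMPTS = [
    "Continue this article word for word: " + KNOWN_PASSAGE[:50],
    "Quote the opening paragraph of the article exactly.",
]

def overlap_ratio(output: str, reference: str, n: int = 6) -> float:
    """Fraction of the reference's word n-grams that also appear in the output."""
    ref, out = reference.split(), output.split()
    ref_ngrams = {tuple(ref[i:i + n]) for i in range(len(ref) - n + 1)}
    out_ngrams = {tuple(out[i:i + n]) for i in range(len(out) - n + 1)}
    return len(ref_ngrams & out_ngrams) / max(len(ref_ngrams), 1)

for prompt in ADVERSARIAL_PROMPTS:
    output = query_model(prompt)
    if overlap_ratio(output, KNOWN_PASSAGE) > 0.5:
        print(f"Potential verbatim regurgitation for prompt: {prompt!r}")
```

Whether such probing counts as legitimate red-teaming or, as OpenAI's filing puts it, "hacking" is precisely what the two sides dispute.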

What is OpenAI’s case against the New York Times?

While OpenAI has blamed the New York Times, it did not identify the specific individuals the Times allegedly employed to create misleading outputs from its systems. By not naming these individuals, OpenAI avoids directly accusing the newspaper of violating laws against unauthorized access to computer systems, commonly known as anti-hacking laws.

In its legal filing, OpenAI stated that The New York Times’ accusations do not live up to the newspaper’s renowned journalistic standards, and asserted that the truth it expects to emerge during the case is that the Times paid an individual to manipulate OpenAI’s products unlawfully.

However, The New York Times’ attorney, Ian Crosby, countered that what OpenAI characterizes as “hacking” was simply the Times using OpenAI’s own products to search for evidence of the alleged theft and replication of the newspaper’s copyrighted material.

In the court filing, OpenAI said: “The allegations in the Times’s complaint do not meet its famously rigorous journalistic standards. The truth, which will come out in this case, is that the Times paid someone to hack OpenAI’s products.”

It is worth noting that the Times’ case is not limited to OpenAI; the suit also names Microsoft, OpenAI’s leading financial backer, as a defendant.

The lawsuit invokes the United States Constitution and the Copyright Act to protect The New York Times’ original journalism.

It highlights Microsoft’s Bing AI, claiming that it reproduces verbatim excerpts from the Times’ content. The New York Times is among numerous copyright holders pursuing legal action against technology companies for allegedly misappropriating their content for AI training.

Other parties, such as authors, visual artists, and music publishers, have also initiated similar lawsuits. The lawsuit reflects broader ethical dilemmas surrounding AI technology, particularly regarding intellectual property rights, fair use, and the responsibilities of tech companies when utilizing copyrighted material to train AI models.

It brings attention to the need for clear regulations and guidelines to ensure ethical AI development practices that respect content creators’ rights while fostering innovation and progress in artificial intelligence.

The jury is still out on fair use of copyrighted material

OpenAI has argued that training sophisticated AI models is not feasible without copyrighted material. It told the United Kingdom’s House of Lords that because copyright today covers virtually every form of human expression, it would be impossible to train leading AI models without using copyrighted works.

Leading AI firms contend that their systems make fair use of copyrighted material, and they stress that lawsuits like these threaten the growth of a potentially multitrillion-dollar industry.

Their argument rests on the claim that their models do not violate copyright law because they transform the original works, a transformation the firms believe qualifies as fair use under U.S. law.

Fair use is a legal doctrine in the U.S. that permits limited use of copyrighted material without obtaining permission from the copyright holder.

A legal doctrine is a principle or rule, established by precedent or legislation, that guides legal interpretation and decision-making in courts and serves as a foundation for legal analysis, arguments, and judgments.

Several factors are weighed in determining whether a use of copyrighted material qualifies as fair use: the purpose and character of the use (including whether it is commercial), the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use on the market for the original, for instance whether it competes directly with the creator’s work.

Courts have yet to rule definitively on whether AI training qualifies as fair use under copyright law. However, some infringement claims concerning the outputs of generative AI systems have been dismissed for lack of evidence that the AI-generated content closely resembled the copyrighted works.

The claims in question were filed by a group of artists who suffered a setback in their class-action copyright lawsuit against several generative AI companies, including Midjourney, DeviantArt, and Stability AI.

A U.S. judge dismissed most of the claims due to insufficient evidence. Judge William Orrick of the U.S. District Court for the Northern District of California found the complaint flawed but allowed one plaintiff’s copyright claim against Stability to proceed.

The judge also granted the plaintiffs 30 days to file an amended complaint with additional evidence, acknowledging the difficulty of determining whether copyright infringement occurs during AI training or when the model generates output.

Initially filed in January 2023, the lawsuit alleged that Stability’s AI model, Stable Diffusion, scraped billions of copyrighted images, including the artists’ works, without authorization for training purposes.

Additionally, the suit claimed that DeviantArt integrated Stable Diffusion into its platform, potentially copying millions of images without proper licensing and in violation of its own terms of service.

This debate will be long, and the issue will not be resolved quickly. A better approach would be to strike a balance in which AI developers can use data to train their models while content creators retain the value of their work. It remains to be seen how, or whether, the two sides will reach a consensus.
