ChatGPT, generative AI models face legal minefield

Detractors say AI is violating IP laws by hoovering up information without getting the rights to it, and things might only get worse from here

As excitement over Generative Artificial Intelligence (AI), which uses unsupervised or semi-supervised learning to process large amounts of data and generate original outputs, reached fever pitch with the launch of OpenAI’s ChatGPT, a conversational AI chatbot, a volley of lawsuits was filed against OpenAI and Microsoft, among others, challenging their core proposition of creating new text, artwork, photographs, computer code, and other creative works.

At the heart of the issue is Intellectual Property (IP). Put simply, Generative AI needs zettabytes of data to create ‘original’ work, and the question is who owns the data it is fed. Detractors say AI is violating intellectual property laws by hoovering up information without getting the rights to it, and that things will only get worse from here.

EU leads with its AI Act

Governments have also woken up to the possibility of copyright infringement by Generative AI models. The EU Artificial Intelligence (AI) Act is the first law on AI from a major regulator anywhere in the world. Lawmakers in Europe are working on rules for image-generating AI models such as DALL-E, Stable Diffusion, and Midjourney. Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard, determining to what extent AI has a positive rather than negative effect on your life wherever you may be.

Microsoft, GitHub, OpenAI face class-action suits

Meanwhile, Microsoft, GitHub, and OpenAI are currently being sued in a class-action lawsuit that accuses them of violating copyright law by allowing Copilot, a code-generating AI system trained on billions of lines of public code, to regurgitate licensed code snippets without providing credit. Two companies behind popular AI art tools, Midjourney and Stability AI, are in the crosshairs of a legal case alleging they infringed the rights of millions of artists by training their tools on web-scraped images.

Artists up in arms against AI generated art

A group of artists (Sarah Andersen, Kelly McKernan, and Karla Ortiz) has filed a class-action lawsuit against Midjourney and Stability AI, the companies behind the AI art tools Midjourney and Stable Diffusion, and against DeviantArt, which recently launched its own artificial intelligence art generator, DreamUp. The suit alleges that these companies “violated the rights of millions of artists” by using billions of images scraped from the internet to train their AI art tools without the “consent of artists and without compensating any of those artists.” These companies “benefit commercially and profit richly from the use of copyrighted images,” the suit alleges. “The harm to artists is not hypothetical,” it adds, noting that AI-generated works are “already sold on the internet, siphoning commissions from the artists themselves.”

Getty accuses Stability AI of copyright violation

Perhaps the most interesting of these legal salvoes is the one filed by stock image supplier Getty Images, which has taken Stability AI to court for allegedly using millions of images from its site without permission to train Stable Diffusion, an art-generating AI.

The stock photography company is accusing Stability AI of “brazen infringement of Getty Images’ intellectual property on a staggering scale.” It claims that Stability AI copied more than 12 million images from its database “without permission … or compensation … as part of its efforts to build a competing business,” and that the start-up has infringed on both the company’s copyright and trademark protections.

The lawsuit is the latest volley in the ongoing legal struggle between the creators of AI art generators and rights-holders. AI art tools require illustrations, artwork, and photographs as training data, and these are often scraped from the web without the creators’ consent.

Microsoft faces legal battle

An IP lawsuit charges that Microsoft, its code repository GitHub, and OpenAI, the maker of ChatGPT, have illegally used code created by others to build and train Copilot, a service that uses AI to write software. (Microsoft has invested $1 billion in OpenAI.) The future of AI may well hinge on the suit’s outcome.

Matthew Butterick, a programmer, writer, and lawyer, and the Joseph Saveri law firm have filed a class-action suit against Microsoft, GitHub, and OpenAI, claiming they “profit from the work of open-source programmers by violating the conditions of their open-source licenses.” Shorn of legalese, the complaint says Microsoft and the others are stealing the intellectual property of the developers who wrote the code used to train Copilot. (Butterick says that sometimes, when someone asks Copilot to write software, the resulting code is an exact copy of open-source code on which Copilot was trained.)

If the plaintiffs win this legal battle, it could put Salesforce in the crosshairs of a similar action. CodeT5 is an open-source programming-language model built by researchers at Salesforce and based on Google’s T5 (Text-to-Text Transfer Transformer) framework. To train CodeT5, the team sourced more than 8.35 million instances of code, including user comments, from publicly accessible GitHub repositories.
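For readers curious what a model trained on that GitHub-sourced corpus looks like in practice, the minimal sketch below loads a publicly released CodeT5 checkpoint and asks it to fill in a masked span of code. It assumes the Salesforce/codet5-base checkpoint published on Hugging Face and the transformers library; it is an illustration only, not part of Salesforce’s own training pipeline or anything cited in the lawsuits.

```python
# Minimal sketch: querying a released CodeT5 checkpoint for code infilling.
# Assumes the "Salesforce/codet5-base" checkpoint on Hugging Face and the
# `transformers` library are available.
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Ask the model to fill in the masked span (<extra_id_0>) in a Python snippet.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids

generated_ids = model.generate(input_ids, max_length=10)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
# Typically fills the blank with a plausible completion such as "{user.username}",
# reflecting patterns the model learned from public GitHub code.
```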

Google’s Bard steps into uncharted waters

Meanwhile, Google is coming out with its own Generative AI chatbot, Bard, built on LaMDA, a powerful AI model that Google first announced in May 2021 and that relies on technology similar to ChatGPT’s. Google says this rollout will allow it to offer the chatbot to more users and gather feedback to help address challenges around the quality and accuracy of the chatbot’s responses.

Google and OpenAI are both building their bots on text-generation software that, while eloquent, is prone to fabrication and can replicate unsavoury styles of speech picked up online. The need to mitigate those flaws, and the fact that this type of software cannot easily be updated with new information, poses a challenge for hopes of building powerful and lucrative new products on top of the technology, including the suggestion that chatbots could reinvent web search. This is where things could go wrong, leaving Google, too, vulnerable to legal action.
