AI
November 7, 2023
On October 30, 2023, a judge in the Northern District of California issued a ruling in one of the first copyright infringement lawsuits brought by artists against generative AI art platforms. While the judge quickly dismissed some of the Plaintiffs’ claims, the case is still very much alive: he is allowing the Plaintiffs to address some of the problems in their case by filing amended complaints.
So what’s it all about? Three artists are suing Stability AI Ltd. and Stability AI, Inc. (collectively, “Stability”), whose platform, Stable Diffusion, generates photorealistic images from text input. To teach Stable Diffusion how to generate images, Stability’s programmers scrape (i.e., take or steal, depending on how charitable you’re feeling) the Internet for billions of existing copyrighted images — among them, allegedly, images created by the Plaintiffs. End users (i.e., people like you and me) can then use Stability’s platform to create images in the style of the artists on whose work the AI has been trained.
In addition to Stability, the proposed class action suit on behalf of other artists also names as defendants Midjourney, another AI art generator that incorporates Stable Diffusion, and DeviantArt, Inc., an online community for digital artists that Stability scraped to train Stable Diffusion and that also offers DreamUp, a platform built on Stable Diffusion.
The Plaintiffs — Sarah Andersen, Kelly McKernan, and Karla Ortiz — allege, among other things, that Defendants infringed on their copyrights, violated the Digital Millennium Copyright Act, and engaged in unfair competition.
In ruling on Defendants’ motion to dismiss, U.S. District Judge William Orrick quickly dismissed the copyright claims brought by McKernan and Ortiz against Stability because they hadn’t registered copyrights in their artworks — oops.
Andersen, however, had registered copyrights. Nonetheless, Stability argued that her copyright infringement claim should be dismissed because she couldn’t point to the specific works Stability used as training images. The Court rejected that argument, concluding that her showing that some of her registered works were used to train Stable Diffusion was enough at this stage to allege a violation of the Copyright Act.
The judge, however, dismissed Andersen’s direct infringement claims against DeviantArt and Midjourney. With DeviantArt, he found that Plaintiffs hadn’t alleged that DeviantArt played any affirmative role in copying Andersen’s images. For Midjourney, the judge found that Plaintiffs needed to clarify whether the direct infringement claim was based on Midjourney’s use of Stable Diffusion, on Midjourney independently scraping images from the web to train its product, or both. Judge Orrick is allowing them to amend their complaint to do so.
Because Orrick dismissed the direct infringement claims against DeviantArt and Midjourney, he also dismissed the vicarious infringement claims against them. (By way of background, vicarious infringement occurs where a defendant has the “right and ability” to supervise infringing conduct and a financial interest in that conduct.) Again, however, the Court gave Plaintiffs leave to amend: to state claims for direct infringement against DeviantArt and Midjourney, and to allege vicarious infringement against Stability for third parties’ use of Stable Diffusion.
Orrick warned the Plaintiffs (and their lawyers) that he would “not be as generous with leave to amend on the next, expected rounds of motions to dismiss and I will expect a greater level of specificity as to each claim alleged and the conduct of each defendant to support each claim.”
Plaintiffs also alleged that Defendants violated their right of publicity, claiming that Defendants used their names to promote their AI products. However, the Court dismissed these claims because the complaint didn’t actually allege that the Defendants advertised their products using Plaintiffs’ names. Again, he allowed the Plaintiffs leave to amend. (The Plaintiffs originally tried to base a right of publicity claim on the fact that Defendants’ platforms allowed users to produce AI-generated works “in the style of” their artistic identities. An interesting idea, but Plaintiffs abandoned it.)
In addition, DeviantArt moved to dismiss Plaintiffs’ right of publicity claim on the grounds that DeviantArt’s AI platform generates expressive content. Therefore, according to DeviantArt, the Court needed to balance Plaintiffs’ rights of publicity against DeviantArt’s interest in free expression by considering whether the output was transformative. (Under California law, “transformative use” is a defense to a right of publicity claim.) The Court found that this issue couldn’t be decided on a motion to dismiss and would have to wait.
What are the key takeaways here? For starters, it is fair to say that the judge thought that Plaintiffs’ complaint was not a paragon of clarity. It also seems the judge thought Plaintiffs would have a hard time alleging that images created by AI platforms in response to user text input were infringing. However, he indicated that copyright infringement claims based on Stability’s use of images to train Stable Diffusion were more likely to be allowed to proceed.
June 27, 2023
Intellectual property class action lawsuits have, historically, been relatively rare. But here, at the dawn of AI, everything is changing fast, and we already have what appears to be the first attempt at an AI-related class action: Young v. NeoCortext, Inc.
This action is currently pending in the Central District of California against the owners of Reface, a “deep fake” generative AI app that enables users to replace a celebrity’s face in a still photo from a film or TV show with their own face. The app includes a searchable catalog that allows a user to select the star whose face they want to replace. That catalog includes images of Kyland Young — a finalist in season 23 of CBS’ Big Brother — who is seeking to represent a class of California residents including musicians, athletes, celebrities “and other well-known individuals” who have had their “name, voice, signature, photograph, or likeness” displayed in Reface.
Young alleges that Reface’s inclusion of his image violates his rights under California’s right of publicity statute. This law protects individuals against the unauthorized use of their image, name, or voice to advertise or sell a product. His claim hinges on a specific detail: Reface promotes paid subscriptions through a free version that allows users to generate an image with their face in place of a celebrity’s. Images generated by the free version are watermarked with Reface’s logo and the words “made with Reface app.” According to Young, this amounts to an ad for the paid version of the app. Thus, he claims that Reface’s owner is exploiting his image (and the images of other celebrities and demi-celebrities) to encourage users to purchase the paid version, which brings the app within the ambit of California’s right of publicity statute.
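To make the watermarking detail concrete, here is a minimal sketch of how an app might stamp promotional text onto free-tier output. It is purely illustrative, written in Python with the Pillow imaging library; the function name, text placement, and layout are assumptions for demonstration, not NeoCortext’s actual implementation.

```python
# Illustrative sketch only: how a free tier might brand its output.
# The watermark wording and placement here are assumptions.
from PIL import Image, ImageDraw

def add_promo_watermark(img: Image.Image, text: str = "made with Reface app") -> Image.Image:
    """Return a copy of the image with promotional text in the corner."""
    out = img.convert("RGB")
    draw = ImageDraw.Draw(out)
    width, height = out.size
    # Draw the promotional text near the bottom-left corner.
    draw.text((10, height - 20), text, fill=(255, 255, 255))
    return out

# Example: watermark a blank test image.
stamped = add_promo_watermark(Image.new("RGB", (400, 300), "gray"))
stamped.save("stamped.png")
```

The legal significance is that every image a free user generates and shares carries this text, which Young characterizes as advertising for the paid tier.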
Lawyers for NeoCortext, which owns the app, have moved to dismiss the complaint. They argue, among other things, that Plaintiff’s claims are preempted by the Copyright Act and barred by the First Amendment.
On preemption, Defendant argues that since images of Young used on the app are owned by CBS, not Young, any action for the unauthorized use of these images would have to be brought by CBS, not Plaintiff. It argues that CBS’ claims (if any) would sound in copyright infringement, not a violation of the right of publicity. It seems likely that the Defendant will prevail on this argument.
Even if the Defendant doesn’t prevail on this argument and the case survives the motion to dismiss, the copyright issue could create problems at class certification. One requirement for certifying a class action is “commonality”: the potential class members (in this case, other celebrities) must share common questions of law or fact, and in a damages class those common questions must predominate over individual ones. The possibility that some claims might be preempted by copyright law while others are not could lead the judge to conclude that common issues don’t predominate, which would preclude certification of the action as a class action.
Defendant also argues that Plaintiff’s claim should be dismissed because it “violates the expressive rights of Defendant and its users that are guaranteed by the First Amendment.” Here, Defendant claims that modifying celebrity images to convey an idea or message can be an exercise of creative self-expression within the scope of the First Amendment, and thus Reface performs a “transformative use” that brings it outside the ambit of California’s right of publicity statute.
All in all, at least on copyright preemption, Defendant’s arguments seem more convincing.
With that said, this lawsuit points to how AI is making it easier to manipulate celebrities’ images. This will undoubtedly lead to more right of publicity lawsuits.
June 13, 2023
Well, that didn’t take long.
A pair of lawyers and their firm have very publicly and quite thoroughly embarrassed themselves by asking ChatGPT for case citations that turned out to have been made up by the trendy AI chatbot.
There are so many points of stupidity and laziness here: The global frenzy to adopt ChatGPT, the inability or failure of attorneys to understand new technology, one lawyer’s unthinking reliance on the work of a colleague, a law firm practicing in an area it is not equipped to handle … Let’s break it all down.
New York City law firm Levidow, Levidow & Oberman was working on what is, in most ways, an entirely unremarkable lawsuit: Roberto Mata v. Avianca. Their client — Roberto Mata — sued the airline Avianca claiming that, while on a 2019 flight from San Salvador to New York’s JFK airport, an airline employee failed to take sufficient care in operating a metal serving cart that hit Mata in the knee and seriously injured him.
In January 2023, Avianca moved to dismiss the case in the Southern District of New York, asserting that the statute of limitations had expired. In March, Plaintiff’s counsel — Peter LoDuca — replied with an affidavit claiming otherwise. In his affidavit, LoDuca cited decisions from several cases, including Varghese v. China Southern Airlines and Zicherman v. Korean Air Lines, both of which were supposedly decided by the 11th Circuit Court of Appeals.
Avianca’s counsel quickly pointed out there was no evidence that those or other cases cited by Plaintiff’s counsel existed or, if they did exist, stood for the propositions that Plaintiff said they did.
The judge — P. Kevin Castel — was perplexed, and ordered LoDuca to file an affidavit attaching copies of the cases he cited. LoDuca complied — well, sort of. He submitted an affidavit that attached what he claimed were the official court decisions.
Defendant’s counsel again notified the Court that the cases did not exist or did not actually say what Plaintiff’s counsel had represented.
The judge, now rather angry, ordered LoDuca to show up in Court and explain exactly how he came to submit an affidavit — a sworn document — citing and attaching non-existent cases. In response, LoDuca submitted another affidavit saying that he had relied on Steven Schwartz, another attorney in his firm, to research and draft his affidavit. (By way of background, LoDuca and Schwartz have been practicing law for more than 30 years.)
And this is where the story goes from weird to bad. Really bad.
The reason LoDuca was appearing in Court instead of Schwartz is that Schwartz isn’t admitted to practice in federal court; he’s only admitted in state court, where the case started out. To make matters worse, it turns out that even though Levidow, Levidow & Oberman was representing Mr. Mata in federal court, its lawyers didn’t have a legal research subscription that allowed them to search federal cases.
Without this access to federal cases, Schwartz turned to what he thought was a new “super-search engine” (his words) he had heard about: ChatGPT. He typed questions, and the AI responded with what seemed to Schwartz to be genuine case citations, often peppered with friendly bot chat like “hope that helps!” What could possibly go wrong? A good deal. Because the cases ChatGPT provided Schwartz didn’t actually exist.
On June 8, 2023, the judge held a hearing to determine whether LoDuca, Schwartz, and their firm should be sanctioned.
At this hearing, LoDuca admitted he had neither read the cases cited nor made any legitimate effort to determine if they were real. He argued he had no reason not to rely on the citations Schwartz provided. Schwartz, embarrassed, said he had no reason to believe that ChatGPT wasn’t providing accurate information. Both admitted that, in hindsight, they should have been more skeptical. Counsel for Schwartz argued that lawyers are notoriously bad with technology (personally, I object to this characterization). Throughout the hearing, the packed courtroom gasped.
Cringe-inducing, to be sure. But looking deeper, there’s more to fault here than a tech-challenged attorney blindly relying on some “super search engine” to research case citations. The bigger problem is that, even after Avianca’s lawyers pointed out they couldn’t find any evidence that the cases existed or said what Plaintiff’s lawyer said they said, Plaintiff’s attorneys — LoDuca and Schwartz — persisted in trying to establish that the “cases” they relied on were real despite possessing absolutely no evidence for it. Even after Schwartz couldn’t find the cases through a Google search, neither he nor LoDuca checked the publicly available court records to see if the cases were real. Moreover, they seem to have disregarded some pretty clear signs that the “cases” were, at best, problematic. For example, one case begins as a wrongful death case against an airline and, a paragraph or two later, magically transforms into someone suing because he was inconvenienced when a flight was canceled.
Should the duo and their firm be sanctioned? In general, the standard for sanctions is whether those involved acted in bad faith. Everyone here insisted that their conduct did not meet this standard. Rather, they claimed they were simply mistaken in not knowing how ChatGPT worked or that it couldn’t be trusted.
The judge certainly didn’t seem to see things that way. He was appalled that Schwartz and LoDuca didn’t try to verify (or, apparently, even read) the “cases” they cited. In court, the judge read aloud a few lines from one of the fake opinions, pointing out that the text was “legal gibberish.” In addition, while LoDuca, Schwartz, and their firm might not have been trying to lie to the court, it’s hard to believe that they fulfilled their obligation to make “an inquiry reasonable under the circumstances,” as Rule 11 of the Federal Rules of Civil Procedure requires.
The judge reserved a decision on sanctions, so stay tuned.
May 2, 2023
The last few months have seen an explosion in chatter about AI, specifically, freely available AI chatbots and apps like ChatGPT, DALL-E 2, Soundful, and more that can create text, images, and music in response to prompts entered by a user. Internet forums are overflowing with examples of people using these apps to create “a love song that sounds like it was played by the Beatles in 1966” or “a painting of a three-legged horse in the style of Picasso” or “a 2,000-word story about colonizing the moon by Ernest Hemingway.” ChatGPT is already the fastest-growing app of all time and, naturally, people fear these AIs will quickly replace actual humans in the creation of commercial art and entertainment. We’re also starting to see lawsuits alleging that AI-generated art infringes copyrights. Everything is pretty much at the complaint stage, so there’s not too much to report on — yet. With that said, what follows are some of the places where I think we’re going to see legal battles.
Before diving into the legal issues, it’s worth taking a step back and thinking about how AI works. At a high level, AI platforms take in a ton of information and “learn” patterns in that information. Show an AI enough coffee cups, and it “knows” what a coffee cup looks like. “Feed” it bananas, and it can create another banana in any color or pattern a user asks of it. Everything an AI can do begins from something that already exists.
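For readers who want to see what “learning patterns” means in practice, here is a deliberately tiny sketch of a training loop, written in Python with the PyTorch library. It bears no resemblance to Stable Diffusion’s actual architecture: the toy model and the fake pixel data are assumptions chosen purely to illustrate that training means adjusting a model’s weights to reproduce patterns found in existing works.

```python
# A tiny, purely illustrative stand-in for the pattern-learning step at the
# heart of generative image models. Real systems are vastly more complex.
import torch
import torch.nn as nn

# Stand-in for a training set: 64 fake 8x8 grayscale "artworks", flattened.
# In a real pipeline, these would be copies of existing images from the web.
training_images = torch.rand(64, 64)

# A toy autoencoder: it compresses each image and tries to rebuild it,
# internalizing the statistical patterns of the training set along the way.
model = nn.Sequential(
    nn.Linear(64, 16),  # "encoder": squeeze each image into 16 numbers
    nn.ReLU(),
    nn.Linear(16, 64),  # "decoder": rebuild the image from those numbers
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):
    reconstructed = model(training_images)
    loss = loss_fn(reconstructed, training_images)  # how far off is the rebuild?
    optimizer.zero_grad()
    loss.backward()   # compute how to nudge each weight
    optimizer.step()  # nudge the weights toward the training data's patterns

print(f"final reconstruction error: {loss.item():.4f}")
```

The point that matters for the legal issues below is the first step: the training set is a copy of existing works, which is why the act of training itself raises infringement questions.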
Ok. On to the legal issues that will have to be sorted out by creators or, ultimately, in the courts.
- How similar is the output of an AI platform to the material it was trained on? If it’s too close, the output could be infringing. On the other hand, if the output is based only on unprotectable elements, then there’s no infringement. However, the line between protected expression and unprotectable elements is not always clear.
- Do AI platforms need to license the underlying materials used to create a new image or song to avoid claims of copyright infringement? In order for an AI platform to review information, it needs to make a copy of it. If a work being used as the basis for an AI-generated product is copyrighted (or copyrightable), then unless the platform has obtained a license, the act of copying may be infringing.
- If an AI imitates an artist, does the output infringe on the artist’s right of publicity, which, in some states, extends to an artist’s persona?
- What happens if the output from an AI platform includes a trademark? It’s not hard to imagine AI creating works that include trademarks. One doubts trademark owners will be happy about this.
- Sometimes the statements AI platforms produce are just wrong or false. Those statements could be defamatory, but who is legally responsible: the platform, or the person who gave the platform the prompts?
We’ll follow the action here as it unfolds.