May 29, 2024
If you’re into hip hop, you’re probably familiar with the photograph of rapper The Notorious B.I.G., a/k/a Biggie Smalls, looking contemplative behind designer shades with the Twin Towers of the World Trade Center in the distance behind him. What you might not know is that this well-known portrait was the subject of litigation for the past five years, and that the litigation settled just before trial was set to begin earlier this year.
Photographer Chi Modu snapped the picture (the “Photo”) in 1996, originally for the cover of The Source hip hop monthly. However, after the magazine used another image, and Biggie was killed a year later, Modu began licensing the image to various companies, including Biggie’s heirs’ own marketing company. The Photo became famous and, after the destruction of the World Trade Center in 2001, what is now commonly called “iconic.”
There was no beef (do people still say that?) between Modu and Biggie’s heirs until 2018 when, according to an attorney for Chi Modu’s widow (the photographer himself passed away in 2021), Modu tried to negotiate increased licensing fees with Notorious B.I.G. LLC (“BIG”), which owns and controls the intellectual property rights of the late rapper’s estate.
Apparently, Modu and BIG were unable to reach an agreement: BIG brought suit against Modu and a maker of snowboards bearing the image, asserting claims for federal unfair competition and false advertising, trademark infringement, violation of state unfair competition law and violation of the right of publicity.
In a countersuit, Modu asserted that his copyright in the Photo preempted all of BIG’s claims under Section 301 of the Copyright Act, arguing that those claims were nothing more than an attempt to interfere with his exclusive rights to reproduce and distribute the Photo.
In December 2021, BIG sought a preliminary injunction barring Modu from selling merchandise including skateboards, shower curtains and NFTs incorporating the Photo, claiming these uses violated its exclusive control of Biggie’s right of publicity. (BIG had previously settled with the snowboard manufacturer.)
In June 2022, the Court granted the injunction in part, concluding that the sale of skateboards and shower curtains was not preempted by the Copyright Act as they did not involve the sale of the Photo itself but rather items featuring the Photo. In its order, it prohibited Modu’s estate from selling merchandise featuring the Photo or licensing the Photo for such use. However, the Court permitted Modu’s estate to continue selling reproductions of Modu’s photo as “posters, prints and Non-Fungible Tokens (‘NFTs’).” The Court found that the posters, prints and NFTs were “within the subject matter of the Copyright Act [as t]hey relate to the display and distribution of the copyrighted works themselves, without a connection to other merchandise or advertising.”
This decision follows a line of cases that distinguish between the exploitation of a copyright and the sale of products “offered for sale as more than simply a reproduction of the image.” In the former situation, a copyrighted work will take precedence over a right of publicity claim. In the latter, where the products feature something more than just the copyrighted work, a right of publicity claim is likely to prevail.
Not surprisingly, given the risks for both sides, the case settled just prior to the start of the trial. Although the terms of the settlement were not made public, it presumably allowed Modu’s estate to continue selling the Photo, as posters of it remain available for purchase on the photographer’s website.
January 16, 2024
I closed out 2023 by writing about one lawsuit over AI and copyright and we’re starting 2024 the same way. In that last post, I focused on some of the issues I expect to come up this year in lawsuits against generative AI companies, as exemplified in a suit filed by the Authors Guild and some prominent novelists against OpenAI (the company behind ChatGPT). Now, the New York Times Company has joined the fray, filing suit late in December against Microsoft and several OpenAI affiliates. It’s a big milestone: The Times Company is the first major U.S. media organization to sue these tech behemoths for copyright infringement.
As always, at the heart of the matter is how AI works: Companies like OpenAI ingest existing text databases, which are often copyrighted, and write algorithms (called large language models, or LLMs) that detect patterns in the material so that they can then imitate it to create new content in response to user prompts.
The Times Company’s complaint, which was filed in the Southern District of New York on December 27, 2023, alleges that by using New York Times content to train its algorithms, the defendants directly infringed on the New York Times’ copyright. It further alleges that the defendants engaged in contributory copyright infringement and that Microsoft engaged in vicarious copyright infringement. (In short, contributory copyright infringement is when a defendant was aware of infringing activity and induced or contributed to that activity; vicarious copyright infringement is when a defendant could have prevented — but didn’t — a direct infringer from acting, and financially benefits from the infringing activity.) Finally, the complaint alleges that the defendants violated the Digital Millennium Copyright Act by removing copyright management information included in the New York Times’ materials, and accuses the defendants of engaging in unfair competition and trademark dilution.
The defendants, as always, are expected to claim they’re protected under “fair use” because their unlicensed use of copyrighted content to train their algorithms is transformative.
What all this means is that while 2023 was the year that generative AI exploded into the public’s consciousness, 2024 (and beyond) will be when we find out what federal courts think of the underlying processes fueling this latest data revolution.
I’ve read the New York Times’ complaint (so you don’t have to) and here are some takeaways:
- The Times Company tried to negotiate with OpenAI and Microsoft (a major investor in OpenAI) but was unable to reach an agreement that would “ensure [The Times] received fair value for the use of its content.” This likely hurts the defendants’ claims of fair use.
- As in the other lawsuits against OpenAI and similar companies, there’s an input problem and an output problem. The input problem comes from the AI companies ingesting huge amounts of copyrighted data from the web. The output problem comes from the algorithms trained on the data spitting out material that is identical (or nearly identical) to what they ingested. In these situations, I think it’s going to be rough going for the AI companies’ fair use claim. However, they have a better fair use argument where the AI models create content “in the style of” something else.
- The Times Company’s case against Microsoft comes, in part, from the fact that Microsoft is alleged to have “created and operated bespoke computing systems to execute the mass copyright infringement . . .” described in the complaint.
- OpenAI allegedly favored “high-quality content, including content from the Times” in training its LLMs.
- When prompted, ChatGPT can regurgitate large portions of the Times’ journalism nearly verbatim. Here’s an example taken from the complaint showing the output of ChatGPT on the left in response to “minimal prompting,” and the original piece from the New York Times on the right. (The differences are in black.)

- According to the New York Times, this content, easily accessible for free through OpenAI, would normally be available only behind its paywall. The complaint also contains similar examples from Bing Chat (a Microsoft product) that go far beyond what you would get in a normal search using Bing. (In response, OpenAI says that this kind of wholesale reproduction is rare and is prohibited by its terms of service. I presume that OpenAI has since fixed this issue, but that doesn’t absolve OpenAI of liability.)
- Because OpenAI keeps the design and training of its GPT models secret, expect an intense confidentiality order governing discovery into how OpenAI created its LLMs.
- While the New York Times Company can afford to fight this battle, many smaller news organizations lack the resources to do the same. In the complaint, the Times Company warns of the potential harm to society of AI-generated “news,” including a devastating effect on local journalism that, if the past is any indication, will be bad for all of us.
Stay tuned. OpenAI and Microsoft should file their responses, which I expect will be motions to dismiss, in late February or so. When those come in, I’ll see you back here.
December 19, 2023
This year has brought us some of the early rounds of the fights between creators and AI companies, notably Microsoft, Meta, and OpenAI (the company behind ChatGPT). In addition to the Hollywood strikes, we’ve also seen several lawsuits between copyright owners and companies developing AI products. The claims largely focus on the AI companies’ creation of “large language models” or “LLMs.” (By way of background, LLMs are algorithms that take a large amount of information and use it to detect patterns so that they can create their own “original” content in response to user prompts.)
Among these cases is one filed by the Authors Guild and several prominent writers (including Jonathan Franzen and Jodi Picoult) in the Southern District of New York. It alleges OpenAI ingested large databases of copyrighted materials, including the plaintiffs’ works, to train its algorithms. In early December, the plaintiffs amended their complaint to add Microsoft as a defendant, alleging that Microsoft knew about and assisted OpenAI’s infringement of the plaintiffs’ copyrights.
Because it is the end of the year, here are five “things to look for in 2024” in this case (and others like it):
- What will defendants argue on fair use, and how will the Supreme Court’s 2023 decision in Warhol v. Goldsmith impact this argument? (In that case, the Court ruled that Andy Warhol’s manipulation of a photograph by Lynn Goldsmith was not transformative enough to qualify as fair use.)
- Does the fact that the output of platforms like ChatGPT isn’t copyrightable have any impact on the fair use analysis? The whole idea behind fair use is to encourage subsequent creators to build on the work of earlier creators, but what happens to this analysis when the later “creator” is merely a computer doing what it was programmed to do?
- Will OpenAI’s recent deal with Axel Springer (publisher of Politico and Business Insider), which allows OpenAI to summarize its news articles and to use its content as training data for OpenAI’s large language models, affect OpenAI’s fair use argument?
- What impact, if any, will this and other similar cases have on the business model for AI? Big companies and venture capital firms have invested heavily in AI, but if courts rule they must pay authors and other creators for their copyrighted works, that dramatically changes the profitability of this model. Naturally, tech companies are putting forth numerous arguments against payment, including how little each individual creator would get considering how large the total pool of creators is, how it would curb innovation, etc. (One I find compelling is the idea that training a machine on copyrighted text is no different from a human reading a bunch of books and then using the knowledge and sense of style gained to go out and write one of their own.)
- Is Microsoft, which sells (copyrighted) software, ok with a competitor training its platform on copyrighted materials? I’m guessing that’s probably not ok.
These are all big questions with a lot at stake. For good and for ill, we live in exciting times, and in the arena of copyright and IP law I guarantee that 2024 will be an exciting year. See you then!
November 7, 2023
On October 30, 2023, a judge in the Northern District of California ruled in one of the first lawsuits between artists and generative AI art platforms for copyright infringement. While the judge quickly dismissed some of the Plaintiffs’ claims, the case is still very much alive as he is allowing them to address some of the problems in their case and file amended complaints.
So what’s it all about? Three artists are suing Stability AI Ltd. and Stability AI, Inc. (collectively, “Stability”), whose platform, Stable Diffusion, generates photorealistic images from text input. To teach Stable Diffusion how to generate images, Stability’s programmers scrape (i.e., take or steal, depending on how charitable you’re feeling) the Internet for billions of existing copyrighted images — among them, allegedly, images created by the Plaintiffs. End users (i.e., people like you and me) can then use Stability’s platform to create images in the style of the artists on whose work the AI has been trained.
In addition to Stability, the proposed class action suit on behalf of other artists also names as defendants Midjourney, another art generation AI that incorporates Stable Diffusion, and DeviantArt, Inc., an online community for digital artists, which Stability scraped to train Stable Diffusion, and which also offers a platform called DreamUp that is built on Stable Diffusion.
The Plaintiffs — Sarah Andersen, Kelly McKernan, and Karla Ortiz — allege, among other things, that Defendants infringed on their copyrights, violated the Digital Millennium Copyright Act, and engaged in unfair competition.
In ruling on Defendants’ motion to dismiss, U.S. District Judge William Orrick quickly dismissed the copyright claims brought by McKernan and Ortiz against Stability because they hadn’t registered copyrights in their artworks — oops.
Andersen, however, had registered copyrights. Nonetheless, Stability argued her claim of copyright infringement should be dismissed because she couldn’t point to specific works that Stability used as training images. The Court rejected that argument, concluding that her showing that some of her registered works were used to train Stable Diffusion was enough at this stage to allege a violation of the Copyright Act.
The judge, however, dismissed Andersen’s direct infringement claim against DeviantArt and Midjourney. With DeviantArt, he found that Plaintiffs hadn’t alleged that DeviantArt had any affirmative role in copying Andersen’s images. For Midjourney, the judge found that Plaintiffs needed to clarify whether the direct infringement claim was based on Midjourney’s use of Stable Diffusion and/or whether Midjourney independently scraped images from the web and used them to train its product. Judge Orrick is allowing them to amend their complaint to do so.
Because Orrick dismissed the direct infringement claims against DeviantArt and Midjourney, he also dismissed the claims for vicarious infringement against them. (By way of background, vicarious infringement is where a defendant has the “right and ability” to supervise infringing conduct and has a financial interest in that conduct.) Again, however, the Court allowed Plaintiffs to amend their complaint to state claims for direct infringement against DeviantArt and Midjourney, and also to amend their complaint to allege vicarious infringement against Stability for the use of Stable Diffusion by third parties.
Orrick warned the Plaintiffs (and their lawyers) that he would “not be as generous with leave to amend on the next, expected rounds of motions to dismiss and I will expect a greater level of specificity as to each claim alleged and the conduct of each defendant to support each claim.”
Plaintiffs also alleged that Defendants violated their right of publicity, claiming that Defendants used their names to promote their AI products. However, the Court dismissed these claims because the complaint didn’t actually allege that the Defendants advertised their products using Plaintiffs’ names. Again, he allowed the Plaintiffs leave to amend. (The Plaintiffs originally tried to base a right of publicity claim on the fact that Defendants’ platforms allowed users to produce AI-generated works “in the style of” their artistic identities. An interesting idea, but Plaintiffs abandoned it.)
In addition, DeviantArt moved to dismiss Plaintiffs’ right of publicity claim on grounds that DeviantArt’s AI platform generated expressive content. Therefore, according to DeviantArt, the Court needed to balance the Plaintiffs’ rights of publicity against DeviantArt’s interest in free expression by considering whether the output was transformative. (Under California law, “transformative use” is a defense to a right of publicity claim.) The Court found that this was an issue that couldn’t be decided on a motion to dismiss and would have to wait.
What are the key takeaways here? For starters, it is fair to say that the judge thought that Plaintiffs’ complaint was not a paragon of clarity. It also seems like the judge thought that Plaintiffs would have a hard time alleging that images created by AI platforms in response to user text input were infringing. However, he seemed more inclined to allow copyright infringement claims based on Stability’s use of images to train Stable Diffusion to proceed.
October 24, 2023
It’s long been known that one of the pitfalls of being in the public eye is you don’t control your own image. Paparazzi can take photos of you that can be published anywhere, with the photographer getting paid, the media outlet generating revenue from ad sales and subscriptions, and the subject themselves neither seeing a dime nor having any control over how they look. That’s because traditionally, photographers have full copyright when they capture an image of a celebrity, particularly in public. Now, a bunch of new lawsuits are taking ownership even further out of celebrity hands, with photographers and their agencies suing stars who dare to post paparazzi photos of themselves on their social media accounts without licensing them first.
There are plenty of celebs under fire at the moment, including LeBron James, Bella Hadid, and Dua Lipa. A few examples: Melrose Place and Real Housewives star Lisa Rinna posted on Instagram photos of herself that were taken by a paparazzo represented by the Backgrid agency; Backgrid is suing Rinna for copyright infringement. Rinna accuses Backgrid of “weaponizing” copyright law, while Backgrid retorts that once one of its paparazzi photos is posted without permission, magazines like People will be less likely to buy it because fans will have already seen it. Another case: model Gigi Hadid, who is being sued for copyright infringement by agency Xclusive-Lee over posting one of its images to Instagram. Hadid’s legal team asserts her post constitutes fair use because Hadid “creatively directed” the photo by choosing her outfit, posing and smiling, thus contributing “many of the elements that the copyright law seeks to protect.” Hadid also cropped the image when she posted it, which she says refocuses the photo on her pose and smile, rather than the photographer’s composition.
Model Emily Ratajkowski recently settled a suit brought by a photographer over a photo he took of her walking outside of a flower shop, her face completely obscured by a bouquet she was carrying. Ratajkowski posted the photo on an Instagram story with the text “MOOD FOREVER,” intending to convey how she feels like hiding from paparazzi. While the case settled, the judge indicated her text served as a commentary on the celebrity/paparazzi dynamic that may have amounted to transformative use, protecting her from a copyright claim.
This wasn’t Ratajkowski’s first battle with copyright law. She wrote a long essay on how it feels to be unable to control her image after a photographer took hundreds of nude photos of her early in her career, supposedly for a magazine editorial, and later published them as several books and showed them in a gallery exhibit — all without asking her permission or paying her. Ratajkowski also had photos she posted to her Instagram account turned into “paintings” by renowned appropriation artist Richard Prince and sold for $80,000 each. She writes, “I have learned that my image, my reflection, is not my own.”
It’s easy to sympathize with the celebrities’ position. While mere mortals often scorn celebrity complaints about their lack of privacy and the invasiveness of paparazzi — “hey, it comes with the territory!” — it seems like adding insult to injury to allow paparazzi to take photos of celebrities against their will and then demand the celebs pay to use the photos themselves.
Also, it’s not hard to see why Ratajkowski or others might feel victimized by someone in a position of relative power profiting from images without sharing those profits. (For what it’s worth, a number of states do have laws against revenge porn, but that’s not what we’re talking about here.)
In that vein, in the wake of #metoo, the celebrities’ position is also appealing because it’s not hard to see it as trying to subvert the male gaze by allowing the (mostly) female celebrity subjects to at least profit from or assert some element of control over the pictures they appear in.
However, from an intellectual property law point of view, this is not how it works.
For starters, copyright law is really clear. The copyright for photos rests with the person who took the photo. Posing for a picture is not subject to copyright protection, and copyright law doesn’t give the subject of a photo rights to the copyright. This is because a copyright comes into existence when it is “fixed,” meaning recorded on a piece of film or a memory card — and those are owned by the photographer, not the subject.
Moreover, copyrights trump any publicity rights that celebrities have. Article 1, section 8, clause 8 of the U.S. Constitution says that Congress has the power to enact laws to “promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” This is because we as a society benefit from encouraging creators to create by allowing them to profit from their work. Celebrities and their lawyers would say that they too should be able to profit because they provided a service by appearing in the photograph and/or by being famous, and thus photoworthy. While the law isn’t supposed to get into judging the relative value of different artistic contributions, let’s be real: there is a difference between the creation of even a bad novel or artwork and smiling for a second into a camera lens on a step-and-repeat.
What’s more, in contrast to copyright law, the right of publicity is — at least for now — a product of state law, and federal law preempts conflicting state law. This means that if there’s a conflict between the rights of a copyright holder and the rights of a celebrity to control his or her image under the applicable right of publicity, the copyright holder’s interests come first.
This isn’t to say that this is the only policy balance that could be struck between the rights of the copyright holder and the rights of the subject of a photo, but it’s the one, for better or worse, that we currently have. So yes, the law is clear: if you’re a celeb, not only do you not profit from photos taken of you in public, if you want to use them yourself, you have to pay.
Also, look at it this way: none of us own everything about ourselves anymore (think about your personal data), nor do we profit from it. There’s no reason for the famous and the sort-of-famous to be different from everyone else.