
Diddy and Netflix: Truth? Consequences?

By Emily Poler

Why kick back and watch a documentary as a normal person would — to pass some time, maybe learn a few things — when you can analyze its legal issues as an IP litigator? Welcome to my world, where I recently watched Sean Combs: The Reckoning on Netflix, a four-part series that angered its subject, the rapper/producer/felon aka Diddy, enough to have his attorneys send a cease and desist letter to the streaming platform. Having seen the show, I have thoughts on the dispute. 

The putative documentary, which premiered at the beginning of December, tells the story — or at least, a story — of how Diddy became one of the most powerful and successful men in hip-hop, his connection to the murders of rappers Tupac Shakur and Christopher Wallace (aka The Notorious B.I.G., aka Biggie Smalls), accusations of sexual assault and abuse, his trial for racketeering and sex trafficking, and his eventual imprisonment (Diddy is currently serving 50 months in jail). 

In the letter, Diddy’s lawyers air a number of grievances about the series, and we’ll get to those in a minute. First, though, let’s talk about what Diddy demands: that “Netflix cease and desist from the publication of the so-called ‘documentary’ titled Sean Combs: The Reckoning.” My thought: that’s never, ever going to happen. As Diddy’s lawyers presumably knew when they sent the letter, prior restraint of the media — government action to prevent speech before it actually happens — is really, really, really disfavored. That means that there was a close-to-zero percent chance that a court would direct Netflix to not air the documentary; as a result, there was little reason for Netflix to pull the plug.

As indicated above, Diddy claims the series is less a documentary and more of a “hit piece,” crafted by his longtime nemesis, executive producer Curtis “50 Cent” Jackson, as the latest salvo in his “irrational fixation on destroying Mr. Combs’s reputation.” What’s more, Diddy accuses Netflix CEO Ted Sarandos of abetting 50 in retaliation for Diddy’s refusal a few years ago to participate in a Netflix series about his life because Sarandos refused to give Diddy creative control over the project. Diddy’s lawyers call the new doc “corporate retribution.” 

In a more specific accusation, and one that piqued my interest as an IP attorney, Diddy’s lawyers also claim that some of the footage of Diddy used in the series belongs to Diddy and was obtained in violation of contracts and copyright protections. 

Netflix, naturally, has denied all of Diddy’s allegations. Whether or not that denial is honest, given the streamer’s past failures to adequately vet projects prior to release (see here and here), I have no reason to think Netflix verified the truth of the series’ contents, or its legal rights to the materials included in it, before release. What does that mean? If I had to guess, it means that if (when?) Diddy sues Netflix, the complaint might survive a motion to dismiss. This seems particularly likely given 50 Cent’s involvement and his long, antagonistic history with Diddy. 

So let’s talk about the accusations that some of the footage in the series was illegally obtained. For those of you who haven’t seen the documentary or any discussion of it on social media, the doc opens with video of Diddy on the phone with his lawyer and contains other footage of Diddy just prior to his criminal indictment. (The scene with Diddy literally taking a jacket off a fan’s back is so cringe.) It appears that this footage was filmed by a third party at Diddy’s behest, but, according to a Diddy spokesperson, “was created for an entirely different purpose, under an arrangement that was never completed, and no rights were ever transferred to Netflix.” The spokesperson implies there was a payment dispute between Diddy and whoever filmed him, and the footage ended up in the documentary producers’ hands without Diddy’s permission. Netflix responded by claiming the footage was legally obtained, and presumably, in any lawsuit, it will also claim that its use of this footage was fair use and, therefore, non-infringing. 

Even if there was some form of confidentiality agreement between Diddy and the third party who shot the footage, can Diddy use that agreement to stop Netflix from streaming the documentary? Short answer: nope. Longer answer: If he could wield the confidentiality agreement as such, he would have already sought a temporary restraining order.

Taking a broader view of the dispute, are the allegations against Diddy in the documentary defamatory? Among other things, the documentary claims that Diddy had some responsibility for The Notorious B.I.G.’s death in a Los Angeles drive-by shooting, alleging that Diddy had brought Biggie to LA despite warnings he would be in danger, and kept him there instead of letting him leave for a trip to London. If those allegations are false, then Diddy potentially has a claim for defamation, but if they’re true, then the documentary’s allegations aren’t defamatory. As a result, assuming Diddy does sue Netflix and/or 50 Cent, it will be really interesting to see which statements he claims are false. 

And if Diddy does sue, this may be a situation where even if he wins, he loses, because he will have to produce evidence that the doc’s statements are false, while Netflix and/or the producers will get to counter with their own evidence. Moreover, because the purpose of defamation claims is to protect a person’s reputation and Diddy’s reputation is, ummm, pretty much in the toilet already, he could end up walking away with exactly zero dollars even if he was able to win on a defamation claim.

For now, the only shots fired are the letter and Netflix’s response, but if the matter does march on to a lawsuit, I’ll be back with updates. 

“Traditional Elements of Authorship”: A Tad Too Creative?

By Emily Poler

I previously wrote about the US Copyright Office’s policy on works created with AI and the decision in Thaler v. Perlmutter, which denied copyright registration for a work listing an AI platform as its sole author. In that post, I predicted we’d soon see litigation over which elements of work created with AI can be copyrighted. 

While I’m pretty sure those suits will start to pop up, right now I want to talk about another case where the Copyright Office decided that a work created with AI was ineligible for copyright protection. This case, Allen v. Perlmutter, also raises some of the issues I noted in another recent post where I suggested it might be time to reconsider some of the policies underlying US copyright law in light of how much has changed since the US Constitution and the first copyright law were created in the 18th Century. 

The story: Jason Allen created an image titled Théâtre D’opéra Spatial using Midjourney AI and entered it in the 2022 Colorado State Fair’s annual fine art competition, where it won a prize. The US Copyright Office, however, was less impressed and denied his application for copyright protection, finding that it was created by Midjourney. Allen then filed suit challenging that decision. (Before diving in, two notes. One, H/T to Paul LiCalsi for pointing this case out to me. Two, in case you’re wondering, Shira Perlmutter, the defendant in both Thaler and Allen, was, until recently, the Director of the US Copyright Office). 

Some background. To be eligible for a copyright, a work must be “original” and have an “author.” Of course, the law has long recognized that humans create copyrightable materials using machines all the time. In 1884’s Burrow-Giles Lithographic Co. v. Sarony, the Supreme Court found Napoleon Sarony’s photograph of Oscar Wilde was eligible for copyright protection, rejecting the argument that photography is a mechanical process devoid of human authorship. The Court ruled that Sarony’s numerous creative choices in composing the photo meant he was the author of the work and, therefore, should be treated as such under the Copyright Act. Since then, courts, including the Supreme Court, have repeatedly held that only a minimal degree of creativity is required for something to be copyrighted. 

In the present case, Allen created his artwork by inputting many, many text prompts (over 600!!) into Midjourney to get the result he wanted out of the AI. Also, once he finished creating that initial image, he tweaked and upscaled it using additional software like Adobe Photoshop. The Copyright Office, nonetheless, denied registration for this work, finding that it lacked the “traditional elements of authorship” because Allen “did not paint, sketch, color, or otherwise fix…” any portion of the image.

However, as Allen’s attorney points out in his lawsuit, there is no legal definition of the “traditional elements of authorship” and, what’s more, creativity, not the actual labor of producing a work, is the hallmark of authorship under the Copyright Act. 

What to make of this case? Well, for starters, I’m curious to see the Copyright Office’s response regarding its narrow and archaic “traditional elements of authorship.” I imagine it’s going to be hard, if not impossible, to claim those can’t include use of a machine because, well, most everything that is obviously eligible for copyright protection in the 21st Century (music, movies, photography, etc.) uses hardware and software. Also, I wonder to what extent some of the issues in this case reflect a basic uncertainty about how to characterize and appraise the skills (conceiving and refining detailed prompts) Allen employed to get Midjourney to create the work, compared to what we traditionally think of as visual art skills (painting and drawing). And, elaborating on that last point, how do we define creativity in light of all of the crude AI slop out there? (One example: check out the chair in this clip when the reporter retakes her seat.) Do we need to make some big decisions about what qualifies as helping “to promote the Progress of Science and useful Arts” (the purpose of the Copyright Act) by taking into account that some created work is good, borne of inspiration, purpose and ever-evolving skills, while a lot of stuff that gets made is just plain lazy, bad and crudely functional? Tough calls lie ahead.

AI: One Human has Some Questions

By Emily Poler

I’ve written a lot on this blog about the legal battles between copyright owners and the AI platforms that have used and continue to use copyrighted works to train their LLMs. However, I haven’t been terribly explicit about my views on what’s right and what’s wrong. Instead, I’ve focused on the parties’ legal maneuvers and what I see as the strengths and weaknesses in the various arguments and judges’ decisions, while also suggesting that existing case law can be extended to cover new technologies. This has been an intentional choice because I’m a lawyer and a litigator, not a policy maker. Therefore, I might not be the best person to opine on what’s “right” and what’s “wrong.” 

I do, however, wonder whether it is time to recalibrate our legal approach to some copyright issues. After all, U.S. copyright law traces its origins back to English common and statutory law from the 18th century, and it’s fair to say that things have changed A LOT since the days when George III wore the crown. 

So, given that everyone can use some light reading after the holiday weekend, I thought that with summer in the rearview (sigh), I’d wade into this thicket with a few thoughts and questions. 

In the main, I find the idea that companies like Anthropic, Google, Meta and OpenAI can mine a vast amount of content without compensating creators to be really problematic. The U.S. Constitution’s Copyright Clause (“The Congress shall have Power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries”) is intended to incentivize creation of new works. The idea here is that society as a whole benefits from incentivizing individual creators, while fair use provides a mechanism to allow others to create new works using existing works and thus further benefit society. 

Fair use, which is what AI companies rely on in their arguments to allow them to mine copyrighted content, is disturbing to me in this context because it’s hard to believe in 2025 that any tech company is acting in the public interest or that its innovations will improve society at large. And so, my question here is, is any court capable of determining the potential societal benefit (or lack thereof) from a given innovation? It seems super hard because (1) long term benefits and downsides are difficult or impossible to predict, and (2) any one technology can have results both bad (Internet > social media) and good (Internet > not having to look at a paper map while driving).

I also have questions about how to properly classify what AI companies derive from copyrighted works. The companies argue that their training models are taking only non-expressive information — how words and other information are arranged — from human-created materials, and not expressive content — the meaning of the words and information. In other words, they claim an LLM scanning a scholarly work on who authored Shakespeare’s plays is only using the words and sentences to learn how humans think and communicate, and not actually paying attention to (and potentially ripping off) the author’s arguments that Christopher Marlowe is the true creator of Romeo and Juliet.

But can we really make that distinction? The way I arrange words in this blog post is, in fact, central to the idea that I’m expressing. By way of comparison, the argument that how human authors arrange words is “non-expressive” might be akin to saying that Death of a Salesman read by a monotone, computer-generated voice is the same as the play performed by an actor with years of training. I, for one, have a hard time buying that.

Furthermore, the role of corporations has changed dramatically since Parliament passed the first copyright statute — the Statute of Anne — in 1710. This makes me wonder if it’s time to consider whether copyright law should distinguish between incentives for companies to create works, and incentives for individuals to create. 

Obviously, these are all big questions that in one way or another are currently being touched upon in the courts. But what all my questions come down to is, are the courts really who should be answering them? I worry that without a broader, societal examination of how copyright law should be applied to AI, as opposed to narrow courtroom applications of centuries-old laws and principles to previously unimaginable technologies, we will get results that only benefit large companies while hurting individual creators and, ultimately, society as a whole — which would be the exact opposite of what copyright law was created to achieve. 

Judge Lets Llama March Forward

By Emily Poler

As I noted in my previous post, there have been two recent decisions involving fair use and AI. Last time around, I wrote about the case brought by a group of authors against Anthropic. This time, we turn to the other case where there was a recent decision — Kadrey v. Meta Platforms, Inc. — which was also brought by a number of writers. 

To cut to the chase, in Kadrey (a/k/a the Meta case), the judge granted Meta’s motion for summary judgment on the issue of fair use, finding that Meta’s use of Plaintiffs’ copyrighted works to train its large language model (LLM) — Llama — was highly transformative and, therefore, Meta’s processing of Plaintiffs’ works was fair use.

As in the Anthropic case, Meta ingested millions of digitized books to train Llama. It obtained these books from online “shadow libraries” (i.e., pirated copies). Unlike the judge in the Anthropic case who found fair use only where Anthropic paid for the initial copy and actually used the works in developing its LLM, the judge in the Meta case was unfazed by Meta’s use of pirated works. According to the judge in the Meta case, the fact that Meta started with pirated works was irrelevant so long as the ultimate use was transformative. 

In other words, the ends justify the means, which seems like a pretty novel way of deciding fair use issues. 

Also of interest: the judge in the Meta case didn’t spend any time discussing the exact nature of Meta’s use. Instead, he assumed that the use was Llama itself. This stands in pretty sharp contrast to the judge in Anthropic who spent quite a bit of time looking at the various ways Anthropic used and stored the downloaded books. This seems not great because the intermediate steps (downloading, storing, cataloging, and making pirated works available to internal users) represent copyright infringement on their own. The court here, however, largely glossed over these issues because all of the “downloads the plaintiffs identify had the ultimate purpose of LLM training.” 

With that said, the judge in the Meta case did invite other authors to bring a second case against Meta and provided those putative plaintiffs with a roadmap of the evidence that would support a ruling that Meta’s use of their works was not fair use. Here, the judge suggested a novel way of looking at the fourth fair use factor, which focuses on market impact, proposing that indirect substitution could weigh against a finding of fair use. That is, a court could consider whether, for example, someone buying a romance novel generated by Llama substitutes for a similar novel written by a human. In the judge’s words, other plaintiffs could pursue “the potentially winning argument — that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution.” 

While this certainly has some appeal, it also seems a little unwieldy. Does everyone who writes a romance novel get to point to content generated by LLMs and say that’s substituting for their work? What happens if a book spans genres? How much of a market impact is required? 

Overall, the cases against Anthropic and Meta represent some pretty big wins for AI platforms at least as far as copyright infringement goes. However, there are still plenty of areas of uncertainty that should keep things very interesting as these cases march on. 

The Anthropic Decision: A (Sorta) Win for AI

By Emily Poler

Two recent court decisions are starting to provide some clarity about when AI companies can incorporate copyrighted works into their large language models (LLMs) without licenses from the copyright holders. One is in a suit against Meta; we’ll get to that in a future post. 

Today, let’s focus on the suit brought by a group of authors against Anthropic PBC, the company behind Claude, a ChatGPT and Copilot competitor. (For what it’s worth, I’ve found Claude to be the best AI of the three). Bottom line: “The training use was a fair use,” wrote Judge William Alsup. “The use of the books at issue to train Claude and its precursors was exceedingly transformative.” This ruling is a landmark as it’s one of the first substantive decisions on how fair use applies to AI — and it’s a big win for AI, right? Well, there’s a catch.

But first, some background. To create Claude (I love how AI companies give their LLMs these friendly, teddy bear names that mask the fact that they’re machines that can cause real harm), Anthropic collected a library of approximately seven million books. In some cases, Anthropic purchased hard copies and scanned them. But mostly, it just grabbed “free” (aka, pirated) digital copies from the Internet. At least three authors whose books were used — Andrea Bartz, Charles Graeber and Kirk Wallace Johnson — were not amused, and in 2024 they filed a class action suit against Anthropic, alleging copyright infringement for training Claude on their works and for obtaining the materials without paying for them. 

As far as Anthropic’s training of its LLM on copyrighted materials, the Court found this to be fair use since it dramatically differs from the works’ original purpose. As the judge wrote, “the technology at issue was among the most transformative many of us will see in our lifetimes.” This is a big deal.

But what’s also a big deal — and the catch for Anthropic — is that if you’re going to train an AI on copyrighted materials, you have to pay for them. In most cases, Anthropic didn’t. And thus, Judge Alsup is allowing the case to proceed to trial, writing that Anthropic “downloaded for free millions of copyrighted books in digital form from pirate sites on the internet.” 

For me, there are a couple of notable takeaways here, some purely legal and some the kind of common sense I suspect most kindergartners could point out. Let’s talk about the purely legal point first. The Court went to great lengths to distinguish the different ways that Anthropic used the works, which was critical in its fair use analysis. 

As part of Anthropic’s process, when it scanned a purchased book it discarded the original copy. The Court found this constituted fair use as long as the hard copy was destroyed and the digitized version not distributed outside the company. However, Anthropic kept all the books, including the millions of pirated copies, in a general library even after deciding that some books in this library would not be used for training now, or maybe ever. The judge specifically noted this implied the company’s primary purpose was to amass a vast library without paying for it, regardless of whether it might someday be used for a transformative purpose, and that such a practice directly displaced legitimate demand for the authors’ works. 

The opinion is especially interesting to me because of how the Court distinguished the facts of this case from other fair use cases. For example, the Court pointed out that in most (if not all) other fair use cases, the defendant obtained the initial copy legally, either by purchasing it or using a library copy. 

This brings us to the other big takeaway, which is a mix of legal reasoning combined with morals and common sense: A defendant doesn’t get a free pass on stealing copyrighted materials just because it does something neat with those materials. In his opinion, the judge consistently ruled that it’s not ok to pirate books. This should have been obvious to Anthropic (and its lawyers) as I think that most children could tell you doing something cool or interesting with the proceeds of a bank robbery doesn’t make the bank robbery legal. This is particularly true given that Anthropic’s whole marketing schtick is that it’s less evil than other technology companies. In fact, Anthropic’s lawyers seemed to acknowledge as much at oral argument, saying “You can’t just bless yourself by saying I have a research purpose and, therefore, go and take any textbook you want. That would destroy the academic publishing market if that were the case.” 

It will be fascinating to see what happens in the trial, slated to start in December. If judgment on the infringement claim goes against Anthropic, U.S. copyright law allows for statutory damages of up to $150,000 per infringed work. With more than seven million pirated books in Anthropic’s library, the damages could be huge.
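Just how huge? A back-of-the-envelope calculation, assuming the roughly seven million pirated books described in the opinion and the statutory maximum of $150,000 per work (the actual figures a jury might use would almost certainly differ), puts the theoretical ceiling north of a trillion dollars:

```python
# Illustrative ceiling on statutory damages; both inputs are rough assumptions.
books = 7_000_000        # approximate number of pirated books in Anthropic's library
max_per_work = 150_000   # statutory maximum per willfully infringed work (17 U.S.C. § 504)

max_exposure = books * max_per_work
print(f"${max_exposure:,}")  # $1,050,000,000,000
```

Even a small fraction of that ceiling would be an existential number for most companies.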

Also huge, of course, is the precedent set here that training AI on copyrighted works is fair use. It’s a significant decision that many have been waiting for that will have enormous repercussions on, well, just about everything going forward.

Stay tuned. More to come soon on the suit against Meta involving Llama, its LLM.