
AI and the Law: Often Bad. Occasionally Good!

By Emily Poler

I’ve talked a lot here about the legal implications of AI, whether in copyright infringement lawsuits over its development or in problems with how it’s been (mis)used by lawyers. The embarrassment and consequences when an attorney files an AI-drafted brief riddled with hallucinatory errors and false citations? Been there. Copyright infringement cases pending against OpenAI, Meta and other AI companies? Oh yes, we’ve done that. And none of this is ending anytime soon because, no matter how things shake out in the courtroom, one thing is certain: Artificial Intelligence is not going away. If anything, it’s going to become way more pervasive in our business, personal and, yes, legal lives. So with that in mind, let me talk about when, and in what contexts, I see AI as a useful tool that can aid legal work — and where I think it’s a bad idea.

Starting with the positives, AI can be great for writing, which doesn’t always come naturally to this human. It can provide a starting point I can then manually edit, which really speeds up getting started on writing tasks that, for whatever reason, I’d just rather avoid. AI is also very useful for repetitive tasks, like formatting cumbersome documents such as document requests and interrogatories, as well as responses to document requests and interrogatories. (If you’re not a litigator and don’t know what these are, don’t worry. It’s not that exciting.) When it comes to specific AI platforms, in my experience Claude is far better at these routine tasks than Copilot, which could not format things consistently. Hardly surprising, since Copilot is a Microsoft product and, despite it now being the second quarter of the 21st century, Microsoft still can’t seem to get its most basic product (Word) right, as it still changes the formatting of documents without rhyme or reason. But I digress.

How else is AI useful for lawyers? I’ve seen that clients sometimes find AI-generated materials helpful or comforting when they are struggling to comprehend a legal concept. Instead of trying to get me on the phone, they can easily ask ChatGPT relevant questions and get quick answers. Litigation can be quite anxiety-inducing for a client, and if gaining a better understanding of what’s happening puts their minds at ease, fantastic. Of course, we have to keep the big caveat in mind: As everyone should know by now, AI-generated information is NOT always accurate.

Speaking of which, AI use is obviously a real problem when, for example, a lawyer’s time (and billing) is devoted to reviewing bogus legal citations that AI has magically created, or when AI produces a case or a statute that seems pertinent but, because it is provided without the full context, turns out upon further review to be irrelevant. Also, at least in my experience, none of the AI platforms are particularly good at telling when someone is lying or heavily shading the truth. If an adversary is blatantly presenting untrue “facts,” AI platforms — which work by analyzing what words go together — can’t necessarily tell the difference between truth and fiction. They also can’t account for human behavior which, you might have noticed, is sometimes weird and unpredictable.

Time and time again, we see explicit and often embarrassing examples of why AI should not and cannot be trusted by lawyers for research. I’ve written about several cases where lawyers were humiliated and punished by judges for presenting briefs filled with AI-generated nonsense, sometimes digging themselves into deeper holes with ridiculous excuses and justifications (here’s an excellent example). And yet, despite this, the use of AI to conduct legal analysis is becoming increasingly prevalent among those who work both inside and outside the legal field. It saves time, it saves money, it makes things easy, and as we know all too well, humans are always eager to overlook errors for the sake of convenience. But I will not get sucked into its wanton and irresponsible use. I might use it for routine and mechanical tasks, but whenever a situation requires critical thinking or multiple logical steps, I rely on hard work and human analysis and forgo the assisting “skills” of generative AI.

One final note: Trachtman & Poler Law is a small firm. I am aware that BigLaw firms have developed their own AI platforms, and the data in these private AI platforms is, well, private. We don’t have that. There may be a time and a place where this is something we explore, but we’re not there yet.

“Traditional Elements of Authorship”: A Tad Too Creative?

By Emily Poler

I previously wrote about the US Copyright Office’s policy on works created with AI and the decision in Thaler v. Perlmutter, which denied copyright registration for a work listing an AI platform as its sole author. In that post, I predicted we’ll soon see litigation over which elements of work created with AI can be copyrighted. 

While I’m pretty sure those suits will start to pop up, right now I want to talk about another case where the Copyright Office decided that a work created with AI was ineligible for copyright protection. This case, Allen v. Perlmutter, also raises some of the issues I noted in another recent post where I suggested it might be time to reconsider some of the policies underlying US copyright law in light of how much has changed since the US Constitution and the first copyright law were created in the 18th century.

The story: Jason Allen created an image titled Théâtre D’opéra Spatial using Midjourney AI and entered it in the 2022 Colorado State Fair’s annual fine art competition, where it won a prize. The US Copyright Office, however, was less impressed and denied his application for copyright protection, finding that the image was created by Midjourney rather than by Allen. Allen then filed suit challenging that decision. (Before diving in, two notes. One, H/T to Paul LiCalsi for pointing this case out to me. Two, in case you’re wondering, Shira Perlmutter, the defendant in both Thaler and Allen, was, until recently, the Director of the US Copyright Office).

Some background. To be eligible for a copyright, a work must be “original” and have an “author.” Of course, the law has long recognized that humans create copyrightable materials using machines all the time. In 1884’s Burrow-Giles Lithographic Co. v. Sarony, the Supreme Court found Napoleon Sarony’s photograph of Oscar Wilde was eligible for copyright protection, rejecting the defendant’s argument that photography is a mechanical process devoid of human authorship. The Court ruled that Sarony’s numerous creative choices in composing the photo meant he was the author of the work and, therefore, should be treated as such under the Copyright Act. Since then, courts, including the Supreme Court, have repeatedly held that only a minimal degree of creativity is required for something to be copyrighted.

In the present case, Allen created his artwork by inputting many, many text prompts (over 600!!) into Midjourney to get the result he wanted out of the AI. Also, once he finished creating that initial image, he tweaked and upscaled it using additional software like Adobe Photoshop. The Copyright Office, nonetheless, denied registration for this work, finding that it lacked the “traditional elements of authorship” because Allen “did not paint, sketch, color, or otherwise fix…” any portion of the image.

However, as Allen’s attorney points out in his lawsuit, there is no legal definition of the “traditional elements of authorship” and, what’s more, creativity, not the actual labor of producing a work, is the hallmark of authorship under the Copyright Act. 

What to make of this case? Well, for starters, I’m curious to see the Copyright Office’s response regarding its narrow and archaic “traditional elements of authorship.” I imagine it’s going to be hard, if not impossible, to claim those can’t include use of a machine because, well, most everything that is obviously eligible for copyright protection in the 21st century (music, movies, photography, etc.) uses hardware and software. Also, I wonder to what extent some of the issues in this case reflect a basic uncertainty about how to characterize and appraise the skills (conceiving and refining detailed prompts) Allen employed to get Midjourney to create the work, compared to what we traditionally think of as visual art skills (painting and drawing). And, elaborating on that last point, how do we define creativity in light of all of the crude AI slop out there? (One example: check out the chair in this clip when the reporter retakes her seat.) Do we need to make some big decisions about what qualifies as helping “to promote the Progress of Science and useful Arts” (the purpose of the Copyright Act) by taking into account that some created work is good, born of inspiration, purpose and ever-evolving skills, while a lot of stuff that gets made is just plain lazy, bad and crudely functional? Tough calls lie ahead.

Artificial Unintelligence Strikes Again

By Emily Poler

Will they never learn? Yet another chapter has opened in the ongoing follies of lawyers using AI. This time, it’s in a putative class action against OnlyFans’ parent company — Fenix International Ltd. — for, among other things, civil violations of the Racketeer Influenced and Corrupt Organizations (RICO) Act. Putting the merits of Plaintiffs’ claims aside, a very unwise and/or unfortunate contract attorney working for one of the law firms representing Plaintiffs used ChatGPT to generate four briefs filed in federal court in California. It turns out that those briefs contained dozens of AI-generated hallucinations, including 11 made-up citations (out of a total of 18) — fictional cases, rulings and precedents that are embarrassing the guilty attorneys and threatening to derail a potentially legitimate lawsuit.

Oops.

In case you don’t know, OnlyFans is an Internet platform on which individual creators get paid via subscriptions and pay-per-view for the video content they generate (yes, there’s a lot of porn on it). The suit was filed on behalf of OnlyFans users who allege the site employs professional “chatters” (including AI) to impersonate content creators in their interactions with users, without disclosing that users aren’t messaging with the actual personalities they pay to talk with (let’s not get into how this is a RICO violation).

Just by way of background, whenever I submit something to a court, either I or our paralegal goes through every citation, whether it’s to a case or to an underlying document, and makes sure (1) it says what we say it says, and (2) we have the citation and the name of the case right. Obviously, the point of this exercise is to avoid making misrepresentations that damage our client’s case and to avoid embarrassing ourselves in front of a judge. Both are things that I really, really try to avoid. Also, one of my jobs as an attorney is to try to avoid calling attention to myself and, instead, keep the focus on the merits of my client’s arguments (or the lack of merits of the other side’s arguments).

Yet one of the firms representing Plaintiffs in the OnlyFans suit, Hagens Berman Sobol Shapiro LLP, seems not to have taken these basic steps. That firm hired a contract attorney as co-counsel. Apparently, she was dealing with a family crisis at the time and turned to AI as a shortcut in preparing the briefs. AI — predictably — generated all the errors and invented citations. As if that weren’t bad enough, after Skadden Arps Slate Meagher & Flom LLP, the lawyers defending Fenix, discovered the issue and brought it to the court’s attention, Hagens Berman tried to explain it away rather than simply admitting its screwup and begging for forgiveness. As a result, the firm now finds itself in a fight to have the judge let it redo the AI-generated briefs, asserting the corrected briefs are essential to its case. Fenix, meanwhile, is seeking dismissal, arguing there is no way to “correct” fictional citations, adding that Plaintiffs blew their chance to fight the dismissal by using the AI hallucinations in the first place.

A couple of issues worth highlighting. The attorney who used AI may have been under stress because of her personal problems, but that’s no excuse for her actions. It’s also a reminder that attorneys should never take on more work than they can handle, as it is a grave disservice to their clients and, ultimately, to their own reputation — and potentially their career. 

Also, while this attorney was hired as an independent contractor by Hagens Berman, one of the firm’s attorneys signed the actual materials submitted to the Court without first checking her work. This is an absolute no-no. The fact that the contract attorney had reportedly done good work in the past doesn’t make it ok.

What is the proper punishment here? A hearing is set for September 25 to determine whether the court should issue sanctions. Regardless of any discipline meted out to the law firm, the real losers would ultimately be the Plaintiffs if their case is damaged or dismissed because of their attorneys’ negligence. 

Stepping back from this specific case, is it time for broader standards on the use and disclosure of AI in legal work and court papers? For me, the larger question is, are specific rules necessary, or should the failure to catch wrong or false materials created by AI fall within existing codes of conduct mandating that attorneys provide the level of “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation” of clients? In the OnlyFans case, the attorneys clearly did not meet that standard, and while the issue of AI usage is novel, deception of courts is not.

While all this is being hashed out, some courts have already begun to require attorneys to disclose when they use AI. And the decision in this case, which is being watched by many in the legal community, may well set a precedent for how courts deal with AI hallucinations in the future. 

AI: One Human Has Some Questions

By Emily Poler

I’ve written a lot on this blog about the legal battles between copyright owners and the AI platforms that have used and continue to use copyrighted works to train their LLMs. However, I haven’t been terribly explicit about my views on what’s right and what’s wrong. Instead, I’ve focused on the parties’ legal maneuvers and what I see as the strengths and weaknesses in the various arguments and judges’ decisions, while also suggesting that existing case law can be extended to cover new technologies. This has been an intentional choice because I’m a lawyer and a litigator, not a policy maker. Therefore, I might not be the best person to opine on what’s “right” and what’s “wrong.” 

I do, however, wonder whether it is time to recalibrate our legal approach to some copyright issues. After all, U.S. copyright law traces its origins back to English common and statutory law from the 18th century, and it’s fair to say that things have changed A LOT since the days when George III wore the crown. 

So, given that everyone can use some light reading after the holiday weekend, I thought that with summer in the rearview (sigh), I’d wade into this thicket with a few thoughts and questions. 

In the main, I find the idea that companies like Anthropic, Google, Meta and OpenAI can mine a vast amount of content without compensating creators to be really problematic. The U.S. Constitution’s Copyright Clause (“The Congress shall have Power . . . To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries”) is intended to incentivize creation of new works. The idea here is that society as a whole benefits from incentivizing individual creators, while fair use provides a mechanism to allow others to create new works using existing works and thus further benefit society.

Fair use, the doctrine AI companies rely on to justify mining copyrighted content, is disturbing to me in this context because it’s hard to believe in 2025 that any tech company is acting in the public interest or that its innovations will improve society at large. And so, my question here is, is any court capable of determining the potential societal benefit (or lack thereof) from a given innovation? It seems super hard because (1) long-term benefits and downsides are difficult or impossible to predict, and (2) any one technology can have results both bad (Internet > social media) and good (Internet > not having to look at a paper map while driving).

I also have questions about how to properly classify what AI companies derive from copyrighted works. The companies argue that their training models are taking only non-expressive information — how words and other information are arranged — from human-created materials, and not expressive content — the meaning of the words and information. In other words, they claim an LLM scanning a scholarly work on who authored Shakespeare’s plays is only using the words and sentences to learn how humans think and communicate, and not actually paying attention to (and potentially ripping off) the author’s arguments that Christopher Marlowe is the true creator of Romeo and Juliet.

But can we really make that distinction? The way I arrange words in this blog post is, in fact, central to the idea that I’m expressing. By way of comparison, the argument that how human authors arrange words is “non-expressive” might be akin to saying that Death of a Salesman read by a monotone, computer-generated voice is the same as the play performed by an actor with years of training. I, for one, have a hard time buying that.

Furthermore, the role of corporations has changed dramatically since Parliament passed the first copyright statute — the Statute of Anne — in 1710. This makes me wonder if it’s time to consider whether copyright law should distinguish between incentives for companies to create works, and incentives for individuals to create. 

Obviously, these are all big questions that in one way or another are currently being touched upon in the courts. But what all my questions come down to is, are the courts really the ones who should be answering them? I worry that without a broader, societal examination of how copyright law should be applied to AI, as opposed to narrow courtroom applications of centuries-old laws and principles to previously unimaginable technologies, we will get results that only benefit large companies while hurting individual creators and, ultimately, society as a whole — which would be the exact opposite of what copyright law was created to achieve.

Judge Lets Llama March Forward

By Emily Poler

As I noted in my previous post, there have been two recent decisions involving fair use and AI. Last time around, I wrote about the case brought by a group of authors against Anthropic. This time, we turn to the other case where there was a recent decision — Kadrey v. Meta Platforms, Inc. — which was also brought by a number of writers. 

To cut to the chase, in Kadrey (a/k/a the Meta case), the judge granted Meta’s motion for summary judgment on the issue of fair use, finding that Meta’s use of Plaintiffs’ copyrighted works to train its large language model (LLM) — Llama — was highly transformative and, therefore, Meta’s processing of Plaintiffs’ work was fair use.

As in the Anthropic case, Meta ingested millions of digitized books to train Llama. It obtained these books from online “shadow libraries” (i.e., pirated copies). Unlike the judge in the Anthropic case, who found fair use only where Anthropic paid for the initial copy and actually used the works in developing its LLM, the judge in the Meta case was unfazed by Meta’s use of pirated works. According to the judge in the Meta case, the fact that Meta started with pirated works was irrelevant so long as the ultimate use was transformative.

In other words, the ends justify the means, which seems like a pretty novel way of deciding fair use issues. 

Also of interest: the judge in the Meta case didn’t spend any time discussing the exact nature of Meta’s use. Instead, he assumed that the use was Llama itself. This stands in pretty sharp contrast to the judge in Anthropic who spent quite a bit of time looking at the various ways Anthropic used and stored the downloaded books. This seems not great because the intermediate steps (downloading, storing, cataloging, and making pirated works available to internal users) represent copyright infringement on their own. The court here, however, largely glossed over these issues because all of the “downloads the plaintiffs identify had the ultimate purpose of LLM training.” 

With that said, the judge in the Meta case did invite other authors to bring a second case against Meta and provided those putative plaintiffs with a roadmap of the evidence that would support a ruling that Meta’s use of their works was not fair use. Here, the judge suggested a novel way of looking at the fourth fair use factor, which focuses on market impact, proposing that indirect substitution could weigh against a finding of fair use. That is, the judge said a court could consider whether someone, for example, buying a romance novel generated by Llama substitutes for a similar novel written by a human, here writing that other plaintiffs could pursue “the potentially winning argument — that Meta has copied their works to create a product that will likely flood the market with similar works, causing market dilution.” 

While this certainly has some appeal, it also seems a little unwieldy. Does everyone who writes a romance novel get to point to content generated by LLMs and say that’s substituting for their work? What happens if a book spans genres? How much of a market impact is required? 

Overall, the cases against Anthropic and Meta represent some pretty big wins for AI platforms at least as far as copyright infringement goes. However, there are still plenty of areas of uncertainty that should keep things very interesting as these cases march on.