September 16, 2025
Artificial Unintelligence Strikes Again
By Emily Poler
Will they never learn? Yet another chapter has opened in the ongoing follies of lawyers using AI. This time, it’s in a putative class action against OnlyFans’ parent company — Fenix International Ltd. — for, among other things, civil violations of the Racketeer Influenced and Corrupt Organizations (RICO) Act. Putting the merits of Plaintiffs’ claims aside, a very unwise and/or unfortunate contract attorney working for one of the law firms representing Plaintiffs used ChatGPT to generate four briefs filed in federal court in California. It turns out that those briefs contained dozens of AI-generated hallucinations, including 11 made-up citations (out of a total of 18) — fictional cases, rulings and precedents that are embarrassing the guilty attorneys and threatening to derail a potentially legitimate lawsuit.
Oops.
In case you don’t know, OnlyFans is an Internet platform on which individual creators get paid via subscriptions and pay-per-view for the video content they generate (yes, there’s a lot of porn on it). The suit was filed on behalf of OnlyFans users who allege the site employs professional “chatters” (including AI) to impersonate content creators in their interactions with users, without disclosing that users aren’t messaging with the actual personalities they pay to talk with (let’s not get into how this is a RICO violation).
Just by way of background, whenever I submit something to a court, either I or our paralegal goes through every citation, whether it’s to a case or to an underlying document, and makes sure (1) it says what we say it says, and (2) we have the citation and the name of the case right. Obviously, the point of this exercise is to avoid making misrepresentations that damage our client’s case, while also avoiding embarrassing ourselves in front of a judge. Both are things that I really, really try to avoid. Also, one of my jobs as an attorney is to avoid calling attention to myself and, instead, keep the focus on the merits of my client’s arguments (or the lack of merit in the other side’s arguments).
Yet one of the firms representing Plaintiffs in the OnlyFans suit, Hagens Berman Sobol Shapiro LLP, seems not to have taken these basic steps. That firm hired a contract attorney as co-counsel. Apparently, she was dealing with a family crisis at the time and turned to AI as a shortcut in preparing the briefs. AI — predictably — generated the errors and invented citations. As if that’s not bad enough, after Skadden Arps Slate Meagher & Flom LLP, the lawyers defending Fenix, discovered the issue and brought it to the court’s attention, Hagens Berman tried to explain it away rather than simply admitting its screwup and begging for forgiveness. As a result, the firm now finds itself in a fight to have the judge let it redo the AI-generated briefs, asserting the corrected briefs are essential to its case. Fenix, meanwhile, is seeking dismissal, arguing there is no way to “correct” fictional citations and adding that Plaintiffs blew their chance to fight the dismissal by using the AI hallucinations in the first place.
A couple of issues worth highlighting. The attorney who used AI may have been under stress because of her personal problems, but that’s no excuse for her actions. It’s also a reminder that attorneys should never take on more work than they can handle, as it is a grave disservice to their clients and, ultimately, to their own reputation — and potentially their career.
Also, while this attorney was hired as an independent contractor by Hagens Berman, one of the firm’s own attorneys signed the materials submitted to the Court without first checking her work. This is an absolute no-no. The fact that the contract attorney had reportedly done good work in the past doesn’t make it ok.
What is the proper punishment here? A hearing is set for September 25 to determine whether the court should issue sanctions. Regardless of any discipline meted out to the law firm, the real losers would ultimately be the Plaintiffs if their case is damaged or dismissed because of their attorneys’ negligence.
Stepping back from this specific case, is it time for broader standards on the use and disclosure of AI in legal work and court papers? For me, the larger question is, are specific rules necessary, or should the failure to catch wrong or false materials created by AI fall within existing codes of conduct mandating that attorneys provide the level of “legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation” of clients? In the OnlyFans case, the attorneys clearly did not meet that standard, and while the issue of AI usage is novel, deception of courts is not.
While all this is being hashed out, some courts have already begun to require attorneys to disclose when they use AI. And the decision in this case, which is being watched by many in the legal community, may well set a precedent for how courts deal with AI hallucinations in the future.