AI and the Law: Often Bad. Occasionally Good!

By Emily Poler

I’ve talked a lot here about the legal implications of AI, whether in copyright infringement lawsuits over its development or in problems with how it’s been (mis)used by lawyers. The embarrassment and consequences when an attorney files an AI-drafted brief riddled with hallucinations and false citations? Been there. Copyright infringement cases pending against OpenAI, Meta and other AI companies? Oh yes, we’ve done that. And none of this is ending anytime soon because, no matter how things shake out in the courtroom, one thing is certain: artificial intelligence is not going away. If anything, it’s going to become way more pervasive in our business, personal and, yes, legal lives. So with that in mind, let me talk about when, and in what contexts, I see AI as a useful tool that can aid legal work, and where I think it’s a bad idea.

Starting with the positives, AI can be great for writing, which doesn’t always come naturally to this human. It can provide a starting point I can then manually edit, which really speeds up getting started on writing tasks that, for whatever reason, I’d just rather avoid. AI is also very useful for repetitive tasks like formatting cumbersome documents such as document requests and interrogatories, as well as responses to document requests and interrogatories. (If you’re not a litigator and don’t know what these are, don’t worry. It’s not that exciting.) When it comes to specific AI platforms, in my experience Claude is far better at these routine tasks than Copilot, which could not format things consistently. Hardly surprising, since Copilot is a Microsoft product and, despite it now being the second quarter of the 21st century, Microsoft still can’t seem to get its most basic product (Word) right: it still changes the formatting of documents without rhyme or reason. But I digress.

How else is AI useful for lawyers? I’ve seen that clients sometimes find AI-generated materials helpful or comforting when they are struggling to comprehend a legal concept. Instead of trying to get me on the phone, they can easily ask ChatGPT relevant questions and get quick answers. Litigation can be quite anxiety-inducing for a client, and if gaining a better understanding of what’s happening puts their minds at ease, fantastic. Of course, we have to keep the big caveat in mind: As everyone should know by now, AI-generated information is NOT always accurate.

Speaking of which, AI use is obviously a real problem when, for example, a lawyer’s time (and billing) is devoted to reviewing bogus legal citations that AI has magically created, or when AI produces a case or statute that seems pertinent but, stripped of its full context, turns out on further review to be irrelevant. Also, at least in my experience, none of the AI platforms are particularly good at telling when someone is lying or heavily shading the truth. If an adversary is blatantly presenting untrue “facts,” AI platforms, which work by analyzing what words go together, can’t necessarily tell the difference between truth and fiction. They also can’t account for human behavior, which, you might have noticed, is sometimes weird and unpredictable.

Time and time again, we see explicit and often embarrassing examples of why AI should not and cannot be trusted by lawyers for research. I’ve written about several cases where lawyers were humiliated and punished by judges for presenting briefs filled with AI-generated nonsense, sometimes digging themselves into deeper holes with ridiculous excuses and justifications (here’s an excellent example). And yet, despite this, the use of AI to conduct legal analysis is becoming increasingly prevalent among those who work both inside and outside the legal field. It saves time, it saves money, it makes things easy, and as we know all too well, humans are always eager to overlook errors for the sake of convenience. But I will not get sucked into its wanton and irresponsible use. I might use it for routine and mechanical tasks, but whenever a situation requires critical thinking or multiple logical steps, I rely on hard work and human analysis and forgo the assisting “skills” of generative AI.

One final note: Trachtman & Poler Law is a small firm. I am aware that BigLaw firms have developed their own AI platforms, and the data in those platforms is, well, private. We don’t have that. There may be a time and a place where this is something we explore, but we’re not there yet.