Why Lawyers Keep Using ChatGPT (Even When It Gets Them in Trouble)

June 1, 2025

Every few weeks, another story breaks about a lawyer submitting court documents filled with fake case law. The common thread? Those filings were drafted with help from a large language model like ChatGPT, which had confidently made up legal citations. These AI-generated hallucinations can lead to embarrassment, court sanctions, or even fines. And yet, lawyers keep using AI tools. Why?

The answer lies in the pressure lawyers face and how quickly AI has become a part of everyday work. Legal research databases like Westlaw and LexisNexis now include AI features. For attorneys juggling demanding caseloads, tools like ChatGPT seem like fast and cost-effective assistants. Most are not handing their filings over to AI entirely. They use it for tasks like summarizing cases, drafting outlines, or brainstorming legal arguments. But that is where problems start. Many attorneys do not fully understand how AI works or how likely it is to hallucinate. One lawyer who was sanctioned in 2023 admitted he thought ChatGPT was just a very powerful search engine.

Andrew Perlman, dean of Suffolk University Law School, says the headlines do not show the full picture: most lawyers use AI responsibly, and the ones who get caught are the exception. Perlman believes the potential upside of AI far outweighs the risks, pointing out that Westlaw and other tools already use AI for reviewing filings and sifting through large volumes of discovery material.

Lawyers seem to agree. According to a 2024 Thomson Reuters survey, 63 percent of attorneys had used AI tools and 12 percent said they used them regularly. Most reported using AI to summarize case law or to research statutes and sample legal language. Half of those surveyed said exploring AI was now their top work priority. One respondent put it simply: the role of a good lawyer is as a trusted advisor, not just a producer of documents.

Still, those documents matter in court. In a recent high-profile case, lawyers for journalist Tim Burke submitted a motion that was later thrown out because it contained nine hallucinated citations. One of the lawyers, Mark Rasch, said he used ChatGPT’s deep research feature along with Westlaw’s AI. He later took full responsibility for the mistakes.

Rasch is not alone. Lawyers representing Anthropic admitted they used Claude AI to help write a court filing. That document included a citation with the wrong title and incorrect authors. In another case, a professor who backed a Minnesota law on deepfakes used ChatGPT to help organize citations. His declaration included hallucinated case law and incorrect author names.

Judges are growing less forgiving. In a lawsuit against State Farm, a California judge wrote that he was initially persuaded by a filing: intrigued by the cases it cited, he went to look them up, only to find they did not exist.

Perlman says generative AI can still help lawyers do their work better, faster, and at lower cost. He sees it as useful in nearly every task, from reviewing documents to brainstorming opposing arguments. But lawyers still have to check what the tools produce. That was true even before AI: lawyers under pressure often misquote or misuse cases simply because they are rushed. At least back then, the cases usually existed.

A bigger problem may be misplaced trust. AI responses can look polished and convincing at first glance. That makes it easy for someone to believe the answer is right without checking.

Arizona lawmaker and election attorney Alexander Kolodin says he treats ChatGPT like a junior associate. He has used it to write legislation and first drafts of amendments. In 2024, he used it to define deepfakes in a bill that later passed into law. He said he added the legal protections himself. To avoid hallucinations, he always checks the citations.

You do not just send out a junior associate’s work without verifying the sources, Kolodin said. A person can misread a case or cite the wrong point. The same standard applies with AI.

Kolodin uses both ChatGPT and LexisNexis's AI tools. In his experience, LexisNexis hallucinates more often, while ChatGPT's hallucination rate has dropped over the past year.

AI use is now so widespread that in 2024, the American Bar Association issued its first guidelines for using AI tools. The guidance reminds attorneys that they have a duty to maintain technological competence. That includes understanding the risks and limitations of generative AI. Lawyers should know how these tools work, think twice before inputting sensitive client data, and consider telling clients when AI is being used in their casework.

Perlman believes AI will eventually become essential to law practice. At some point, he says, people will stop questioning the lawyers who use AI and start questioning the ones who do not.

Not everyone agrees. Judge Michael Wilner, who sanctioned lawyers for submitting a filing full of hallucinated citations, put it bluntly: even with recent advances, no reasonably competent attorney should outsource legal research and writing to AI without verifying the output.