Many are unaware that artificial intelligence (AI) has been used in the legal profession for nearly a decade to analyze data and documents. AI’s use is expanding to nearly all aspects of the legal profession, and today it handles a variety of everyday routine tasks, including document review, deposition and testimony summarization, legal research, and legal writing. This work is “magically” completed with a click of a button. So…is it actually helpful, or is it an enemy in disguise?
Though AI offers attorneys and judges many benefits, such as additional time to focus on strategy and liability analysis, the technology also poses threats and has many flaws that endanger the legal field in its entirety. The dangers posed by AI technology include “bias, discrimination, and privacy concerns.”[i] In addition, AI has the ability to “create deep fake technology (images and videos of fake events) that can spread harmful misinformation and disinformation.”[ii] Because AI relies on “massive data sets,”[iii] there is also the possibility that private and confidential data and documentation will be disclosed. In fact, “[t]here have already been class action lawsuits alleging privacy violations associated with generative AI tools.”[iv]
AI technology known as Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) “uses an algorithm to predict the risk of a defendant committing another crime.”[v] This predictive modeling tool is aiding judges in making bail and sentencing decisions.[vi] Should we be worried about a future like the one depicted in the movie Minority Report? Do not fret. We have not gone dystopian yet. The legal world has discovered that COMPAS technology “could insert implicit racial bias into the process by relying on years of criminal justice system data as a source.”[vii] The biggest challenge in utilizing AI in the law will be avoiding bias and unethical use.
Another problem posed by AI use is the possibility of AI hallucinations. “Large language models have a documented tendency to ‘hallucinate,’ or make up false information.”[viii] “In one highly-publicized case, a New York lawyer faced sanctions for citing ChatGPT-invented fictional cases in a legal brief; many similar cases have since been reported.”[ix] “[A] previous study of general-purpose chatbots found that they hallucinated between 59% and 82% of the time on legal queries, highlighting the risks of incorporating AI into legal practice.”[x]
Retrieval-augmented generation (RAG) has been proposed as a “solution for reducing hallucinations in domain specific contexts.”[xi] In “a new preprint study by Stanford RegLab and HAI researchers, …[the researchers]…put the claims of two providers, LexisNexis (creator of Lexis+ AI) and Thomson Reuters (creator of Westlaw AI-Assisted Research and Ask Practical Law AI), to the test.”[xii] These tests showed that, even though there was improvement, these legal AI tools still “hallucinate an alarming amount of the time.”[xiii] The study “identif[ied] several challenges that [were] particularly unique to the RAG-based legal AI systems, causing hallucinations.”[xiv] These challenges were enumerated:
First, legal retrieval is hard. As any lawyer knows, finding the appropriate (or best) authority can be no easy task. Unlike other domains, the law is not entirely composed of verifiable facts—instead, law is built up over time by judges writing opinions.[xv]
Second, even when retrieval occurs, the document that is retrieved can be an inapplicable authority. In the American legal system, rules and precedents differ across jurisdictions and time periods; documents that might be relevant on their face due to semantic similarity to a query may actually be inapposite for idiosyncratic reasons that are unique to the law. Thus, we also observe hallucinations occurring when these RAG systems fail to identify the truly binding authority.[xvi]
Third, sycophancy—the tendency of AI to agree with the user’s incorrect assumptions—also poses unique risks in legal settings.[xvii]
The study results highlight the need for “rigorous and transparent benchmarking of legal AI tools”[xviii] because hallucination problems have not been solved.
Ultimately, the common understanding is that, without oversight, AI-produced work product is questionable and unreliable. Hallucinations put lawyers’ work at risk, and therefore humans need to review AI-generated work product.[xix] In fact, “a growing number of lawyers and litigants have faced hefty fines and other sanctions for failing to verify AI-generated case citations and other material in court papers. Judges have removed lawyers from some cases or referred them for potential disciplinary action.”[xx]
At Tyson & Mendes, responsibility, reasonableness, and common sense are always utilized in every case, even when employing AI technology. The checklist below will help verify AI-generated content and aid in meeting the standards of responsibility, reasonableness, and common sense.
- Confidentiality: Safeguard all client information.
- Competency: Understand the technology, benefits, and risks.
- Client Consent: Obtain client permission and update engagement letters.
- Provide Minimal Data: Share only necessary information.
- Security: Use AI tools with robust encryption.
- Attorney-Client Privilege: Preserve and do not jeopardize this privilege.
- Human Verification: Verify all AI-generated content before use. AI should not replace legal research.[xxi]
Conclusion:
Despite the negative effects AI can have on the legal field, AI is here, and it is taking its place in nearly every industry in the world. AI has shortened research time and streamlined document review. It provides data trends to predict case outcomes and can assist in reducing litigation costs. The moral of the story is to take precautions with artificial intelligence and remember to utilize human intelligence alongside it.
Sources
[i] Bloomberg Law, How Is AI Changing the Legal Profession?, (May 23, 2024), https://pro.bloomberglaw.com/insights/technology/how-is-ai-changing-the-legal-profession/#how-technology-is-changing-the-legal-field
[ii] Id.
[iii] Id.
[iv] Id.
[v] Id.
[vi] Id.
[vii] Id.
[viii] Stanford University Human-Centered Artificial Intelligence, AI on Trial: Legal Models Hallucinate in 1 out of 6 (or More) Benchmarking Queries, (May 23, 2024), https://hai.stanford.edu/news/ai-trial-legal-models-hallucinate-1-out-6-or-more-benchmarking-queries
[ix] Id.
[x] Id.
[xi] Id.
[xii] Id.
[xiii] Id.
[xiv] Id.
[xv] Id.
[xvi] Id.
[xvii] Id.
[xviii] Id.
[xix] Bloomberg Law, How Is AI Changing the Legal Profession?, (May 23, 2024), https://pro.bloomberglaw.com/insights/technology/how-is-ai-changing-the-legal-profession/#how-technology-is-changing-the-legal-field
[xx] Merken, Sara, Senator warns US judges on AI misuse as courts try to adapt, Reuters, (October 28, 2025), https://www.reuters.com/legal/government/senator-warns-us-judges-ai-misuse-courts-try-adapt-2025-10-28/
[xxi] Turner, Kyle T., Getting Started with Generative AI in Legal Practice, Tennessee Bar Association, (July 31, 2025), https://www.tba.org/?pg=Articles&blAction=showEntry&blog-Entry=129028
Author: Alexandra Noyes
Editor: Grace Shuman