Generative AI in Law: Growing Adoption, Growing Risks

The use of generative artificial intelligence in legal practice is growing, but new threats are emerging along with it—from large language model hallucinations to data-protection risks. Dr Agnieszka Skorupka, an attorney-at-law at SWPS University in Wrocław (Faculty of Law and Social Communication), explains why AI outputs should never be trusted uncritically.

Widespread access to large language models (LLMs) has changed how the legal community approaches digital technologies. A watershed moment was the launch of ChatGPT in November 2022. Since then, many publicly available chatbots (free or paid) have appeared, giving virtually any user access to text-analysis and text-generation tools.

“Polish lawyers, whether attorneys, legal advisers, or judges, have begun introducing AI tools into their day-to-day practice to increase work efficiency, without waiting for general recommendations, guidelines, or standards. Lawyers use publicly available chatbots, among other things, to analyse documents, draft pleadings and opinions, translate texts, correct language and style, and search for case law,” notes Dr Agnieszka Skorupka, attorney-at-law at SWPS University in Wrocław.

Evidence of frequent AI use among lawyers is provided by Wolters Kluwer’s report Future Ready Lawyer 2024. According to the study, 76% of in-house legal professionals and 68% of law-firm lawyers use generative AI at least once a week (with 35% of in-house lawyers and 33% of law-firm lawyers doing so daily).¹ Dr Skorupka adds that although Future Ready Lawyer 2024 is not the newest study, it remains highly informative because it documents the scale of generative AI use at a time when these tools were only beginning to become widespread in the legal sector. Importantly, the use of AI tools in legal work is not limited to attorneys and legal advisers; it also extends to judges.

“According to recent media reports, an interesting example is a judge at the District Court in Brodnica who, during a hearing in a case concerning the ‘free credit sanction’ and a dispute over the correctness of the APR (RRSO)² stated by a bank, used an AI chatbot (Gemini) to present the parties with an additional APR calculation. The judge explained how the AI tool worked and what data were entered, emphasising that only abstract numerical values were used (without any identifying information about the parties) and noting that the aim was not to replace an expert opinion,”³ the expert recalls.

The scale of hallucinations in AI tools

Current AI models, LLMs in particular, do not “think” or “understand” the world. They generate answers from statistical patterns: they predict the most probable next word in a sequence rather than comprehending the content. Dr Skorupka notes that this operating principle is aptly captured by the concept of the “stochastic parrot,” introduced in a 2021 paper by Emily M. Bender and co-authors.⁴ On this view, LLMs merely imitate human communication, stitching together fragments of text from training data into statistically plausible sequences, much like a parrot repeating overheard words without understanding their meaning.
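
To make the “stochastic parrot” idea concrete, the Python sketch below builds a toy bigram model: it counts which word follows which in a tiny corpus and then samples a “plausible” continuation. This is an illustrative simplification only; production LLMs use neural networks over subword tokens, but the underlying principle of predicting the next token from learned statistics is the same.

```python
import random
from collections import Counter, defaultdict

# Toy illustration only: real LLMs are neural networks over subword tokens,
# but the core idea is the same -- pick a likely next token given the context.
corpus = (
    "the court dismissed the claim . "
    "the court upheld the claim . "
    "the court dismissed the appeal ."
).split()

# Count how often each word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a fluent-looking sequence with no understanding of its meaning.
word, output = "the", ["the"]
for _ in range(6):
    word = next_word(word)
    output.append(word)
print(" ".join(output))  # e.g. "the court dismissed the claim . the"
```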

Using generative AI tools involves a serious risk: hallucinations, i.e., the invention of non-existent information such as legal provisions, court rulings, or bibliographic references. In 2024, Stanford researchers showed that for popular LLMs answering legal queries, hallucinations occurred with a frequency ranging from 69% to 88% of cases.⁵

Meanwhile, legal-technology providers such as LexisNexis⁶ and Thomson Reuters⁷ claim they have significantly reduced—or even fully eliminated—the risk of hallucinations by using RAG (Retrieval-Augmented Generation). RAG “anchors” AI in reliable sources: instead of generating an answer purely from the model’s internal parameters, the system first searches a closed, curated database, retrieves relevant passages, and then produces a response based on that material.
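
As an illustration of the RAG pattern just described (and not any vendor's actual implementation), the sketch below retrieves passages from a small closed corpus and builds a grounded prompt. The corpus, the naive keyword-overlap scoring, and the call_llm placeholder are all assumptions for demonstration; commercial legal tools use vector search over large curated databases.

```python
# A minimal sketch of the RAG pattern, under the assumptions stated above.

CURATED_SOURCES = [
    "Judgment I C 123/24 (hypothetical): the court applied the free credit sanction ...",
    "Consumer Credit Act, art. 45 (excerpt): if the lender breaches ... the borrower ...",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank curated passages by naive word overlap with the query."""
    q = set(query.lower().split())
    return sorted(
        CURATED_SOURCES,
        key=lambda text: len(q & set(text.lower().split())),
        reverse=True,
    )[:k]

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder: substitute any real LLM API call here.
    return "[model answer grounded in the retrieved passages]"

def answer(query: str) -> str:
    # Step 1: search the closed, curated database and retrieve passages.
    passages = retrieve(query)
    # Step 2: instruct the model to answer only from that material; this
    # "anchoring" is what reduces (but does not eliminate) hallucinations.
    prompt = (
        "Answer using only the sources below and cite them.\n\n"
        + "\n".join(passages)
        + f"\n\nQuestion: {query}"
    )
    return call_llm(prompt)

print(answer("How is the free credit sanction applied?"))
```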

Dr Skorupka points out, however, that findings from available research assessing RAG-based legal analysis tools are not as optimistic as vendor promises. While hallucination rates are lower than in general-purpose chatbots built on models such as GPT-4, researchers from Stanford RegLab and Yale ISPS found in 2025 that specialised tools from LexisNexis and Thomson Reuters still fabricate information in roughly 17% to 33% of answers.⁸

“These studies show that we should not place unlimited trust in answers produced by generative AI models and that content delivered by these tools must be verified. Tomasz Prus, an expert in the law of new technologies, adds that there is no single standard for measuring hallucinations in AI models, because outcomes depend on how the question is phrased.⁹ I agree: ‘fighting hallucinations’ is not a one-off ‘problem to solve’ but an ongoing process of risk management in a lawyer’s work, which requires creating internal standards for using AI tools,” emphasises the SWPS University expert.

Risks and opportunities of AI in law

Another issue is privacy and confidentiality: entering sensitive client information or detailed court-file facts into public systems can expose them to leakage. Moreover, general-purpose models often fail to grasp the nuanced specifics of a local legal system and may rely on outdated or biased training data.
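
One basic mitigation, echoing the Brodnica judge's practice of entering only abstract values, is to strip obvious identifiers before any text reaches a public chatbot. The sketch below is a minimal, hypothetical illustration: the regex patterns cover only the simplest cases (an 11-digit PESEL number, e-mail addresses, naive capitalised name pairs) and are no substitute for proper anonymisation tooling or a data-protection review.

```python
import re

# Illustrative, minimal redaction before text leaves the firm's perimeter.
# These patterns are assumptions covering only obvious identifiers; real
# anonymisation needs dedicated tooling and human review, not a few regexes.
REDACTIONS = [
    (re.compile(r"\b\d{11}\b"), "[PESEL]"),                   # national ID
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # e-mail address
    (re.compile(r"\b[A-ZŁŚŻ][a-ząęółśżźćń]+ [A-ZŁŚŻ][a-ząęółśżźćń]+\b"),
     "[NAME]"),                                               # naive name pairs
]

def redact(text: str) -> str:
    """Replace matched identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("Jan Kowalski (PESEL 90010112345, jan@example.com) owes ..."))
# -> "[NAME] (PESEL [PESEL], [EMAIL]) owes ..."
```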

Dr Skorupka stresses that awareness of these risks requires law firms and in-house legal teams to develop internal standards for using AI tools. Professional self-governing bodies should also create industry standards. One example is the National Chamber of Legal Advisers (KIRP), which prepared recommendations in 2025 on the use of AI-based tools by legal advisers.¹⁰

Dr Skorupka also notes that the reliability of outputs depends critically on the quality of the data used to train a model—and that every piece of information and every source generated by a chatbot requires careful, independent verification by the lawyer.

“At present, the most valuable tools—those associated with lower hallucination risk—are chatbots that operate on verified sources and are tailored to the professional needs of lawyers. Consequently, regardless of how advanced a tool may be, the lawyer bears full and exclusive responsibility for the content and consequences of pleadings, documents, and opinions prepared with algorithmic support,” she concludes.


References

  1. Future Ready Lawyer 2024, Legal Innovations: Step into the Future or Fall Behind, Wolters Kluwer, 2024, p. 5, https://www.wolterskluwer.com/pl-pl/know/future-ready-lawyer-2024, accessed 5 Feb 2026.
  2. Annual Percentage Rate of Charge (APR; in Polish: RRSO – Rzeczywista Roczna Stopa Oprocentowania).
  3. https://serwisy.gazetaprawna.pl/orzeczenia/artykuly/10609233,sztuczna-inteligencja-juz-pomaga-w-postepowaniach-przed-sadem.html, accessed 5 Feb 2026.
  4. E. M. Bender, T. Gebru, A. McMillan-Major, S. Shmitchell, On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, in: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2021), pp. 610–623, https://doi.org/10.1145/3442188.3445922.
  5. The researchers built a task set with varying complexity—from simple questions (e.g., identifying the author of a decision) to more complex ones (e.g., assessing the precedential relationship between two cases). They tested over 200,000 queries on three language models—GPT-3.5, Llama 2, and PaLM 2—across multiple dimensions (court level, task complexity, time period): https://hai.stanford.edu/news/hallucinating-law-legal-mistakes-large-language-models-are-pervasive, accessed 5 Feb 2026.
  6. LexisNexis Launches Lexis+ AI, a Generative AI Solution with Hallucination-Free Linked Legal Citations, https://www.lexisnexis.com/community/pressroom/b/news/posts/lexisnexis-launches-lexis-ai-a-generative-ai-solution-with-hallucination-free-linked-legal-citations, accessed 5 Feb 2026.
  7. J. Ju, Retrieval-augmented generation in legal tech, https://legal.thomsonreuters.com/blog/retrieval-augmented-generation-in-legal-tech/, accessed 5 Feb 2026.
  8. M. Dahl, V. Magesh, F. Surani, D. E. Ho, Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools, Journal of Empirical Legal Studies 2025, vol. 22, pp. 216–242, https://doi.org/10.1111/jels.12413.
  9. T. Prus, AI Hallucinations: A New Challenge for Lawyers, https://legalinn.pl/2025/12/22/halucynacje-ai-nowe-wyzwanie-dla-prawnikow/, accessed 5 Feb 2026.
  10. KIRP, KIRP Recommendations on the Use of Artificial Intelligence Tools by Legal Advisers, https://oirp.wroclaw.pl/rekomendacje-kirp-dotyczaca-korzystania-z-narzedzi-opartych-na-sztucznej-inteligencji-przez-radcow-prawnych, accessed 5 Feb 2026.

Source: https://ceo.com.pl/generatywna-ai-w-prawie-rosnie-uzycie-rosna-ryzyka-od-halucynacji-po-ochrone-danych-44595
