
Hallucinated justice: When AI gets lawyers into trouble

by Michał Wiewióra

In 2023, two New York lawyers made headlines not for their eloquence or courtroom triumphs but for submitting a brief citing fictitious court decisions generated by ChatGPT. The consequences were severe: disciplinary proceedings, a fine, public embarrassment, and a warning to the legal profession worldwide.[1] This incident exemplifies the peril of using AI in litigation.

Poland’s guidebook for legal advisers using AI

As AI tools become increasingly integrated into the legal workflow (drafting motions, summarising documents, and predicting outcomes), litigators must balance efficiency with accountability. In Poland, the National Chamber of Legal Advisers has responded with comprehensive recommendations, urging lawyers to embrace AI cautiously and ethically.[2]

According to the 2025 publication AI in the Work of an Attorney-at-law, generative AI can significantly enhance legal productivity: summarising case law, preparing reports, assisting with translations, and even generating preliminary contract drafts. AI tools promise time savings and scalability, especially for solo practitioners and small firms.

Risks and prevention

However, the same report highlights serious risks. Chief among them is the phenomenon of “AI hallucinations”: plausible-sounding but fabricated content. When a legal brief contains invented citations, the lawyer, not the machine, bears responsibility. The ethical rules are unambiguous: all filings must be verified, and reliance on AI does not relieve a lawyer of the duty of due diligence.

Poland’s National Chamber of Legal Advisers highlights three key principles for the use of AI:

  • Transparency – clients should know if and how AI is being used in their case.
  • Verification – outputs generated by AI must be reviewed for accuracy and legality.
  • Confidentiality – sensitive data cannot be casually uploaded to open AI models.

EU AI Act

The EU AI Act, which entered into force in August 2024, imposes additional compliance obligations, especially for systems deemed “high risk”.[3] Legal professionals must assess whether their AI tools meet transparency, security, and data protection standards. Violations could lead not only to ethical sanctions but also to regulatory penalties.

Opportunities and challenges

The above recommendations are clear: AI is a tool, not a substitute for legal judgment. As with cloud computing years ago, the legal profession must adapt, but always with human oversight. The profession stands at a technological crossroads – lawyers who use AI wisely will gain a competitive edge, while those who do not may face reputational or legal disaster.

The cautionary tale from New York is no anomaly. It is a preview. The challenge is not whether to use AI but how to do so without compromising professional integrity.


[1] https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/; https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt/

[2] https://kirp.pl/rekomendacje-dotyczace-korzystania-z-ai/

[3] https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en; https://eur-lex.europa.eu/eli/reg/2024/1689/oj/eng


Michał Wiewióra is a dispute resolution expert specialising in litigation and arbitration across contract law, post-M&A, real estate, and antitrust.

21 August 2025

Penteris