19.04.2025

"AI Hallucinations: A Legal Wake-Up Call"

VANCOUVER — The case law looked real to Fraser MacLean.

In December 2023, Fraser MacLean, a family court lawyer based in Vancouver, was reviewing an application filed by opposing counsel, Chong Ke, who sought permission for her client's children to travel to China to visit their father. In support of the application, Ke cited two legal precedents that, as MacLean would later discover, did not exist: they had been generated by the AI tool ChatGPT, which Ke had used without verifying its output.

The incident, which arose in the case of Zhang v. Chen, quickly escalated into a significant legal controversy. The presiding judge reprimanded Ke, and the Law Society of British Columbia subsequently launched an investigation into her conduct. The episode sparked discussion across Canada's legal community about the dangers of using artificial intelligence in legal proceedings, particularly for the integrity of evidence and case law, and raised questions about AI's capacity to provide reliable information and the implications of AI-generated submissions in the courtroom.

In the months that followed, further instances of fabricated AI-generated case law surfaced in other legal forums, including the British Columbia Human Rights Tribunal and the federal Trademarks Opposition Board. As awareness of the issue grew, courts and law societies across Canada began issuing directives on the use of AI. The rules, however, were notably inconsistent, and skepticism persisted about their enforcement: some jurisdictions required human verification of AI-generated submissions, while others required only a declaration that AI had been used in legal documents.

In December 2023, after MacLean flagged the cited cases as non-existent, the Federal Court emphasized that any use of generative AI in court documents must be declared, and indicated that it would not adopt automated decision-making tools for judgments without public consultation. Even as regulators moved to set boundaries, many lawyers continued adopting AI to work more efficiently on tasks such as drafting memos and conducting legal research.

Katie Szilagyi, an assistant law professor, noted that AI's role in the legal sector is expanding rapidly, with significant investment flowing into legal technology. The Canadian Bar Association has issued guidelines urging legal professionals to exercise caution with AI tools: treat AI as an assistant rather than a substitute for human judgment, be transparent about its use, and remain mindful of its limitations, particularly where client confidentiality is concerned.

Benjamin Perrin, a law professor at the University of British Columbia, likewise urged caution in adopting AI within the criminal justice system, citing concerns about fairness, bias, and accountability. Layering AI onto already flawed systems, he warned, could have severe repercussions for judicial processes.

One prominent case that highlighted AI's dangers was the U.S. lawsuit Mata v. Avianca, in which a lawyer filed a submission riddled with fictitious case law generated by ChatGPT. Similar incidents have occurred internationally, reinforcing the need for vigilance about AI in legal contexts. Szilagyi stressed the necessity of a "human in the loop" approach to mitigate the risks of over-reliance on AI.

Daniel Escott, a research fellow at the Access to Justice Centre for Excellence, raised concerns about lawyers' compliance with AI-usage rules, pointing to an apparent lack of transparency in declarations of AI-assisted submissions. At the same time, there is recognition that AI could improve access to justice, particularly for people who cannot afford traditional legal representation.

Perrin also noted that judges are particularly wary of deepfake evidence, which threatens the authenticity of digital evidence in legal proceedings. Robust techniques for validating evidence have become crucial as genuine and AI-generated materials grow increasingly difficult to tell apart.

Peter Lauwers, a justice of the Ontario Court of Appeal, warned that the legal community must guard against accepting fabricated evidence, arguing that AI technology is not yet dependable enough for court use. He acknowledged a potential role for AI in specific applications, such as accident reconstruction, but concluded that its current capabilities in legal contexts are overhyped.

As discussions around AI in the legal landscape continue, the Canadian Judicial Council has made clear that while AI can support judges, decision-making authority must remain strictly human. Chief Justice Richard Wagner reiterated that judges must retain exclusive responsibility for decisions in the courtroom.

The case of Zhang v. Chen remains a compelling reminder of the challenges the legal system faces in integrating AI. MacLean underscored the importance of due diligence in verifying the authenticity of case law and cautioned that, left unchecked, AI could undermine judicial integrity. Relying on fabricated information in legal settings can carry severe consequences, up to and including miscarriages of justice.