
Canadian Courts Grapple with AI Use After Multiple Cases of Fake Legal Citations
Canadian courts are establishing new guidelines for the use of artificial intelligence in legal proceedings following several incidents of AI-generated fake case law citations, including a prominent case before the Supreme Court of British Columbia in Vancouver.
In Zhang v. Chen, lawyer Chong Ke submitted fake case law citations generated by ChatGPT in a December 2023 application seeking an order for children to visit China [1]. The incident led to a Law Society of B.C. investigation and sparked nationwide discussion about AI's role in courtrooms.
Similar AI 'hallucinations' have since been identified in other Canadian legal venues, including cases before the B.C. Human Rights Tribunal, the federal Trademarks Opposition Board, and the B.C. Civil Resolution Tribunal [1].
In response, Canadian courts have adopted varying rules. Courts in Alberta and Quebec now require human verification of AI-generated submissions, while the Federal Court requires a declaration when AI has been used to create court documents [2].
Legal experts remain divided on AI's future role. Katie Szilagyi, a University of Manitoba law professor, notes that many lawyers already use AI for tasks like drafting memos [1]. However, Justice Peter Lauwers of Ontario's Court of Appeal warns that AI in the legal field is 'overhyped' and 'not ready for prime time' [3].
The Canadian Judicial Council has taken a firm stance, with Chief Justice Richard Wagner declaring that 'AI cannot replace or be delegated judicial decision-making.' However, the Council acknowledges potential opportunities for AI to support judges in limited capacities [1].
A particular concern among judges is the threat of deepfake evidence. UBC law professor Benjamin Perrin emphasizes that the chain of custody for digital evidence has become increasingly critical as AI technology advances [2].