Does artificial intelligence deserve a seat in Canada's courtrooms?

VANCOUVER — The case law looked real to Fraser MacLean.

It was December 2023 and the Vancouver-based family court lawyer was reading citations in an application by opposing counsel Chong Ke, who wanted an order allowing the children of her client to visit him in China.

"I read it there in the courtroom for the first time, and there were two cases talking about travel overseas and maintaining the cultural ties for the children," MacLean said, initially thinking the case law would be difficult to oppose.

But the cases weren't real.

They were so-called hallucinations created by the generative artificial intelligence tool ChatGPT, which Ke admitted using to help prepare the application for the British Columbia Supreme Court case.

"Right off the bat, the summaries looked 100 per cent real," said MacLean. "I had no idea when I read them in court that day that these were fake."

The case of Zhang v. Chen would result in a reprimand for Ke from the judge and a Law Society of B.C. investigation. The case then ricocheted around Canada's legal sphere as it weighed the potential pitfalls of allowing artificial intelligence into courtrooms — such as fake case law or deepfaked evidence — against the promise of reduced workloads and improved access to justice.

More suspected AI hallucinations have since been identified among citations in a series of other legal venues — including a B.C. Human Rights Tribunal case, a pair of disputes before the federal Trademarks Opposition Board, and a small claims case in the B.C. Civil Resolution Tribunal.

Courts and law societies across Canada have issued directives on the use of AI by lawyers and court staff. But the rules and guidelines are inconsistent and some doubt whether they are being universally followed.

Some courts, including those in Alberta and Quebec, have a "human in the loop" rule that AI-generated submissions must be verified by someone, while many require declarations if AI is used to prepare submissions.

In December 2023 — about a week after MacLean's office informed Ke that her AI-generated case law could not be found — the Federal Court issued a notice informing litigants that if they used AI "to create or generate new content" in a court document it would have to be declared.

The court also said it would not use "automated decision-making tools" to help render judgments "without first engaging in public consultation."

Courts may be trying to draw a line, but the benefits of AI suggest there may be no holding it back from some tasks within law offices.

Katie Szilagyi, an assistant law professor at the University of Manitoba in Winnipeg, said many lawyers are already using generative AI to streamline tasks such as drafting memos.

"They're absolutely going to be used, and they already are, really broadly. There's a huge amount of investment in this legal tech space," she said.

The Canadian Bar Association issued national guidelines last fall urging caution and suggesting lawyers use AI "as a tool, not as a crutch in the delivery of legal services."

It says lawyers should "consider disclosing if they intend to use generative AI and provide explanations about how the technology will be used," such as research, analysis, document review or trial preparations.

"What's important is to ensure that lawyers are using technology responsibly and are clear on what its limitations are, what it's good at and what it's not good at," said Szilagyi.

She said lawyers using AI must also balance ethical obligations to their clients, such as solicitor-client privilege.

"There's a possibility that the information that you share is then being stored on a server somewhere in a way that is contrary to your ethical obligation, and would result in a privacy breach of really sensitive information," she said.

Benjamin Perrin, a law professor at the University of British Columbia, has begun teaching a course about the uses of artificial intelligence within the criminal justice system.

He said "the stakes are high" within the legal sector, as AI offers the potential to reduce workloads, and improve efficiency and access to justice, but also raises concerns about fairness, bias, transparency, and the preservation of judicial independence.

"There's really good reason to be cautious," Perrin said of adopting AI technology within the criminal justice system. "Layering AI on top of an existing failing, flawed system is, quite frankly, a recipe for disaster."

IS THERE A 'HUMAN IN THE LOOP'?

The most prominent failure of AI in courtrooms came about seven months before Ke's B.C. blunder, in the U.S. lawsuit known as Mata v. Avianca.

Passenger Roberto Mata was suing Avianca airline for injuries said to have been suffered during a flight. But a submission by Mata's lawyer was littered with fake case law, invented by ChatGPT.

Since then, there have been similar instances in Australia, Britain and beyond.

Szilagyi said using AI may help lawyers with menial tasks, but they must always be cautious about automation bias, the assumption that a technology's output is inherently accurate.

One solution, she said, is to follow the "human in the loop" principle to verify case law.

But Daniel Escott, a research fellow at the Access to Justice Centre for Excellence's Artificial Intelligence Risk and Regulation Lab in Victoria, said there need to be more checks and balances to ensure lawyers are not relying too heavily on AI to the detriment of their clients.

He also doubts whether lawyers are following hard rules set by the courts.

In February, Federal Court Chief Justice Paul Crampton told the host of a Canadian Bar Association podcast that out of almost 28,000 legal filings the court received in 2024, after the AI-declaration rule was imposed, “only three or four” were said to involve the use of AI.

Escott said "that doesn't track."

"If you look at the filings at the Federal Court, some (small law firms) are filing three or four times more than law firms 10 times their size," he said.

He said it also appears self-represented litigants are "much more likely to make a declaration that they've used AI."

"The users of the justice system seem to have much less of a problem with transparency when it comes to the use of AI than the lawyers who are supposed to be serving them," he said.

However, he noted, one benefit of AI is that it provides an opportunity to improve access to justice.

Perrin agreed, saying most Canadians cannot afford a lawyer, so AI may be helpful in drafting legal arguments, although there was a risk of them being rejected by judges.

But Perrin said that what judges are "most concerned" about is deepfake evidence.

He said the chain of custody of digital evidence has become even more important as it becomes harder to distinguish whether photos, documents, audio, and videos are genuine or AI-generated.

"I think we're entering a new era when it comes to proof," he said. "How do you prove things when you can't necessarily trust what you see and hear because it could have been manufactured by AI tool?"

Peter Lauwers, a justice of the Court of Appeal for Ontario and chair of the Civil Rules Committee's Artificial Intelligence Subcommittee, said this was the major concern with AI — "that we don't end up picking up stuff that is fake."

"So, that would include hallucinated judgments," he said, adding that rules are being prepared in Ontario to address the concern about deepfakes undermining the courts.

Those rules, he said, will include identifying which software program was used to create AI-generated content and providing evidence that shows the output is valid and reliable.

Lauwers pointed to the use of accident reconstruction as an example of how AI could be successfully used in the courts.

"You can model what was going on in a particular moment using this kind of software and that kind of expertise," he said. "So, we're reasonably confident that those kinds of artificial-intelligence-generated exhibits are good and are reliable."

But overall, Lauwers said there was a general consensus that AI was "not ready for prime time" in the courts.

"AI in the legal field is overhyped. It over-promises and under-delivers," he concluded.

'A HUGE WAKE-UP CALL'

As for the use of AI by judges, the Canadian Judicial Council has indicated that judges' decision-making authority should never be passed on to artificial intelligence.

Chief Justice of Canada Richard Wagner said in an October news release that judges must "maintain exclusive responsibility for their decisions."

"AI cannot replace or be delegated judicial decision-making," Wagner said. "At the same time, the council’s new guidelines acknowledge there may be opportunities to leverage AI responsibly to support judges."

The guidelines say any AI tool used in court applications should be able to provide understandable explanations for its decision-making outputs, and that courts should regularly track the impact of AI use.

In B.C., MacLean said Zhang v. Chen served as a "huge wake-up call," and has since been cited in other cases involving AI hallucinations.

"What’s scary about these AI hallucinations is they’re not creating citations and summaries that are ambiguous, they look 100 per cent real," MacLean said.

"That’s why it’s important that judges and lawyers need to be vigilant in double-checking these things."

Ke said in an email cited in the judgment that when she prepared the submission she used the ChatGPT case-law suggestions without checking them because she "had no idea that these two cases could be erroneous."

The Law Society of B.C. said its investigation into the case was concluded "and the concerns raised have been addressed with the lawyer."

Ke could not be reached for comment.

MacLean said banning AI was not the answer, and instead suggested law firms implement policies requiring transparency, and for lawyers to be trained to spot AI hallucinations.

And the risks? MacLean said the judge in Zhang v. Chen, Justice David Masuhara, said it best.

“Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court,” Masuhara wrote in his February 2024 ruling.

“Unchecked, it can lead to a miscarriage of justice.”

This report by The Canadian Press was first published March 29, 2025.

Brieanna Charlebois, The Canadian Press
