Health Care Providers & Artificial Intelligence – User Beware!

The use of AI in the legal world has attracted a fair bit of attention lately, and unfortunately it is for the wrong reasons. As with most advancements in the field of legal research, lawyers in the US tend to lead the way, and their newest technology and strategies eventually make their way north to Canada. The use of AI in legal research is no exception. Recent events, however, serve as a cautionary tale for any lawyer, and for any other professional who uses AI for research, paper writing, or even drafting treatment plans.
If you haven’t heard of “AI hallucinations” yet and you are using AI in your everyday work, you should take a moment to read about them. More than a handful of lawyers in the US, and more recently in Canada, have been reprimanded by judges for using AI to draft legal briefs filed with the court. What these lawyers didn’t know is that AI can and will create fictitious legal cases. You read that correctly: AI will fabricate legal cases (known as precedents) that do not exist in order to justify the argument the lawyer is advancing.
Recently, Justice Joseph F. Kenkel of the Ontario Court of Justice ordered a criminal defence lawyer to refile his defence submissions after finding “serious problems” in them. Justice Kenkel noted that one case cited appeared to be fictitious, while several other citations referred to unrelated civil cases, and still others pointed to cases that were not authority for the point being made. He ordered the lawyer to prepare a new set of defence submissions with the following condition: “Generative AI or commercial legal software that uses GenAI must not be used for legal research for these submissions”.
A French lawyer, Damien Charlotin, has been tracking legal decisions worldwide in cases where generative AI produced hallucinated content. In many of those cases, the lawyers used fake case citations. The list identifies 137 cases so far. In the list’s first Canadian case, Zhang v. Chen, B.C. Justice D. M. Masuhara reprimanded a lawyer on February 23, 2024 for inserting into a notice of application two fake cases that were later discovered to have been created by ChatGPT. The judge, who described the errors as “alarming”, ordered the lawyer to pay court costs personally. “As this case has unfortunately made clear, generative AI is still no substitute for the professional expertise that the justice system requires of lawyers,” Masuhara wrote in a ruling on costs.
Justice Masuhara’s comments were made in the context of the practice of law; however, they apply to all professions. What health care providers need to know is that AI hallucinations are not limited to the world of legal research. Various AI programs have created fictitious medical studies, engineering studies, and lab studies, to name just a few.
Health care providers should therefore be cautious when using generative AI in their practice, whether it is to support treatment recommendations in the form of a Treatment Plan, to prepare a progress report for an insurance company, or to draft an expert report.
With respect to expert reports, the Rules of Civil Procedure in Ontario were recently amended to address the use of AI by experts. Any health care provider or other expert witness who provides an expert report must sign Form 53 (Acknowledgment of Expert’s Duty). The form was amended in 2025 to include the following certification: “I certify that I am satisfied as to the authenticity of every authority or other document or record to which I have referred in the expert report accompanying this form”.
Recent events have taught us that AI hallucinations are a common occurrence across various professions. While AI may be helpful for generating ideas and doing basic research, health care providers should be wary of placing too much reliance on this relatively novel tool.