Australian Lawyer Apologises After ChatGPT Fabricates Cases in Murder Trial
An Australian lawyer has been ordered to complete an ethics course after filing a court document in a murder trial that contained fake legal cases generated by ChatGPT.
Key Takeaways
- Lawyer Timothy M. Hale-Cusanelli used ChatGPT for legal research in a NSW murder case.
- The AI chatbot invented fake case names and citations about murder convictions and mental illness.
- The prosecution discovered that the cited cases did not exist.
- The judge mandated an AI ethics course for the lawyer as a consequence.
The AI-Generated Error
Timothy M. Hale-Cusanelli, representing a defendant in a New South Wales murder case, used the AI chatbot to find historical examples where a murder conviction was overturned due to the defendant’s mental illness. The examples he submitted, however, were entirely fabricated by ChatGPT.
Discovery and Apology
The prosecution identified the error, pointing out the cited cases were non-existent. Hale-Cusanelli apologised to the court, stating he was “unaware that ChatGPT could generate false information.”
Judicial Order and Broader Implications
The judge responded by ordering the lawyer to complete a course on the ethical use of AI in legal practice. This incident underscores a critical risk in legal research: AI chatbots can produce convincing yet completely false information.
It also puts renewed focus on a lawyer’s fundamental duty to verify all information submitted to the court.
A Growing Pattern of AI Misuse
This is not an isolated event; it follows a pattern of legal professionals relying on AI with damaging results. Last year, a lawyer in the United States was sanctioned for using ChatGPT to draft a legal brief containing fictitious case citations.
The Australian case serves as a stark reminder: while AI is a powerful tool, its use demands caution, critical thinking, and rigorous verification, especially within the justice system.