Events
CUHK LAW CLINDS’s 30th LegalTech Seminar – ‘Revisiting Legal Hallucinations’ by Dr. Eliza Mik (Online)
10 March 2026
12:00 pm – 1:30 pm (HKT)
Online via Zoom
Dr. Eliza Mik joined the CUHK Faculty of Law in January 2021. Before entering academia, she worked in-house for a number of software and telecommunications companies in Australia, Poland, Malaysia and the United Arab Emirates, advising on software licensing, technology procurement and e-commerce regulation. Since her PhD thesis, Eliza’s interests have centered on the private law aspects of e-commerce and on the general problems of transaction automation, particularly with the use of technologies commonly referred to as Artificial Intelligence. Eliza is also one of the most cited authors on the topic of “smart contracts.” In recent years, as part of a research grant, her research has also covered the use of large language models in the legal profession. At present, she is a member of the Inclusive Global Legal Innovation Platform on ODR (iGLIP, Hong Kong), a Research Associate at the Tilburg Institute for Law, Society and Technology (TILT, Netherlands) and an Affiliate Researcher with the Centre for AI and Digital Ethics at the University of Melbourne (CAIDE, Australia).
clinds.law@cuhk.edu.hk
The number of cases involving hallucinations is on the rise. In R (Ayinde) v London Borough of Haringey [2025] EWHC 1383, hallucinated cases, or “fake authorities,” were regarded as professional misconduct. The Court lost patience with the use of AI by technically uninformed counsel and signaled the grave consequences: “Placing false material before the court with the intention that the court treats it as genuine may, depending on the person’s state of knowledge, amount to a contempt. That is because it deliberately interferes with the administration of justice.” In 2026, it is no longer possible to feign ignorance of the fact that Large Language Models, or “LLMs,” cannot be trusted and that their outputs require meticulous verification. Although the problem of hallucinations was supposed to be fixed in “the near future,” in late 2025 OpenAI finally conceded that hallucinations are an inevitable side-effect of text generation. Hallucinations are here to stay: LLMs will continue to produce plausible, fluent and seemingly helpful text riddled with inaccuracies and falsehoods. Some confusion is introduced by the excellent benchmark performance of many LLMs and by the incessant use of anthropomorphic terms such as “understand” or “reason” by AI vendors. Does benchmark performance translate into performance on real-world tasks? Can AI vendors be trusted? Additional problems result from the lack of a universally accepted definition of hallucinations. Consequently, there is no reliable way of measuring their occurrence or of evaluating the reliability of specific LLMs or LLM-based tools.
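As a concrete illustration of the “meticulous verification” such outputs require, the following minimal Python sketch flags neutral citations in an LLM draft that cannot be matched against a trusted index. The index, the citation pattern and the fabricated second citation are assumptions for illustration only; real verification means querying an authoritative database and reading the authority itself.

import re

# Hypothetical index of known-good citations (in practice: an authoritative
# source such as a law report series or official judgment repository).
VERIFIED = {"[2025] EWHC 1383"}

# Matches neutral citations of the form "[2025] EWHC 1383".
NEUTRAL_CITATION = re.compile(r"\[\d{4}\] [A-Z]+ \d+")

def unverified_citations(draft: str) -> list[str]:
    """Return every citation in the draft absent from the trusted index."""
    return [c for c in NEUTRAL_CITATION.findall(draft) if c not in VERIFIED]

draft = ("See R (Ayinde) v London Borough of Haringey [2025] EWHC 1383 "
         "and Smith v Jones [2024] UKHC 9999.")  # second citation is invented
print(unverified_citations(draft))  # -> ['[2024] UKHC 9999']

A passing check only shows that a citation exists, not that it supports the proposition for which it is cited; that remains the lawyer’s job.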
The seminar addresses the following questions:
• What is the legal landscape surrounding hallucinations? How are judges in various jurisdictions reacting to this phenomenon?
• Are there ways of counteracting hallucinations? Are some LLMs more reliable than others?
• Which legal tasks are particularly prone to inducing hallucinations?
• To what extent, if at all, can lawyers rely on the benchmarks measuring the capabilities of LLMs?
The seminar will also revisit some technical concepts, such as “context engineering” and “retrieval-augmented generation” (RAG), that are indispensable to leveraging the capabilities of LLMs and LLM-based tools; a minimal illustration of the RAG pattern follows below.
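By way of orientation, here is a minimal, self-contained Python sketch of the retrieval-augmented generation pattern: retrieve the corpus passages most similar to the query, then instruct the model to answer only from them. The toy corpus, the bag-of-words similarity and the prompt template are illustrative assumptions, not any vendor’s implementation.

import math
from collections import Counter

# Toy corpus; a real system would index a trusted body of law reports.
CORPUS = {
    "ayinde": "In R (Ayinde) v London Borough of Haringey the court warned "
              "that placing fake authorities before it may amount to contempt.",
    "rag": "Retrieval-augmented generation grounds an LLM's answer in "
           "passages retrieved from a trusted corpus.",
}

def cosine(query: str, doc: str) -> float:
    """Cosine similarity over simple word-count vectors."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q.keys() & d.keys())
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[tuple[str, str]]:
    """Return the k corpus passages most similar to the query."""
    ranked = sorted(CORPUS.items(), key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Constrain the model to the retrieved, verifiable passages."""
    passages = "\n".join(f"[{name}] {text}" for name, text in retrieve(query))
    return ("Answer using ONLY the passages below. If they do not contain "
            f"the answer, say so.\n{passages}\n\nQuestion: {query}")

print(build_prompt("What did the court say about fake authorities?"))

The grounding instruction in the prompt is the point: restricting the model to retrieved, verifiable text narrows, though does not eliminate, the scope for hallucinated authorities.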
*CPD credits are available upon application and subject to accreditation by the Law Society of Hong Kong (currently pending).
Language: English

