Hallucinations Are a Model T Crash

When cars first appeared on public roads, they were genuinely dangerous.
They crashed. They frightened horses. They injured pedestrians. They broke down constantly. People died. Newspapers stoked moral panics about the recklessness of motorists and the irresponsibility of allowing such machines anywhere near civil society.
The response was not to ban cars.
The response was to change behaviour around them. We introduced licences. We required training. We imposed rules of the road. We accepted that a powerful tool would cause harm if used badly, and that the solution was competence, not prohibition. That is exactly where we are with AI.

Hallucinations Are a Known Failure Mode

Large language models sometimes produce outputs that look plausible but are false. This is commonly described as “hallucination”, which makes it sound mysterious or pathological. It is neither. The system is doing exactly what it was designed to do. It generates statistically plausible text based on incomplete constraints. When the constraints are weak, the output can be confidently wrong. This behaviour is openly documented by every serious model provider. It is not a surprise and it is not a defect in the moral sense. Treating hallucinations as proof that AI is inherently unsafe is like treating early car accidents as proof that engines were unethical.
The correct question is not whether hallucinations occur. It is whether they are anticipated and managed.
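For readers who want to see the mechanics, here is a deliberately crude sketch in Python. It is a toy bigram model, nothing like how a production system is built, and the case names and citations in it are invented. The point is narrow: a program that only ever picks a statistically plausible next word can assemble a citation that looks right and does not exist.

    import random

    # Toy bigram model: learn which word follows which in a tiny
    # "corpus" of invented citations, then generate a new one by
    # repeatedly picking a statistically plausible next word.
    corpus = [
        "Smith v Jones [2014] HCA 12",
        "Smith v Brown [2019] FCA 88",
        "Jones v Brown [2016] HCA 30",
    ]

    # Map each word to every word observed immediately after it.
    bigrams = {}
    for line in corpus:
        words = line.split()
        for a, b in zip(words, words[1:]):
            bigrams.setdefault(a, []).append(b)

    word = "Smith"
    output = [word]
    while word in bigrams and len(output) < 12:
        word = random.choice(bigrams[word])  # plausible, never verified
        output.append(word)

    print(" ".join(output))
    # Sometimes a real citation, sometimes a recombination such as
    # "Smith v Brown [2016] HCA 12": fluent, well formed, non-existent.

Nothing in that loop checks the output against the world. Scaled up by many orders of magnitude, that is the behaviour we call hallucination.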

Lawyers Already Live With Error

Legal practice has never been error-free. We rely on juniors, clerks, research platforms, precedent banks, and templates. Every one of those sources can be wrong. None of them are trusted without review. A junior can misunderstand a case. A research platform can miss an authority. A precedent can be out of date. None of this is new. What has never changed is where responsibility sits. The lawyer signs the advice. The lawyer files the document. The lawyer carries the risk.
AI fits naturally into that ecosystem. It produces drafts, summaries, and suggestions. It does not absolve anyone of the obligation to check.

Where the Real Failures Occur

The cases that cause concern are not cases where AI behaved unpredictably. They are cases where lawyers relied on output they did not verify and then acted as if the tool, rather than their supervision, was responsible.
That is not a new ethical problem. It is the same professional failure that occurs when a lawyer files a document without reading it properly, regardless of how it was drafted.
Banning the tool does not fix that failure. It simply hides it.

Hallucinations Are Often Easier to Spot Than Human Errors

There is an uncomfortable truth here. AI hallucinations are often easier to detect than human mistakes.
They tend to be oddly phrased, over-specific, or confidently wrong in ways that feel slightly off. Citations are close but not quite right. Paragraph numbers are plausible but unfamiliar. To a lawyer who knows the area, these errors are visible. Human errors are often worse. They look orthodox. They survive multiple reviews. They are harder to detect because they sound right. The idea that AI uniquely threatens accuracy misunderstands how mistakes actually enter legal work.

The Actual Professional Risk

The real risk is not that AI produces incorrect text. The real risk is that lawyers use it without understanding its limits, without appropriate supervision, and then attempt to paper over that use with vague or absolute declarations. That is where the ethical problem lives. Not in the technology, but in the conduct. Professional responsibility has always drawn the same line. You may use tools. You may delegate tasks. You may not delegate judgment, and you may not make statements of fact that are untrue.
AI does not change that. It just makes the consequences arrive faster.

Learning to Drive

Cars did not become safe because engines alone improved. They became safer because we taught people how to use them, set expectations about responsibility, and punished misuse. AI will follow the same path. The lawyers who get into trouble will not be the ones who used it. They will be the ones who never learned how to drive it and insisted on pretending they were still riding a horse.

This Article Was Created By

Adrian Cartland

Principal Solicitor at Cartland Law
Adrian Cartland, the 2017 Young Lawyer of the Year, has worked as a tax lawyer in top-tier law firms as well as boutique tax practices. He has helped people overcome harsh tax laws, advised on and designed tax-efficient transactions and structures, and successfully resolved a number of difficult tax disputes against the ATO and State Revenue departments. Adrian is known for his innovative advice and ideas, and for his entertaining and insightful professional speeches.