Innovation in the professional services sector is often stifled not by lack of opportunity, but by the very nature of the professions themselves. In the tax domain, this challenge is particularly acute. Unlike software development or design, tax is not a skill one can acquire by tinkering in a garage. It demands years of rigorous training, deep contextual understanding, and a mastery of complex legal frameworks.
Nature of the Tax Profession
Risk aversion is not a flaw in tax professionals; it is a feature. It enables them to serve clients with precision, foresight, and caution. Their daily work often involves navigating the wreckage of failed ventures, broken relationships, and poor decisions. This exposure breeds cynicism and discourages creativity. Why invent a new legal structure when precedent offers a safer, billable path?
Clients, too, are cost-conscious. They prefer certainty over novelty. As a result, legal creativity is often seen as inefficient, even indulgent.
Yet technology offers a way forward, not as a replacement for professionals, but as an augmentation. Human lawyers will always be needed to interpret complex information, think laterally, and persuade other humans. These are the tasks that define the joy of legal practice. No amount of automation can replicate the satisfaction of crafting a compelling legal argument or solving a client’s deeply personal problem.
Legal automation, then, should be welcomed, not feared. It promises to eliminate the drudgery of due diligence and document drafting, leaving behind a profession that is more thoughtful, strategic, and humane.
Tax professionals whose work involves creative thinking, contextual analysis, empathy, or persuasion should feel particularly optimistic. Clients will always need their trusted advisor. And new roles are emerging: AI trainers, legal technologists, and supervisors of automated systems. The future of tax is broader, more dynamic, and more intellectually rewarding than ever before.
Those whose work is susceptible to automation must act quickly. The first movers who master AI will ride the wave of change, not be swept away by it.
Explainable AI
Explainable AI is currently the darling of legal tech conferences. It is touted as essential for algorithms used in law. But I respectfully disagree. The obsession with explainability misunderstands both the nature of machine learning and the demands of legal reasoning.
Let’s begin with a joke:
“When you’re fundraising, it’s AI.
When you’re hiring, it’s ML.
When you’re implementing, it’s logistic regression.”
Behind the humour lies a truth: machine learning is not statistics. Traditional statistical models rely on transparent relationships: correlations between variables, repeatable regressions, and clear assumptions. These models are explainable because they are built on logic and causality.
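To make the contrast concrete, here is what a transparent statistical model looks like in practice: an ordinary least-squares fit whose every coefficient can be read and explained. The data values are invented for illustration.

```python
import numpy as np

# Toy data (invented): x is the input, y is roughly 2x plus noise.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 4.0, 6.2, 7.9])

# A classic statistical model: one slope, one intercept, fit by least squares.
slope, intercept = np.polyfit(x, y, 1)

# The model is fully explainable: "y rises about 2 for each unit of x,
# starting from roughly 0". Every assumption is visible in the footnotes.
print(f"y ≈ {slope:.2f}·x + {intercept:.2f}")
```

Contrast this with a deep network: there is no single coefficient to point at, only millions of weights whose joint effect resists that kind of summary.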
Machine learning, by contrast, is often more art than science. It uses vast arrays of variables, many of which have no obvious causal relationship, and adjusts models based on what works, not why it works. A model might identify a cat not by its ears or whiskers, but by an abstract pattern of pixels. The process is opaque, iterative, and often irreducible to a set of footnotes.
Consider this interview scenario:
Interviewer: What is 10 + 10?
Candidate: 3.
Interviewer: No, that’s wrong.
Candidate: 7.
Interviewer: No, wrong again.
Candidate: 15.
Interviewer: No, that’s wrong.
Candidate: 19.
Interviewer: Wrong.
Candidate: 20.
Interviewer: Yes, you got the job.
This is machine learning in action. The model improves over time, not because it understands the answer, but because it learns from feedback. The final result may be correct, but the path to it is not explainable in any traditional sense.
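The candidate's trial-and-error can be sketched as a minimal feedback loop. This is a toy illustration of learning from an error signal, not a real training algorithm; the function name and learning rate are invented for the example.

```python
def learn_by_feedback(target, guess=3.0, lr=0.5, steps=20):
    """Improve a guess using only error feedback, never an explanation."""
    for _ in range(steps):
        error = target - guess      # the interviewer's "wrong" signal
        guess = guess + lr * error  # move toward whatever reduces the error
    return guess

answer = learn_by_feedback(20, guess=3.0)  # converges close to 20
```

The loop ends up near the right answer without ever representing *why* 10 + 10 is 20; all it ever saw was how wrong each guess was. That is the sense in which the path to the result is not explainable in the traditional way.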
So how do we understand such models? Not by dissecting their internal logic, but by probing their outputs. We build an external model of their behaviour, asking questions, testing edge cases, and observing responses. For example, if an algorithm is trained to identify food, we might show it hotdogs and non-hotdogs, then test its response to a sausage sandwich.
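That probing approach can be sketched in a few lines. The classifier below is a deliberately crude stand-in (a real model would be an opaque neural network); the point is that we characterise it purely from the outside, by its answers.

```python
# A stand-in for an opaque classifier: we treat it as a black box
# and never inspect its internals.
def is_hotdog(item: str) -> bool:
    return "hotdog" in item.lower()

# Probe the black box: clear-cut cases first, then an edge case.
probes = ["Hotdog", "burger", "sausage sandwich"]
behaviour = {p: is_hotdog(p) for p in probes}

# The edge case ("sausage sandwich") maps out the decision boundary
# without ever opening the box.
```

The dictionary of probe results is our external model of the system: an account of what it does, built without any claim about why it does it.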
What Does That Mean for Law?
This raises a critical question: can we make legal decisions based on models that are not explainable?
First, we must distinguish between machine learning and statistical regression. Even if a regression model can predict outcomes with some accuracy, its utility in law is limited. Correlation is not causation. The appearance of umbrellas does not cause rain. Nor does a judge’s morning mood determine guilt.
Second, most variables used in predictive legal models are irrelevant. Court name, jurisdiction, date, solicitor firm: these are metadata, not substance. What matters is the evidence, the law, and the reasoning. Any system that does not read and understand the words of a case is, frankly, statistical junk.
Explainability is not the issue. Relevance is. A model that cannot engage with the actual legal arguments is not just opaque; it is useless.
The tax profession stands at a crossroads. Innovation is possible, but only if professionals embrace technology as a partner, not a threat. And as AI becomes more prevalent, we must be clear-eyed about its limitations. Explainability is a noble goal, but it is no substitute for relevance, rigour, and human judgment.
The future of law is not less human; it is more.