This is part three of my three-part series commenting on different views of technologists and lawyers on legal regulation, based on a transcript of my ‘Legal Technology: Risk and Regulation’ video. In this article, I will focus on the inevitability of technological change. The previous parts of the series reviewed the existing state of legal regulation and the risks it poses for lawyers.

I would like to make the point that technological change is inevitable. For example, the combustion engine, the lightbulb, and the aircraft were each developed by multiple people independently, at around the same time.

If you remember, Google once competed with a dozen other search engines. I remember using AltaVista extensively. Facebook competed with Myspace. Uber fought with Lyft.

The technology itself will happen whether or not Google, Facebook, or Uber exist. It could be AltaVista, Myspace, or Lyft, or the Chinese equivalents, Baidu, Weibo, or Didi. There is no standing on the shore and stopping the tide: if a technological change is coming, it is going to happen. The question is: do we as lawyers have something to contribute?

I think we do. I would say that the rule of law is not natural. It does not exist by default in the environment; you can see plenty of countries around the world with no respect for the rule of law. The natural state of humans is chaos. We have worked hard for hundreds of years to fight for the rule of law, and we as lawyers are trained from the beginning that it is very important to respect our fiduciary duties, our duty to the courts, and our duty to clients, and to follow the laws as they are. Other people do not necessarily have that view. So I think that legal technology designed by lawyers is created in a totally different way from legal technology designed by people who don’t have that background.

I would suggest that changes can be made at a very subtle level that we as lawyers would consider anathema. For example, what if the algorithm used to govern sentencing decisions were changed so that being a political dissident reduced one’s chances? Anyone with legal training could not stomach such a thing. Non-lawyers – who knows?

I think that as lawyers we have something very important to contribute, and we shouldn’t be held back. We shouldn’t be held back by regulations that were designed for lawnmower deaths; applying those regulations to technology only holds us back. Instead, we should look for things that might cause systemic risk.

That is exactly what legislation should target. I would suggest something more outcomes-based: if you create a systemic problem, through whatever action, then that is an offence. Don’t try to regulate actions; regulate the outcome, and let legal tech companies do whatever they need to do to prevent that outcome.

For example, you could say: if you create a document that is used a million times by different people, and that document is totally negligent, then that could create a huge problem. Let me give you an example of something I’ve found.

In Australia, people will often set up a Limited Recourse Borrowing Arrangement in their Self-Managed Superannuation Fund, and part of doing that is that you need a Bare Trust: one person holding something on trust for someone else.

Banks will not want anything else in that trust, and so what I’ve seen in about 40% of the Bare Trusts I’ve dealt with for Self-Managed Super Funds is that people set up the Bare Trust and then purchase property. Nothing else. They say: ‘I hold this property on trust.’

But when they declare the trust, there is no subject matter – no trust property. From what I’ve seen, a huge percentage of these Bare Trusts are wrong, and that is just something I’ve observed in my very narrow field of tax law. I’m sure that if you looked much more broadly, you would see plenty of other areas like this.

We have this kind of problem with Bare Trusts because the banks have created a fantastic system – here are all the things we want, tick box A, B, C, D – and it has created this broad, repetitive problem that now affects, say, 30–40% of Limited Recourse Borrowing Arrangements in Australia.

How do we prevent that? The existing regulations would look for one particular law firm in Australia to hold accountable, and that will not help. If a firm has created tens of thousands of such documents, how do you find the harm? What if multiple firms have made this error? How do you prosecute them all? I think this brings us back to the lawnmowers-versus-terrorist-attacks question.

As humans, we have an innate understanding of long-tail effects and big risks. We are programmed to recognise that the thing moving over there in the bushes could be a tiger, and we react. Let’s be afraid of the things our instincts alert us to! We shouldn’t use mathiness or scientism to suppress our natural instincts.

In conclusion, I think that we should avoid mathiness, avoid scientism, avoid trying to apply rules designed for humans to robots, and realise that robots need their own specific regulations.

Watch the Legal Technology: Risk and Regulation video here:

Adrian Cartland is the Creator of Ailira, the Artificial Intelligence that automates legal information and research, and the Principal of Cartland Law, a firm that specialises in devising novel solutions to complex tax, commercial, and technological legal issues and transactions.