Kardashians, Lawnmowers and Terrorists: Rethinking Risk in Legal Technology Regulation
The Kardashians unwittingly raised an important risk of legal technology, which I believe needs new regulation to address. Kim Kardashian tweeted an observation attributed to the US Statistician General on how many people are killed by lawnmowers, suggesting that we should not be worried about terrorists: in 2017 there were 9 people killed by terrorists in the US and 69 people killed by lawnmowers, i.e., lawnmowers are supposedly a greater risk than terrorists.
The statistician and risk theorist Nassim Taleb responded, quite correctly, that there is a big difference between the two: you should not compare them, because lawnmowers are not trying to kill you. More technically, deaths from lawnmowers are normally distributed: a 'bell curve' similar to the height of the population. The smallest person in the world is about 3½ feet tall and the tallest about 9 feet. Everyone sits inside that distribution; you will not find anyone who is a hundred feet tall.
This can be contrasted with wealth, which is obviously not normally distributed: there are many people of median wealth, but there are also people with hundreds of billions of dollars. Terrorist attacks likewise have a fat-tailed distribution.
In 2017, 9 people died from terrorist attacks in the US, but the probability density is very thin in comparison with lawnmowers. This means that there is a small likelihood of any particular number of deaths from a terrorist attack, but a non-zero chance of something extreme happening. There might be 9 people killed, or zero, or a thousand, or two thousand, or ten thousand. There is even a non-zero chance of a hundred million deaths from terrorist attacks. But there is zero chance that such extreme numbers of people die in lawnmower accidents.
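The contrast between the two tails can be sketched numerically. The following is a hypothetical simulation only: it assumes lawnmower deaths are thin-tailed (modelled as normal) and terrorism deaths are fat-tailed (modelled as Pareto); all parameters are illustrative, not empirical estimates.

```python
import random

random.seed(0)
N = 100_000  # simulated "years"

# Illustrative assumptions, not real statistics:
# lawnmower deaths: thin-tailed, centred near the 2017 figure of 69
lawnmower = [max(0.0, random.gauss(mu=69, sigma=10)) for _ in range(N)]
# terrorism deaths: fat-tailed Pareto scaled so typical years are small
terror = [9 * random.paretovariate(1.3) for _ in range(N)]

# Probability of an "extreme" year (more than 1,000 deaths) under each model
p_lawn = sum(x > 1000 for x in lawnmower) / N
p_terror = sum(x > 1000 for x in terror) / N
print(f"P(>1000 deaths) lawnmowers: {p_lawn:.5f}")
print(f"P(>1000 deaths) terrorism:  {p_terror:.5f}")
```

Under the normal model an extreme year essentially never occurs, while the fat-tailed model produces one with small but distinctly non-zero probability, despite both models having modest typical years.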
Regulating Different Risk Distributions
Effective regulation of normally distributed risks is very different from regulation of fat-tailed risks. Think: financial planner negligence vs financial system collapse; food poisoning vs GMO environmental destruction; petty criminals vs terrorists.
Indeed, regulations that are effective for normal distributions will often merely mask long-tailed risks. For example, the probability density of road deaths caused by individual drivers is normally distributed. Autonomous vehicles may reduce the average number of road deaths. However, if the roads were full of driverless cars under centralised control, there would be a non-zero probability of a software malfunction causing every car to crash simultaneously.
Human professionals present normally distributed risks. For example: misappropriating trust funds, negligent practice, and dishonesty are all risks of human lawyers. Present regulations are (at least) reasonably suited to prevent and deal with these risks.
Technology will typically have safeguards against these risks already built in, so such regulation will at best be redundant. But legal technology presents long-tailed (low-probability, high-impact) risks.
Long-tail risks are exacerbated by the asymmetry of penalties for failure. If a failure is big enough, it is common for government to intervene and soak up the losses. This distorts the incentives of the technology's creators: "heads I win, tails you lose." For example, banks pay big bonuses to executives who grow the business while taking hidden systemic risks; when the banks fall over, the government is forced to intervene to prevent widespread catastrophe.
The best prevention against long-tailed risks is to create 'skin in the game'. There would be far fewer banking collapses if, instead of bailing out failed banks, the government allowed them to fail and then jailed the executives.
The Failed Blockchain Hypothetical
An example of what blockchain could be used for in the future of legal services was given by futurist Mark Pesce at the Australian Judicial Administration Conference 2018. Mr Pesce proposed a blockchain-based smart contract that holds funds in escrow until certain conditions are met. The blockchain would provide a transparent ledger (preventing fraud and enabling trust in the system without any regulatory supervision), automate the transaction (reducing transaction time, cost, and uncertainty for the parties), and not be susceptible to the interference of an individual (e.g., dishonest or merely frustrating behaviour).
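The escrow arrangement described above can be sketched as follows. This is a hypothetical illustration of the contract logic only, not code for any real smart-contract platform; the class, party names, and conditions are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class EscrowContract:
    """Hypothetical escrow: funds lock until every condition is satisfied."""
    buyer: str
    seller: str
    amount: float
    conditions: dict = field(default_factory=dict)  # condition name -> met?
    released: bool = False

    def satisfy(self, condition: str) -> None:
        if condition not in self.conditions:
            raise KeyError(f"unknown condition: {condition}")
        self.conditions[condition] = True
        self._try_release()

    def _try_release(self) -> None:
        # Release is automatic: no individual can withhold funds once every
        # condition is met, and none can force release before then.
        if all(self.conditions.values()):
            self.released = True

contract = EscrowContract(
    buyer="A", seller="B", amount=500_000,
    conditions={"title_transferred": False, "inspection_passed": False},
)
contract.satisfy("title_transferred")
contract.satisfy("inspection_passed")
print(contract.released)  # True: funds release automatically
```

The point of the sketch is the removal of individual discretion: the release step is a pure function of the recorded conditions, which is precisely what makes the system both attractive and, as discussed below, difficult to reconcile with trust account regulation.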
Notwithstanding these potential benefits, it would be practically impossible for a South Australian lawyer to comply with their trust account obligations vis-à-vis such a system. For example, each 'smart contract' transaction would breach reg 28(2) of the Legal Practitioners Regulations 2014 (SA), because funds in escrow are 'trust money' and there is no BSB number to record, notwithstanding that recording, say, the 'public key' of a blockchain address is actually more accurate. And that assumes the funds are Australian dollars at all; using cryptocurrency would almost certainly render the whole trust account non-compliant.
A discourse on the potential regulatory breaches of this one example deserves a separate paper. The regulatory string has been pulled on these hypothetical smart contracts in South Australia, and their development and nature ceded to non-lawyers and non-residents. Nor can the string be pushed: a law firm could not be forced to create this hypothetical system, nor the consumer market forced to use it.
But we would still face the systemic risks of such a system, including the loss of all funds and contracts on it through the non-zero risk of total failure. Notwithstanding the immutable nature of the blockchain, there have been many high-profile failures, including Mt. Gox, The DAO, and widespread market manipulation.
The Conundrum of Modern Search Engines
Searching the internet is an emergent technology, and searching for legal information is an emergent subset of it. Internet searches are the starting point for most consumer legal queries. They are usually the finishing point too. Even for lawyers, an internet search is the starting point for most legal research, before moving on to traditional sources such as legislation, published commentary, and cases.
Modern internet searches go well beyond the original model of keyword search refined by link-based ranking. Each search may feed the algorithm with: the past searches of the user and of others; the user's past browsing history; the user's age, gender, location, wealth, nationality, and interests; and email, documents, videos, and pictures browsed or created by the user. This is far from a complete list. Chatbots built by search engine companies can also interact with humans to the extent that they are difficult to identify as robots. The tailoring of a 'simple' internet search thus uses more information than many lawyers would obtain before giving legal advice. The results will often highlight a particular result as the favoured one. That information may or may not be legally correct, and may or may not be relied upon to inform legal decision making.
Whether a modern internet search constitutes the provision of a legal service should be an intellectual dividing point: if you think AI could constitute the provision of legal services, then surely the AI in a modern internet search is already doing so. Alternatively, if it is not, then we need not bother debating whether the automatic provision of legal services is caught by present definitions, as most specific legal technologies are less advanced.
I am of the view that modern internet searches do not constitute legal services and proceed on that basis. But I acknowledge that there is an alternative view that may be reasonably taken.
Benefits and Risks of Technologies
Advancements in searching technology make law more accessible; this is certainly a net benefit to the public, the profession, and the function of law. However, there are of course long-tail risks that would not be present with human-provided legal services. While long-tail risks are by definition difficult to predict, one example is the dissemination of widespread incorrect information at a scale of error that would be impossible for an individual lawyer or law firm to achieve.
This is not hypothetical future technology: modern internet search engines are almost universally used and utilise advanced artificial intelligence to satisfy legal needs. How they are regulated (or not regulated) sets an important precedent that will inform future legal technology regulation.
Indeed, most legal technology (and most AI) is a derivative or subset of an existing creation of the 'Big Nine' tech companies: the 'G-MAFIA' of Google, Microsoft, Amazon, Facebook, IBM, and Apple, and the 'BAT' of Baidu, Alibaba, and Tencent. That is, most legal technology will use one or more components created by the Big Nine in its tech 'stack'. Even at their most distant, legal technologies will be competitors to some technology created by the Big Nine. It is difficult to imagine a legal technology company innovating something not previously developed in any way by the Big Nine.
If there is to be any regulation of legal technology (and I think there should be), then it must somehow be capable of regulating the mega-technologies that exist today. Besides the problems of emergence and string-pulling, it is practically difficult to force technologies such as internet search engines into the existing regulatory mould: their products and revenue models are radically different; their location can change with the movement of a server; and their size and popularity would likely provoke a popular and legislative reaction against regulator enforcement, to mention but a few problems.
Suggested New Regulations
- Regulatory certainty mechanisms: A mechanism to provide certainty that new legal technology is permissible and separate from the existing set of human-centric regulations. Examples include a ‘regulatory sandbox’ and binding regulatory rulings.
- Targeted harm regulations: Separate regulations to counter specific potential harms of legal technology. For example, requiring providers of legal information to monitor for correctness or ensuring automation tools are used appropriately.
- Systemic risk offence: A new offence of 'causing systemic risk', an in personam penalty against technology creators if widespread harm occurs. This ensures 'skin in the game' and aligns incentives with the integrity of the legal system.
Innovation, Artificial Intelligence and the Future of the Professions
There is pressure all around the world for professionals to provide more services for less cost. There are more regulations, risks, and disputes that require legal assistance, but clients are not willing to pay for the work to be done in the traditional manner. Clients look at their flat-screen television, see how much bigger and better it is than the screen they had but a year ago, and wonder why their lawyer cannot likewise improve. The tropical storm of innovation that has long been gathering and hanging heavy above the legal profession has begun to burst, and with it will come a watering of gardens for well-placed practitioners, an unpleasant dampening for those out in the rain, and relief at the sudden change in humidity for the general public.