ChatGPT is the talk of Twitter and the legal tech sphere. Everyone is posting the interesting conversations that they have had, followed shortly by predictions of how ChatGPT will replace this or that task. There are apparently hundreds of people in legal tech using GPT-3 or similar: creating a landing page, connecting GPT-3 to some localised training, and launching as a start-up AI platform for law. Yes, ChatGPT can generate some excellent text. But can it replace lawyers, in whole or in part?


Rambling Like a Bluffing Student

In my view, the best description of the accuracy of generative text programs is that they produce text like a rambling university student who attended all of their lectures but didn’t do all of the reading. That is, someone who knows what the question is about and the general nature of the answer, but didn’t pay close enough attention to know the answer with certainty. So they answer, and pretend that they know what they are talking about.

University students will often be tempted to bluff their way through an assignment and write a large amount of text that seems somewhat correct but upon closer inspection is not. ChatGPT can do an excellent impersonation of that level of text. But what is the point? How is humanity served by another university student trying to bluff their way through a topic?

Now of course a student who is bluffing sometimes hits on useful points. The arguments might be internally consistent, yet misconceived overall. Or maybe just misconceived in one or more critical facts. Given luck, the bluffing student is likely to hit sufficient marks to pass. For this reason, bluffing continues and university students write long and rambling pieces of text to fill up a word count.

Passing The Bar Exam

An enterprising lawyer asked ChatGPT the questions in a bar exam and recorded the answers. ChatGPT got sufficient marks to pass.

As someone who has built an AI that has passed university exams (Ailira passed the Adelaide University tax law exam in 2016), I think this is an interesting demonstration of the power of AI in law. However, there is an important difference between a generative AI that produces text and a semantic search engine like Ailira, and that difference is accuracy. Ailira scored 75% when she passed the Adelaide University tax law exam. More specifically, Ailira allowed my girlfriend (and now wife) Sarah, a speech pathologist with not much knowledge of tax law, to pass the university exam. Sarah asked Ailira the questions that were asked in the exam, and Ailira found text from tax law that appeared to answer those questions. This means that the source of those answers can be recorded and considered for accuracy.

When ChatGPT generates text, it does so disconnected from references. It is not a search engine. If you ask ChatGPT to cite a case, it might make up some real-sounding cases. However, fake cases are not good authority! If ChatGPT writes answers that happen to be correct, it is difficult to actually check that they are correct. If I ask ChatGPT to explain the rule against perpetuities and it does so correctly, what utility is that to me? Either I can confirm that the answer is correct because I already know the answer, in which case I did not need to ask it. Or I don’t know the answer, in which case I am at risk of being wrong. What use is wrong law?

There is little harm in taking risks and bluffing your way through exams and assignments (although quaere whether the study fees are worth it if all you do is bluff). But in legal practice, making mistakes means negligence.

Negligent Templates

Online legal documents are a well-established product. They do not constitute the provision of legal services because they are merely the provision of stationery. Legal stationery has hundreds of years of history and case law. It is perfectly suitable to provide a template online for users to fill in. If the user fills in the template incorrectly, it is not the fault of the template provider. However, if the template itself is defective, then the template provider can be liable for negligence.

It is not too difficult to ensure that a template is rigorous and correct, so instances of negligent templates are infrequent – especially compared to the volume of product sold. However, production of text by ChatGPT for a fee is also the provision of a template. Even though the output is not as fixed as a traditional boilerplate or precedent document, it is still a template – one with a large potential variance in what is generated from the underlying training.

The problem with such generation is that if there is a technical error in the drafting, or an error of law or fact, then the provider of the service will be negligent for having provided a negligent template – notwithstanding that they would likely never have seen the particular piece of text that was generated by their program!

For example, consider the well-established direct-to-consumer product of trust deeds. There has been templated document assembly of these for decades. Let’s suppose you train ChatGPT on a number of trust deeds and teach it to generate trusts in accordance with the details entered by users. This might give the opportunity for much greater customisation of trust deeds – a good thing. But what if a user generates a trust deed and it is deficient in some technical clause? Say the user is in a jurisdiction that still has the rule against perpetuities (or, more particularly, the rule against remoteness of vesting) and the trust deed generated does not vest within 21 years of the end of a life in being. The user generates this defective trust deed, which later fails and causes damage and loss. The service provider of the GPT-3-enabled trust deed generator would be liable for that damage and loss.

Of course, there are a couple of easy remedies for this. Firstly, a service provider could say that the output is not to be relied upon and is for amusement purposes only. In which case, why would a user pay money for legal amusement (although this could be a niche tool for lawyers who are interested in archiving legal jokes)? Secondly, a lawyer could review the document and confirm whether or not it is correct.

Practical Uses of ChatGPT in Law

ChatGPT can be a great tool to draft an example of a clause, letter, speech or something else that you have a clear answer for and that you can easily review. Many published case summaries are at the level that can be generated by an AI like ChatGPT. We can use AI to lessen the mental load of routine drafting tasks. Text generators are great tools for lawyers, rather than replacements for lawyers.

A human reviewing the work of a computer is the optimum use of AI. Robots and humans are symbiotic and should work together. Technology should provide leverage to human ability, but that requires human oversight.

I think the most likely use of generative AI in law will be in law-adjacent areas: things that are used in legal practice but are not a core aspect of it. I think that most law firm blog posts could be written by an AI, because they lack insight and critical thinking. There is an endless stream of listicles and high-level summaries of principles or case updates. (No, this article was not written by ChatGPT – that has been done to death already.) But if you want to quickly write “10 things to know about challenging your traffic infringement”, ChatGPT can write that in seconds. And then re-write it if you don’t like the first draft. Within minutes you can create an article that is publishable.

Writing an article is not a core part of legal practice, but it is adjacent and useful to it, and this and similar tasks can easily be done with a generative text AI.

The Very Important Task ChatGPT Can Do

In my view, there is a very important task that a generative text AI can do, and do exceedingly well: write legal in-jokes. Lawyers love little more than obscure humour with arcane references. Even better is when you can intersect multiple nerd interests. Writing a poem, a nerdy article or a conference joke can take an immense amount of time, and it is a long way from producing billable material. But it is precisely the sort of thing that a university student who broadly – but not specifically – understands the material can do, if they have some confidence and a LOT of time to waste. And ChatGPT can do it in seconds.

And so, to demonstrate this skill, I have asked ChatGPT to produce a sonnet, in Shakespearean form, explaining the rule against perpetuities.

“Eternity’s Bound: A Sonnet on the Trust Law’s Rule Against Perpetuities”

The trust law’s rule against perpetuities,
A safeguard for the future, must be met,
That interest shall vest, if at all, not late
Than twenty-one years after life is set.

The fertile octogenarian, a sight
To marvel at, yet not a life to count,
For in this rule, the future must be bright
And not held hostage by one’s amount.

The unborn widow, too, is not a life
To base this rule upon, for she is yet
To take her first breath and begin her strife
In this world, and cannot be a debt.

The slothful executor, too, must hurry
For time is of the essence, as is fair,
And magic gravel pit, though rich, must bury
Its wealth unless the rule is met with care.

Thus, let this rule against perpetuities
Ensure fair distribution for centuries.