ChatGPT for Lawyers: The Lawyer’s Dilemma in the Age of AI

Another day, another prediction that Artificial Intelligence (“AI”) is going to replace lawyers. As a lawyer who has been building AI for nearly 10 years, it seems proper to comment. In short, my view is that the prediction that “ChatGPT will replace lawyers” is a take that misunderstands the role of law, the technology, or both.

 

This Time Is Different

I have been building various types of AI for a number of years, including semantic searches, chatbots, and Large Language Models (“LLMs”). Although there have been predictions for many years that AI (or some other technology) is going to change law, the greatest piece of technological change to affect law in the past 40 years is the ability of word processors to copy, paste, and delete.

LLMs have been around for some time; you may be familiar with them through the auto-text on your phone. However, the easy-to-use textual interface of ChatGPT has enabled it to become the fastest-growing web application of all time, reaching 100 million active users within two months of its release, a feat that took Instagram two and a half years and Facebook four and a half years.

ChatGPT is something that almost everyone can find useful and almost anyone can access easily.

 

The Rise of Predictions

Shortly after ChatGPT’s explosive adoption came the almost as explosive predictions of which industries (including law) it would replace. And when you see the fantastic strings of text that ChatGPT can create, why would you not be amazed and make fantastical predictions?

For what it’s worth, in my view, ChatGPT (and other LLMs) will lead to a 5% to 20% increase in productivity. Although that is a much less exciting headline than “replacing all lawyers,” such an increase in productivity is huge, and probably greater than the ability to copy, paste, and delete text.

In economic terms, this is a shift of the supply curve for legal services: more legal services can be produced with the same fixed amount of lawyers’ time.

 

How LLMs Work

LLMs are algorithmic models trained on large datasets of language. By way of oversimplification, LLMs convert words (or parts of words) into discrete identifying numbers (called “tokens”) and then learn mathematical representations (“vectors”) that capture the probabilistic relationships between those tokens.

Based on this model, the LLM predicts the most likely next word. For example, if a model were trained on a phone book with each letter as a token, inputting “A” might lead to “D” for “Adam,” “B” for “Abraham,” or even a made-up name like “ABCDE.” The better the model is trained, the better its results will be.

Interestingly, an LLM will not always output the single most likely next word; it adds an element of randomness (controlled by a parameter often called “temperature”), which is important for generating varied and interesting outputs.

The generation of text is not a search. The text generated is unique and not determinable ahead of time. Asking the same question multiple times will likely lead to different answers. This also means LLMs might present incorrect answers with utmost confidence, a phenomenon known as a “hallucination.”
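
The phone-book example above can be sketched in a few lines of Python: a toy character-level model that counts which letter tends to follow which, then samples the next letter with a temperature parameter controlling randomness. The corpus and all names here are invented for illustration; real LLMs use neural networks over sub-word tokens, not raw letter counts.

```python
import math
import random
from collections import defaultdict

def train(corpus):
    """Count next-character frequencies (a crude stand-in for training)."""
    counts = defaultdict(lambda: defaultdict(int))
    for name in corpus:
        for a, b in zip(name, name[1:]):
            counts[a][b] += 1
    return counts

def sample_next(counts, char, temperature=1.0, rng=random):
    """Sample the next character. Low temperature picks the most likely
    letter almost every time; high temperature adds randomness (and the
    occasional 'ABCDE'-style surprise)."""
    options = counts[char]
    chars = list(options)
    # Softmax over log-counts, scaled by temperature.
    logits = [math.log(options[c]) / temperature for c in chars]
    top = max(logits)
    weights = [math.exp(l - top) for l in logits]
    return rng.choices(chars, weights=weights)[0]

# A toy "phone book" of names (illustrative data only).
phone_book = ["adam", "abraham", "alice", "andrew", "amelia"]
model = train(phone_book)
```

Sampling with `sample_next(model, "a")` returns one of the letters seen after “a” in the corpus, weighted by frequency; dropping the temperature close to zero makes it all but deterministic, which is why the same question to a real LLM at normal temperature will likely yield different answers each time.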

 

What Can LLMs Like ChatGPT Do?

In my view, ChatGPT is at the level of a university student who has attended all the lectures but hasn’t done the reading and is trying to bluff their way through the exam. Often, it will present an interesting and coherent-sounding answer, and sometimes it will be unknowingly wrong.

But even this skill level can be put to good use. For example:

  • Whimsical use cases: Ask ChatGPT to write a poem in the style of Shakespeare or Lord Byron, and it will amuse you.
  • Educational tasks: From the Higgs boson to HTML coding, it can explain concepts simply and answer follow-up questions.
  • Summarisation: Present a large body of text (e.g., a case) and get summaries in 500, 200, 50, or 10 words.
  • Legal drafting: Ask it to draft a legal document, but beware of issues with applicable law, style, coherence, and relevance.

LLM Use for Lawyers

The instructions given to an LLM are called “prompts.” Using sophisticated prompts to obtain useful text is “prompt engineering.” Some suggest prompt engineering is the new skill to learn, but in my view, it’s the new search.

Just as a specialist lawyer can obtain better results from Google than a layperson, so too can a specialist with prompt engineering skills outperform a layperson.

 

Tips for Better Prompt Engineering:

  • Specify constraints: Ask for only real case citations to avoid invented ones.
  • Assign a role: e.g., “You are a specialist taxation lawyer with 20 years’ experience. Give technical advice on Australian tax law.”
  • Ask for a chain of thought: This encourages accuracy and helps detect hallucinations.
  • Provide examples: Demonstrative input-output pairs help guide the model.
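
As an illustration, the tips above can be combined into a single structured prompt. The sketch below uses the role-based message format ({"role": ..., "content": ...}) common to chat-style LLM APIs; the tax question and example pairs are invented, and the exact API call to send these messages varies by provider.

```python
def build_prompt(question, examples):
    """Assemble a chat-style prompt applying the tips above:
    role assignment, constraints, chain of thought, and examples."""
    messages = [
        # Assign a role and specify constraints up front.
        {"role": "system", "content": (
            "You are a specialist taxation lawyer with 20 years' experience. "
            "Give technical advice on Australian tax law. "
            "Cite only real cases; if unsure of a citation, say so. "
            "Explain your reasoning step by step."  # ask for a chain of thought
        )},
    ]
    # Provide demonstrative input-output pairs (few-shot examples).
    for example_question, example_answer in examples:
        messages.append({"role": "user", "content": example_question})
        messages.append({"role": "assistant", "content": example_answer})
    # The actual question goes last.
    messages.append({"role": "user", "content": question})
    return messages

prompt = build_prompt(
    "Is a home office deductible?",
    [("Are work uniforms deductible?",
      "Generally only where the clothing is occupation-specific.")],
)
```

The resulting list of messages would then be passed to the provider’s chat API; the point is that a layperson typing the bare question and a specialist supplying the role, constraints, and examples will get very different answers from the same model.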

You can go further with fine-tuning: teaching the LLM new tasks by providing examples. This is how you can train it to draft contracts or wills, or to summarise client facts into timelines. Fine-tuning can be done with relatively small samples.
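
Concretely, fine-tuning examples are commonly supplied as JSON Lines: one input-output pair per line. The sketch below builds such a file in memory. The field names follow the chat-message convention used by several providers, the example pairs are invented, and the exact schema should be checked against your provider’s documentation.

```python
import json

# Invented training pairs for a legal-drafting fine-tune (illustrative only).
examples = [
    ("Summarise these client facts into a timeline: On 1 March the "
     "contract was signed; on 5 April the deposit was paid.",
     "Timeline:\n1. 1 March: contract signed.\n2. 5 April: deposit paid."),
    ("Draft a confidentiality clause.",
     "Each party must keep the other party's Confidential Information secret."),
]

def to_jsonl(pairs):
    """Serialise input-output pairs as JSON Lines: one example per line."""
    lines = []
    for user_text, assistant_text in pairs:
        record = {"messages": [
            {"role": "user", "content": user_text},
            {"role": "assistant", "content": assistant_text},
        ]}
        lines.append(json.dumps(record))
    return "\n".join(lines)

training_data = to_jsonl(examples)
```

A few dozen to a few hundred such pairs is often enough to teach a consistent house style, which is what makes fine-tuning feasible for a law firm’s document precedents.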

Another method is embedding data, which enables the LLM to search, group, or classify it. This effectively turns the LLM into a search engine.
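
A minimal sketch of what “embedding as search” means: each document becomes a vector of numbers, and a query is answered by ranking documents by cosine similarity to the query’s vector. The three-dimensional vectors and document names below are hand-made toys; real embeddings come from the model itself and have hundreds or thousands of dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy document embeddings (invented for illustration).
documents = {
    "tax ruling on deductions": [0.9, 0.1, 0.0],
    "family law precedent":     [0.1, 0.8, 0.2],
    "contract dispute case":    [0.0, 0.2, 0.9],
}

def search(query_vector, docs):
    """Return document names ranked by similarity to the query."""
    return sorted(docs, key=lambda name: cosine(query_vector, docs[name]),
                  reverse=True)

# A query vector pointing in roughly the "tax" direction.
ranked = search([0.85, 0.15, 0.05], documents)
```

Unlike free-form generation, this nearest-neighbour lookup is deterministic and returns only documents that actually exist, which is why the distinction between legal search and generated legal answers matters later in this article.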

Note: Embedding or fine-tuning does not alter the underlying model (e.g., GPT-4). Retraining the model itself would cost an enormous amount; training a state-of-the-art model from scratch is estimated to cost tens to hundreds of millions of dollars.

 

The Prisoner’s Dilemma

Humans often push machines to their limits to understand them, sometimes breaking them in the process.

Microsoft’s chatbot “Tay” was shut down after 24 hours when users got it to make offensive statements. To prevent similar issues, major LLMs like ChatGPT, Bing, and Google Bard have controls, conceptually like a prisoner in a cell communicating through a gaoler who reviews questions.

If the gaoler doesn’t like the question, they reply instead of the LLM. Offensive requests, bomb-making instructions, legal advice, and requests for internal company data are all blocked.

Users have tried to “break” the gaoler using prompt injection attacks, tricking the system into bypassing constraints. Bing’s LLM was tricked into revealing its internal name: “Sydney.”

 

The Lawyer’s Dilemma

The best legal use case for LLMs is as a cheap and fast assistant, with the lawyer reviewing the work carefully. Just as a law clerk might miss technical matters, so too can an LLM. But proper review is a well-established process.

Ethical problems arise when the LLM is consumer-facing. The highest-profile example is DoNotPay’s plan to use ChatGPT to script replies for a self-represented litigant via AirPods. The plan raised numerous ethical and practical issues, and the company now faces multiple legal actions.

More subtly, errors or hallucinations by an LLM create liability for the provider. Unlike a Post Office Will Kit, where mistakes in filling it out are the user’s own, a hallucinated output is akin to a defective template, creating provider liability, potentially to a large class of consumers.

 

Ethical Issues

Care must be taken to differentiate between legal search (using embedded data) and customised legal answers; the latter is the provision of legal services.

Any human review of LLM output constitutes legal services. This is analogous to Quill Wills, where assisting testators with clause selection was deemed legal practice.

Technology companies providing consumer-facing LLMs must ensure there is no human involvement in generating the output; otherwise, they are providing legal services, which requires admitted practitioners.

Where legal services are provided electronically, trust account regulations apply to funds received in advance. Subscription models or usage rights may constitute trust funds, a compliance burden for tech companies.

 

Safer Use Cases

Consumer-facing legal LLMs are better restricted to low-risk tasks. For example, I built software for domestic violence victims that converted unstructured facts into structured timelines for affidavits, a task well-suited to LLMs, with user review.

Pro bono use may also allow non-lawyer tech companies to provide legal services, if no fee is charged.

 

This Article Was Created By

Adrian Cartland

Principal Solicitor at Cartland Law
Adrian Cartland, the 2017 Young Lawyer of the Year, has worked as a tax lawyer in top tier law firms as well as boutique tax practices. He has helped people overcome harsh tax laws, advised on and designed tax efficient transactions and structures, and has successfully resolved a number of difficult tax disputes against the ATO and against State Revenue departments. Adrian is known for his innovative advice and ideas and also for his entertaining and insightful professional speeches.