“AI won’t replace you – a lawyer who uses it will” is quickly becoming a cliché – but what about junior lawyers and the next generation(s)? A newly published study on legal contract review brings this question into focus again (Better Call GPT, Comparing Large Language Models Against Lawyers), as it concludes that “LLMs perform on par with LPOs [Legal Process Outsourcers] and Junior Lawyers” in terms of accuracy while completing the work at a fraction of both the time and the cost. A great excuse for me to finally publish (and update) this op-ed, which was initially written in April and May 2023 but which I refrained from publishing at the time.
We need to talk about how next generations of lawyers will be trained (and how many will be trained) to become capable of adding value – and why not just law firms but also clients should care.
[This is a personal opinion and should not be viewed as reflecting the position of Keller and Heckman LLP, but we are always thinking about how technology works – the firm’s tagline is “Serving Business through Law and Science”, with good reason.]
How an AI tool could replace Junior Me (in part)
When I started as a lawyer (at another law firm), my very first task was to review documents in the context of a due diligence process (where one company was looking to buy another). Over the course of three years at that law firm, I did a whole lot of document review / due diligence work and a fair bit of drafting and revision of contracts and policies, alongside some litigation and a handful of contractual negotiations. Whenever there was document review involved, my task was to look for something in the document (for instance, whether there was a “change of control” clause, what the start and end dates and renewal terms were, etc.). Whenever I needed to write something, I was preparing a first draft (of a due diligence report, a contract, a policy, court submissions, a negotiation strategy note, etc.) that the partner or senior lawyer adapted.
How was I expected to understand what to look for or prepare that first draft? By looking at a couple of examples – “precedents” – that the firm in question had worked on previously, or public examples found online.
Afterwards, the partner/senior lawyer usually gave me access to an adapted version and I learned from my mistakes and the changes they had made (which was easier when changes were highlighted in a document – I know that I often had to make my own “track changes” comparison version as a junior to learn on my own). Sometimes they explained some of their changes, other times I had to proactively ask them to explain the changes.
Fast-forward to today, and artificial intelligence / augmented intelligence tools are becoming increasingly useful in that regard. Think in particular about LLMs and generative AI (like GPT-4/Mixtral/Llama/…) and tools built around them, like AI-augmented document review tools, contract drafting and negotiation tools with suggested clauses and fallbacks, and report generation tools. Some are now even integrated in law firms’ document management systems, allowing firm-specific training based on their own database of client advice, contracts and dispute documentation. Throw them in the mix, and what happens then?
The “Better Call GPT” study suggests that part of the work that I was asked to do as a junior lawyer could be done by such a tool – perhaps even without additional training. The key is ensuring that the “prompts” or instructions are clear and precise enough – that is not AI-specific, though, as it is equally relevant in relation to humans. I have previously looked at and advised on AI-powered document review solutions, but these newer generation tools are pre-trained to a far greater extent because of the vast amount of source data. Whether that pre-training is lawful is of course a hot topic, but the practical consequence is that today you typically need to do less in terms of training of the AI tools that are available on the market.
So if that work can be done by an AI tool instead of a junior lawyer, what are the consequences? Alex Su had a great take on some of the potential staffing implications for law firms in his post “This time is different” (now behind a subscription wall), but the gist of it is that in his view “Biglaw” might get rid of some juniors (not all) and raise hours to keep profitability up.
Are lawyers even desirable in a world where a GPT Lawyer might exist?
First, it is worth stating something that may seem obvious to lawyers but might not be to all clients: yes, it is desirable that there are future generations of lawyers. A client might think that all it wants is an answer to its question at a given time, but if the way of thinking of the lawyer or law firm works for the client, the client may wish to ensure that all future legal issues are handled in a similar manner. In this context, it is important for the client that not just one lawyer but several are able to handle legal issues in a similar manner (to account for absences, unavailability and even retirement at one point).
Could an AI tool do this too?
With recurring issues that are purely factual and do not require interpretation, perhaps. For instance, one of the first “AI lawyers” was a tool to allow the challenging of parking fines. The rules there are black or white: either you pay and are allowed to park, or you do not and are not (and you will therefore be fined). There are exceptions (there was a mistake in reading the number plate; the car was stolen the night before; etc.), but the legal assessment is generally straightforward.
With problems that arise out of the interpretation of the law, though, it is not so certain that AI tools will be able to do the same work, because this often requires a combination of creativity, understanding and critical thinking that is not (yet) emulated by AI tools (whether generative AI or predictive/discriminative AI). The “Better Call GPT” study confirms this, as it showed that “a significant portion of inaccuracies [by LLMs] was linked to the misinterpretation of specific wordings”, while “the Senior Lawyers, serving as the benchmark for this study, showed exceptional care in interpreting subtle differences in language, indicating a sophisticated understanding of semantic variations”. In that context, the areas in which the (human) lawyer will still easily beat an AI lawyer – for now – appear to be strategic advice and litigation.
[To illustrate, several companies have a pool of law firms with which they work. I have regularly been asked by companies with such a pool to provide legal advice as a “second opinion” (even a “third opinion” in one case), because those companies were dissatisfied with the advice they had received (often because it was too rigid or did not display sufficient creativity). In that context, the creative yet pragmatic advice that I provide on how to resolve an issue or move forward with a project is the kind of work that might be more difficult for AI tools to do for now.]
In other words, yes, lawyers are still desirable for various kinds of work. The multifunctional AI lawyer is not yet upon us. So we do need to think about the training of junior lawyers too.
What will the (remaining) junior lawyers then do? And how will they learn?
In today’s technological landscape, then, the key requirement for a lawyer to be useful is that he or she can provide added value to the client, beyond what the client might get from an AI tool or advice or templates that can be easily found on the Internet.
Most of the time, that added value doesn’t come from only doing document review or template-derived document drafting. My take is that it comes from the intersection of what a person has been taught and what he or she has experienced, coupled with their own individual logic and creative thought-process. Junior lawyers are not supposed to only do document review or template-derived document drafting – they need to be able to observe good advice being given, as well as mistakes being made (by themselves, by more senior lawyers or even by clients). Typically, this means that junior lawyers are also involved in client meetings, negotiations or pleadings, even in a passive capacity – often mainly in order to take notes.
Cut document review, note-taking and template-derived document drafting out of the picture altogether, though, and depending on the practice a large share of that junior lawyer’s current workload might need to be replaced with new tasks. This may for instance be more acute in teams that mainly support mergers and acquisitions than in teams where legal research is a more important part of the workload (see below in that respect, though).
What those tasks will be is a question that junior lawyers themselves will soon be asking. The smartest junior lawyers who face the prospect of a large-scale AI deployment may choose to leave quickly if they do not receive assurances that they will be able to grow within a firm in spite of the AI tools that are set to replace some or even much of their current day-to-day tasks.
Assuming that law firms want to continue training new generations (I know of some that do not, but that’s their choice), those young lawyers could of course be tasked with reviewing the output from the AI tool – though whether this form of document review makes sense economically will depend a lot on the quality of the output, the skill of the junior lawyer and whether the junior lawyer can obtain additional insights from the partner / senior lawyer.
“Teach them prompt engineering then,” some might say, but ultimately writing “prompts” for generative AI tools is basically writing context and instructions, a lot of which is getting automated too (so that in the future you won’t need to know that you get better results by e.g. telling the model to “take a breath” and to proceed “step by step”). That “skill” may become less useful as tools and agents become more specialised and as tips for better results are automatically integrated in the instructions.
You might think that legal research is a severe weakness too, based on the examples of lawyers who used tools such as ChatGPT to draft legal briefs and submissions and were later reprimanded by judges for relying on fabricated, non-existent case law. Yes, this is a key weakness of LLMs, but it is not a weakness of all AI-powered tools. For instance, in Intellectual Property cases I have used certain tools (such as Darts-ip) that use machine learning for data extraction, and the output is then very easy to use in the context of legal research. Put differently, whether legal research is a task that can be at least facilitated by AI tools depends on whether you focus on GenAI or look also at the more “traditional” types of discriminative/predictive AI. For this reason, I expect that the legal research tasks of junior lawyers (much like those of paralegals) will remain relevant, though they may spend less and less time finding what they seek as the AI component of legal research tools grows in capabilities.
One alternative for more entrepreneurial junior lawyers might be to get them more involved in (active) business development. Not simply collaborating with partners or senior lawyers on the drafting of articles or blog posts for publication but finding ways to bring in new business to the firm.
More generally, though, I wonder if an increase in the use of AI tools might not lead junior lawyers to do more “shadowing” of the partner / senior lawyer (likely an interesting way to learn for most young lawyers) – basically, a return to a more traditional approach to apprenticeship.
Some questions related to apprenticeship
A return to apprenticeship does raise various questions that are not new but now deserve even greater attention in this specific context. For instance:
- From a financial perspective, will clients object to paying for such shadowing? If so, would that not simply lead to fixed fees in which the junior lawyer’s time is factored, or alternatively higher hourly rates for the senior lawyers or partners?*
- Will that then impact billable hour targets and the calculation of the junior lawyer’s bonus?
- Does it work best if there are fewer junior lawyers per partner or senior lawyer, and which measures can be taken to make it work just as well in a more traditional “pyramid” structure with a greater number of junior lawyers per partner?
- How do you organise such shadowing in a hybrid/remote work environment?
* [This is already a longstanding question for client calls and meetings, as some clients balk at the idea of having more than one lawyer on the call. Where a billable approach is used, I have often involved junior lawyers in client calls in the following way: if the call is meant specifically for the client to obtain advice during the call, and the junior lawyer is purely passive, then there is no additional charge for the client as the junior lawyer is there only for training purposes; if however there is any follow-up, even if not anticipated initially, the call has served as a description of the context and instructions, so by having them present neither of us has to spend additional time explaining what happened during the call – in other words, if the junior lawyer had not been present it might have ended up being more expensive for the client based on a billable approach.]
Other questions
Many other questions remain, irrespective of how a law firm deals with its own use of AI tools. For instance, the use of AI tools by in-house legal teams could also have an effect. If they start relying on such tools, they can limit the amount of money they spend outsourcing such tasks to external law firms (and potentially get a quicker turnaround), and large teams may even reduce internal spending. Yet too much reliance on AI tools presents its own risks (see above regarding areas in which AI tools are currently less capable; in addition, lawyers with industry expertise will know things an AI cannot know, no matter the training materials). There is also an HR element to this: the less money goes to law firms, the less is available to train (junior) associates who might be tempted later on to move in-house.
More generally, it goes back to a key principle of technology deployment: before deploying any kind of technology, you have to think about its implications. With AI tools, the same applies – law firms and in-house legal teams should think carefully about what deploying them will mean not only to their productivity and revenue but also to their workforce and their know-how and career development.
What do you think? Which questions would you like a/your law firm to examine in that context? Which tasks of yours as a junior lawyer would not be replaceable by AI tools in a few years? How much of your work would have been replaced? And how do you plan on helping junior lawyers continue to learn to add value?
I hope this op-ed will at least make you pause to think about these questions for a bit. Perhaps you can take part in the discussion, too?