What lawyers get wrong about AI

Artificial intelligence has arrived in the legal profession accompanied by an unusual degree of noise. Conferences, newsletters, webinars, and product demonstrations all compete to explain what is coming, why it matters, and why immediate action is required. Much of this activity is well-intentioned. Some of it is not. Very little of it is especially helpful.

The first mistake lawyers commonly make is to think of artificial intelligence as something radically new. In truth, the legal profession has been using forms of automation and machine assistance for decades. Search engines, document comparison tools, predictive coding in disclosure, and even spell-checkers are all examples of machines assisting human judgment. Modern AI systems are more capable and more flexible than their predecessors, but they sit on a continuum rather than representing a clean break with the past.

The second mistake is to assume that artificial intelligence is best understood as a substitute for human reasoning. This is a category error. Contemporary AI systems do not “reason” in any meaningful sense. They identify patterns in large quantities of data and generate outputs that resemble human language, images, or decisions. That resemblance can be impressive, even unsettling, but it should not be mistaken for understanding. Treating AI outputs as if they were conclusions reached by a thinking mind invites error, complacency, and professional embarrassment.

A third error is the belief either that artificial intelligence is about to replace lawyers entirely, or that it can safely be ignored because it will never do so. Both positions are comforting, and both are wrong. AI will not abolish the need for legal judgment, advocacy, or ethical responsibility. Equally, it will change how legal work is done, how quickly it is done, and which tasks justify the time of a trained professional. The profession has navigated similar transitions before. It will do so again, though not without friction.

One of the more persistent misconceptions is that using AI is simply a matter of efficiency. The claim is often made that artificial intelligence will allow lawyers to do the same work faster and more cheaply, with no loss of quality. Sometimes this is true. Often it is not. Speed is not a neutral virtue in legal work. Faster drafting can mean less reflection. Faster research can mean less discrimination between sources. Faster production of advice can mean less time spent understanding the client’s real problem. AI tools magnify these risks because they lower the cost of producing plausible-sounding material.

Related to this is the tendency to treat AI systems as if they were junior lawyers. The analogy is tempting, but imperfect. A junior lawyer can explain their reasoning, recognise when they are out of their depth, and learn from correction. An AI system does none of these things. It produces outputs without insight into their correctness, relevance, or consequences. When it is wrong, it is wrong confidently. When it is uncertain, it does not know that it is uncertain. Supervision therefore requires more scepticism, not less.

Another mistake lies in the way lawyers talk about regulation. There is a tendency to assume that AI regulation is primarily a technical exercise: a matter of defining algorithms, classifying risk levels, and imposing compliance obligations. In reality, regulation of artificial intelligence is largely about institutional power and responsibility. Who is accountable when an automated system produces harm? Who decides what counts as acceptable risk? Which interests are protected, and which are exposed? These are legal and political questions, not engineering ones.

Some lawyers also underestimate how quickly habits form. Tools that begin as optional conveniences can become defaults with surprising speed. Once that happens, professional standards begin to shift. The question is no longer whether a particular use of AI is appropriate, but why it was not used. That change in expectation matters. It has implications for negligence, competence, and the definition of reasonable professional conduct. Ignoring that dynamic does not prevent it from operating.

There is, finally, a mistake of tone. Much writing about artificial intelligence adopts the language of inevitability. Change is presented as unstoppable, resistance as futile, and scepticism as ignorance. This is unhelpful. Lawyers are trained to interrogate claims of inevitability. They understand that technologies are adopted, shaped, constrained, and sometimes rejected through social and legal processes. AI is no exception. Treating it as inevitable merely obscures the choices that are being made.

None of this is an argument against using artificial intelligence in legal practice. On the contrary, AI tools can be genuinely useful when deployed with care. They can assist with first drafts, summaries, issue spotting, and the organisation of material. Used properly, they can free up time for work that truly requires human judgment. The difficulty lies not in the tools themselves, but in the stories we tell about them.

The sensible response is therefore neither enthusiasm nor fear, but discipline. Lawyers need to understand what AI systems do well, what they do badly, and where responsibility lies when things go wrong. They need to be clear about when a task can safely be delegated to a machine, and when it cannot. Above all, they need to resist the temptation to outsource judgment itself.

Artificial intelligence will not make lawyers obsolete. It will, however, make sloppy thinking more dangerous, and unexamined assumptions more costly. The profession’s task is not to predict the future, but to maintain its standards while adapting to new tools. That is a challenge lawyers are well equipped to meet, provided they are willing to think clearly, slowly, and without illusion.
