Prompt engineering

Prompt engineering is, in essence, computer programming in plain English. It is the art of writing instructions for an artificial intelligence system so that it produces something useful, rather than something vague, irrelevant, or simply wrong. Put like that, it sounds almost too simple to deserve much attention. In truth, it is one of the most important practical skills in using modern AI well.

There is a temptation to think of prompting as some new and slightly magical trick: a clever way of coaxing a machine into being more helpful. It is better understood as the latest stage in a long history. Once, instructions were given to machines by cogs and wheels. Later they were encoded on punched cards. Later still they were written in languages such as BASIC. Now, at least for many purposes, the instruction language is ordinary English. The means have changed, but the underlying problem has not. A machine must still be told what to do, and if it is told badly, it will do the wrong thing.

That is why the old principles of programming still apply. Instructions must be logical, precise and coherent. They should be detailed enough to guide the machine down the right path, but not so cluttered that the purpose is obscured. They should make clear what the task is, what the desired form of output is, and what constraints apply. A good prompt narrows the field. It channels the machine towards a useful result. A bad prompt leaves too much to guesswork, and guesswork is where inaccuracy begins.

This matters particularly because generative AI has a dangerous quality: it is often plausible even when it is wrong. Ambiguity in a prompt does not usually produce a helpful admission of uncertainty. It often produces a polished answer that looks convincing. In legal work, that is an obvious hazard. Hallucinated authorities, muddled reasoning, stale law and irrelevant digression all become more likely when the instructions are loose. Prompt engineering is therefore not a decorative skill. It is a practical means of reducing error.

A number of books and guides now deal with the subject. One of the more engaging introductions I came across was Nathan Hunter’s guide to prompt engineering. Google has also produced accessible material, including its Prompting Guide 101 for Gemini for Workspace. OpenAI’s GPT-5 prompting guide appeared later, though much of it is concerned with agents and more technical uses, which are less helpful to the ordinary lawyer. The common lesson running through these materials is straightforward enough. Use natural language. Be specific. Give context. Refine the prompt through iteration. Use the machine itself to improve what you have written.

That last point is one of the most useful. I often ask ChatGPT to improve my own prompts. One can simply say: create a detailed prompt for me to do X, Y and Z, no more than twenty lines long. The system will usually return a sharper and better organised instruction than the one first drafted in haste. One can then go further and ask for suggestions to improve that prompt, select what is useful, and refine it again. In that sense, AI can act as its own prompt editor. That is not cheating. It is sensible use of the tool.

For lawyers, the central lesson is that an AI system cannot infer what matters unless you tell it. One must begin with a defined aim. Is the task a summary, a draft, an explanation, an argument, a chronology, a note for oral submissions? The prompt should say so at once. If you ask merely for a summary of a judgment, you may receive a general academic overview. If what you actually need is something short, practical and forensic, then that must be specified. A prompt such as: summarise the judgment in X v Y for use in oral submissions to a High Court judge, focusing on the ratio and any observations supporting limitation, and keep it under 500 words, gives the machine far less room to wander.
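For readers who like to see the habit made mechanical: the discipline of stating task, audience, focus and length can be sketched as a small helper function. This is an illustrative sketch only, not any tool's actual interface; every name in it is invented for the example.

```python
def build_prompt(task, audience, focus=None, word_limit=None):
    """Assemble a defined-aim prompt from explicit parts.

    Each element the machine cannot infer is stated outright:
    the task, the audience, the focus, and the length limit.
    All parameter names are illustrative.
    """
    parts = [task, f"Audience: {audience}."]
    if focus:
        parts.append(f"Focus on: {focus}.")
    if word_limit:
        parts.append(f"Keep it under {word_limit} words.")
    return " ".join(parts)


prompt = build_prompt(
    task="Summarise the judgment in X v Y for use in oral submissions.",
    audience="a High Court judge",
    focus="the ratio and any observations supporting limitation",
    word_limit=500,
)
```

The value of writing it out this way is simply that nothing is left implicit: a blank `focus` or `word_limit` is a deliberate choice, not an oversight.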

Context is equally important. Law is shaped by setting. A question about costs after discontinuance, without more, invites a generic answer. A question that specifies the case is a personal injury claim, identifies the stage at which it was discontinued, and explains why it was discontinued, is far more likely to produce something useful. The machine works better when it knows where it stands.

Tone matters too. A skeleton argument, a client note, an article, a LinkedIn post and a speaking note all require different registers. Unless told otherwise, AI will often choose the wrong one. That is why audience and style should be stated explicitly. It is no use receiving a breezy blog-style answer when what is needed is a sober note for a solicitor, or a dense academic paragraph when what is required is something a client can understand.

Another powerful technique is role prompting. If one tells the AI to act as a costs draftsman instructed by the paying party in a commercial dispute, the answer is shaped by that perspective. If one tells it to act as appellate counsel, or as a cross-examination coach, or as a junior asked to prepare a first draft, the priorities change accordingly. The role does not make the answer correct, but it often makes it more focused.
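In the chat-style interfaces most systems now expose, a role is usually fixed by a "system" message placed before the user's question. The sketch below builds that message list and nothing more; it deliberately makes no call to any service, and the wording of the role is a hypothetical example.

```python
def role_messages(role, question):
    """Build a chat message list that fixes the assistant's perspective.

    Chat-style APIs generally accept a list of {"role", "content"}
    entries; the "system" entry is where role prompting lives.
    """
    return [
        {"role": "system", "content": f"Act as {role}."},
        {"role": "user", "content": question},
    ]


messages = role_messages(
    "a costs draftsman instructed by the paying party in a commercial dispute",
    "Draft points of dispute on the hourly rates claimed.",
)
```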

Structure is also vital. Much legal work depends not just on conclusions but on the order of reasoning. If that order is not demanded in the prompt, AI may jump straight to a result without showing how it got there. Better prompts force the reasoning into stages: identify the elements of the cause of action, apply the facts, cite the leading authorities, then identify weaknesses or gaps. That produces something closer to legal analysis and less like unsupported assertion.

Exemplification is another useful device. Show the machine one well-drafted paragraph or one properly formatted point of dispute, and ask it to follow that pattern for the next five items. This is the digital equivalent of handing a junior a precedent and saying, do the rest like this. It is one of the best ways to keep consistency of style and format.
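Practitioners of the craft call this few-shot prompting: show one worked example, then ask for the rest in the same pattern. A minimal sketch, with hypothetical example text standing in for the precedent paragraph:

```python
def few_shot_prompt(instruction, example, items):
    """Compose a prompt that shows one model answer, then asks for more.

    `example` is the well-drafted paragraph to imitate; `items` are
    the points still to be drafted in the same style.
    """
    lines = [
        instruction,
        "",
        "Example of the required style:",
        example,
        "",
        "Now draft the following in the same style:",
    ]
    lines += [f"{i}. {item}" for i, item in enumerate(items, start=1)]
    return "\n".join(lines)


prompt = few_shot_prompt(
    "Draft points of dispute in the style of the example.",
    "Point 1: The hourly rate claimed for the Grade A fee earner is excessive ...",
    ["Counsel's brief fee", "Photocopying charges"],
)
```

The numbering matters: it invites the machine to answer item by item rather than in one undifferentiated block.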

Prompting also works best as a conversation rather than a single act of dictation. The first answer is often serviceable rather than right. One should refine it. Omit the procedural history. Focus on limitation. Rewrite it as a Law Reports headnote. Cut it to 400 words. Add the weaknesses in the argument. That process of iteration is not a sign of failure. It is the method by which useful output is obtained.
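Mechanically, iteration is simpler than it sounds: in chat-style systems the whole conversation is resent each turn, so refinement is just appending another user message to the history. A sketch, again building the structure only and calling nothing:

```python
def refine(history, follow_up):
    """Add a refinement instruction to an ongoing conversation.

    `history` is a list of {"role", "content"} messages; each
    follow-up becomes one more user turn on the end of it.
    """
    return history + [{"role": "user", "content": follow_up}]


history = [{"role": "user", "content": "Summarise the judgment in X v Y."}]
for step in [
    "Omit the procedural history.",
    "Focus on limitation.",
    "Cut it to 400 words.",
]:
    history = refine(history, step)
```

Keeping the earlier turns in the list is the point: each refinement is read against everything that came before, which is what makes the exchange a conversation rather than a fresh start.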

Constraints are part of the same discipline. Word limits, exclusions, formatting rules, jurisdictional limits, temporal limits and citation requirements all help. If one wants the law as it stood before April 2013, one must say so. If one wants only English and Welsh authorities, one must say so. Otherwise the machine may mix legal systems, blend old and new law, or drift into irrelevance.

There are, however, two warnings that matter more than the rest. The first is verification. AI output must be treated with the same scepticism one would apply to an unfamiliar case note found on the internet. It may fabricate, misstate or omit. Asking it to identify statements needing authority or to flag possible inaccuracies can help, but the duty to check remains with the lawyer. The second is confidentiality. Unless one is using a secure, approved system, sensitive or non-public material should not be entered at all. Prompt engineering is not merely about getting better answers. It is also about using the technology safely.

In the end, prompt engineering is not a bag of tricks. It is a discipline. The best prompts combine several habits at once: a defined aim, clear context, the right role, tight constraints, iteration, verification and proper regard for security. With practice, these habits become instinctive. That is when prompt engineering begins to resemble any other professional craft. The point is not merely to make the machine speak. It is to make it speak clearly, usefully and to purpose.
