Artificial Intelligence, costs and proportionality

Artificial intelligence is often discussed as a means of reducing legal costs. Documents can be drafted more quickly, research can be conducted more efficiently, and routine tasks can be automated. From that observation it is tempting to draw a further conclusion: that AI will make litigation cheaper, and that proportionality will take care of itself. That conclusion is mistaken.

Costs are not driven solely by the speed at which tasks are performed. They are driven by choices: what work is done, how much of it is done, and what level of scrutiny is applied. Artificial intelligence changes the economics of legal work, but it does not remove the need for judgment about what is reasonable and proportionate.

The modern law of costs is centrally concerned with proportionality. It asks whether the costs incurred bear a reasonable relationship to what is at stake, taking into account the value, complexity, and importance of the case. That assessment has always been resistant to purely mechanistic approaches. Artificial intelligence does not alter that resistance. If anything, it makes the exercise more delicate.

One immediate effect of AI is to lower the marginal cost of producing material. Drafts can be generated quickly. Research memoranda can be expanded almost without limit. Chronologies, summaries, and alternative formulations can be produced in abundance. The risk is obvious. When production becomes cheap, restraint becomes harder. What was once avoided because it would take too long can now be done “just in case”.

This creates a paradox. A party may be able to show that individual items of work took very little time. Yet the overall volume of work may increase significantly. Proportionality is not defeated by efficiency. A thousand unnecessary steps remain unnecessary, even if each one is quick.

There is a further complication. AI-generated work often requires careful checking. Time saved at the front end may be spent at the back end verifying accuracy, correcting errors, and ensuring that the output is fit for purpose. That checking time is real time. It is not eliminated by automation. Where AI outputs are relied upon without adequate supervision, the risk shifts from cost to error, with potentially serious consequences.

From the perspective of assessment, this raises awkward questions. If a solicitor uses AI to produce a draft in minutes, but then spends hours reviewing and refining it, how should that time be characterised? Is it reasonable? Was it necessary? Would a competent practitioner have proceeded in the same way? These are familiar questions, but they arise in a new factual setting.

There is also the question of expectation. As AI tools become more widespread, parties may begin to argue that certain work should cost less, simply because technology exists that could have made it cheaper. That argument is superficially attractive. It is also incomplete. The existence of a tool does not establish that its use was appropriate in the circumstances, or that its output would have been reliable. Nor does it follow that a party is obliged to adopt the cheapest available method regardless of risk.

Conversely, there may be cases in which a failure to use appropriate technology renders costs unreasonable. If a task can be performed more efficiently without loss of quality, and if that method is widely accepted, then insistence on a slower and more expensive approach may be difficult to justify. This is not a new idea. The law has encountered it before in relation to electronic disclosure, document management systems, and digital bundles.

The difficulty lies in identifying where that line is drawn. It will not be drawn by software vendors. It will be drawn by professional norms, judicial expectation, and decisions on assessment. Those decisions will be fact-sensitive and incremental. They will not lend themselves to simple rules.

Artificial intelligence also has the potential to distort arguments about proportionality itself. The availability of powerful tools may encourage parties to pursue marginal points, elaborate lines of inquiry, or speculative arguments that would previously have been abandoned as uneconomic. The fact that such work can be done cheaply does not make it proportionate. Proportionality is concerned with necessity and value, not merely cost.

For costs judges, this presents a familiar but intensified task. They must look beyond time sheets and technology labels to the substance of what was done and why. The use of AI will neither justify excessive work nor condemn reasonable caution. The focus will remain on judgment, not novelty.

For practitioners, the lesson is a simple one. Artificial intelligence does not change the principles of costs. It changes the context in which they are applied. The obligation remains to exercise restraint, to focus on what matters, and to keep the scale of work commensurate with the dispute. AI may assist in meeting that obligation, but it cannot replace it.

There is a temptation to think of technology as an external force acting upon the law. In truth, the law of costs has always adapted to changes in practice. Artificial intelligence will be absorbed in the same way. It will sharpen some arguments, complicate others, and expose poor judgment more quickly than before.

Costs law is not hostile to efficiency. It is hostile to excess. Artificial intelligence makes efficiency easier. It also makes excess more tempting. The task for lawyers, and for those who assess their work, is to distinguish between the two with the same care as ever.
