Ethics before efficiency

Artificial intelligence is already powerful, and it is becoming more powerful at startling speed. Power, whether held by a person, a government, or a machine, has a habit of causing mischief unless it is restrained. The difference with AI is that it has no conscience of its own. It does not know what is fair, decent, lawful, or humane. It follows instructions, detects patterns, and produces outputs. That is all. If it is to be used safely, then ethical limits must be designed into it from the start, not bolted on as an afterthought.

That, in essence, is what AI ethics means. It is the attempt to work out the principles and values that ought to govern the making and use of these systems, so that their benefits are secured without allowing their harms to run wild. UNESCO has identified the familiar themes: fairness, transparency, accountability, privacy, inclusivity, and the wider effects on society and the environment. Put more plainly, AI ethics is concerned with preventing the machine from becoming a polished instrument of bias, secrecy, carelessness, or abuse.

In the United Kingdom, this has so far been approached with a distinctly light touch. Unlike the European Union, which has moved towards statutory regulation, the British state has preferred guidance to law. That matters, because guidance, though not binding, often becomes a kind of soft law. It shapes behaviour, sets expectations, and may in time harden into firmer obligations.

Three government documents show the shape of this approach. The first was the 2019 guidance on understanding artificial intelligence ethics and safety, produced for public services. It urged ethical thinking by design and warned against familiar dangers such as bias and privacy intrusion. Its weakness was also obvious. It was high-level, non-binding, and written before generative AI changed the landscape. It set out good intentions, but little by way of enforceable standards.

The second was the Cabinet Office framework on ethics, transparency and accountability in automated decision-making, first issued in 2021 and later updated. This was more practical. It translated broad principles into expectations for departments using algorithmic systems. It spoke of avoiding unintended outcomes, delivering fair services, and ensuring someone could be held responsible. It was meant to stop the worst abuses of automated public decision-making, such as life-changing decisions being taken by a system that no one could explain and no human being properly supervised. Yet it remained guidance only. It created no rights and offered no real remedy if things went wrong.

The third, and most recent, was the 2025 AI Playbook for the UK Government. This is the broadest and most up-to-date of the three, and it plainly reflects the arrival of generative AI. Its ten principles range from understanding what AI is and what its limits are, through security, human control, procurement, skills, and lawful use. It is sensible and wide-ranging. It recognises that AI may generate harmful or prejudicial outputs unless bias is anticipated and corrected. But it too is limited. It is concerned more with process than with outcome. It tells officials to be careful, but not precisely how careful. It sets no measurable thresholds for fairness, accuracy, or acceptable risk. Most importantly, it imposes no sanctions.

From these documents, one can still draw a coherent set of principles. Ethical AI must be transparent enough to be understood, fair enough not to entrench discrimination, accountable enough that someone can answer for it, and supervised enough that a human being remains in control at critical points. It must be secure, respectful of privacy, continually tested, and directed towards some public good rather than mere convenience or unchecked efficiency. None of this is mysterious. Each principle is aimed at a familiar mischief: hidden decisions, systemic bias, data leaks, faceless irresponsibility, and machine-led error spreading before anyone has time to stop it.

These are not abstract worries. Without ethical guardrails, discrimination can become structural. Facial recognition systems have been shown to perform far worse on Black and Asian faces, with obvious and serious consequences where policing is involved. Predictive tools in criminal justice have been criticised for rating Black defendants as more likely to reoffend while underestimating risks posed by white defendants. In healthcare, algorithmic systems have underestimated the needs of Black patients compared with white patients presenting the same clinical picture. These are not harmless glitches. They are prejudice disguised as mathematics, which is often more dangerous because it carries the false authority of objectivity.

The legal world has already seen a cruder version of the same problem. Courts in several jurisdictions have had to deal with lawyers filing submissions containing hallucinated cases: citations that look plausible, sound plausible, and simply do not exist. The danger there is not only embarrassment. It is the corrosion of trust. When a machine produces falsehood in polished prose, and a lawyer repeats it without checking, the result is not innovation. It is professional failure.

That is why these questions matter particularly for lawyers. AI is no longer some exotic novelty sitting at the fringes of legal practice. It is already woven into research, drafting, communication, and administration. Used properly, it offers obvious gains: speed, breadth, convenience, and the ability to produce useful first drafts in moments. Used badly, it threatens confidentiality, accuracy, fairness, and the standing of the profession itself.

A lawyer handles material that may ruin reputations, affect liberty, or expose the most private corners of a client’s life. Confidentiality is not merely a technical duty under data protection law. It is one of the oldest obligations in professional life. AI complicates this because data may be processed offshore, stored in uncertain jurisdictions, or reused in ways the user cannot see. Sensitive material should only go into systems that are secure, compliant, and properly understood. Once trust is lost, it is not easily recovered.

Accuracy presents the next difficulty. AI can draft a pleading, summarise a case, or state a legal proposition in language that looks entirely convincing. That is precisely why it is dangerous. Every output has to be treated as a draft, never as authority. Every quotation must be checked. Every case must be read. The principle is old-fashioned enough: do not trust the headnote without checking the judgment. AI merely makes the temptation to ignore that rule much greater.

Bias also demands active vigilance. Because these systems are trained on data drawn from the world, they inherit the world’s prejudices. A barrister who uses AI without watching for unequal or discriminatory effects risks becoming complicit in them. However clever the software appears, responsibility remains with counsel. If a document misleads, if a submission is defective, or if advice goes wrong, it is the lawyer, not the machine, who carries the burden.

That in turn means competence must now include AI literacy. A lawyer need not be a coder. But he must understand what tool he is using, what it can and cannot do, and what risks follow from relying on it. Familiarity with AI will become as ordinary a professional necessity as familiarity with confidentiality, privilege, or data protection. Clients too should be told, at least in broad terms, when AI is playing a material role in their affairs. The age of governance is coming, whether through domestic regulation, professional rules, or pressures from abroad. It is wiser to build sensible habits now than to retrofit them in panic later.

There is, however, one final point, and it is not captured fully by any checklist. I would call it integrity. It is the habit of asking not only whether AI can be used for a task, but whether it should be. Some things should remain human because their value lies partly in their humanity. A handwritten note is not the same as a machine-generated message, however efficient. In the pursuit of speed, there is a risk that we flatten human dealings into mechanical transactions and call it progress.

That would be a mistake. AI will become more capable. It will settle more deeply into legal work and public administration. But the calculator never pretended to think, and it never fabricated a case citation. AI can do both, or at least appear to. That is why the ethical burden remains with us. For the modern lawyer, ethics in AI is not an optional extra. It is now part of professional competence itself. The tools may change, but the duties do not. They still belong to the advocate, and they cannot be delegated to code.
