Artificial Intelligence Regulation Is Not About Machines

Much of the current discussion about regulating artificial intelligence begins in the wrong place. It starts with the technology: algorithms, models, training data, computational power. From there it moves quickly to classification schemes, risk tiers, and compliance frameworks. The result is regulation that appears technical, sophisticated, and reassuring. It is also, in important respects, beside the point.

Artificial intelligence regulation is not, at heart, about machines. It is about power: who exercises it, who benefits from it, and who bears the consequences when things go wrong.

This is not a novel problem. Law has always been concerned with the control of tools that amplify human capacity. Printing presses, industrial machinery, financial instruments, and medical technologies all forced legal systems to confront the same underlying questions. Who may use them? On what terms? With what safeguards? Artificial intelligence is distinctive in scale and speed, but not in kind.

One reason the debate so often goes astray is that AI systems are easy to anthropomorphise. They speak in fluent language, generate convincing images, and produce outputs that look uncannily like thought. This encourages the idea that regulation should focus on the system itself, as if it were an autonomous actor. That instinct is understandable, but misleading. AI systems do not have intentions, interests, or responsibilities. People do.

When harm occurs, it does not arise from an algorithm in the abstract. It arises from choices: choices about design, deployment, training data, commercial incentives, and institutional use. Regulation that concentrates on technical properties while ignoring those choices risks mistaking the surface for the substance.

Consider, for example, the notion of “high-risk” AI. Risk is often defined by reference to the sector in which a system is used: employment, credit, policing, healthcare, justice. This is sensible as far as it goes, but it obscures an important fact. The same technical system may be benign in one institutional context and harmful in another. Risk is not inherent in the code. It emerges from the way power is exercised through it.

This has implications for accountability. One of the persistent difficulties in AI regulation is the tendency to diffuse responsibility. Developers blame users. Users blame vendors. Vendors blame the opacity of the model. The law, however, is ill-suited to moral fog. It requires someone to answer for outcomes. Regulation that does not clearly identify points of responsibility is unlikely to command confidence, let alone compliance.

There is also a danger in assuming that more regulation necessarily means better regulation. Highly detailed rules can create the appearance of control while encouraging box-ticking and formal compliance. Organisations learn quickly how to satisfy procedural requirements without changing underlying behaviour. In fast-moving technical fields, this problem is exacerbated by the simple fact that regulation tends to lag behind practice.

A further complication lies in the international character of artificial intelligence development. Models are trained in one jurisdiction, deployed in another, and used across many more. Regulatory ambition often collides with economic reality. States wish to protect citizens without discouraging innovation or driving activity elsewhere. The result is frequently a compromise that satisfies no one entirely.

None of this means that regulation is futile. It does mean that its objectives need to be stated with care. The most important function of AI regulation may not be to prevent all harm, which is unrealistic, but to shape incentives. Law is often at its most effective when it changes what organisations find profitable, reputationally acceptable, or legally defensible.

From that perspective, transparency obligations, audit requirements, and clear liability rules may matter more than exhaustive technical standards. So too may professional norms. In regulated professions, expectations about competence, supervision, and disclosure can be as powerful as statute. Artificial intelligence will test those norms, but it will also be shaped by them.

For lawyers, this matters in two distinct ways. First, as advisers, they will be asked to interpret and apply AI regulation in circumstances that legislators did not fully anticipate. Secondly, as users, they will themselves be subject to emerging standards of reasonable professional conduct. The question will not simply be whether a particular use of AI was permitted, but whether it was responsible.

There is a temptation, when confronted with rapid technological change, to look for definitive answers. Regulation rarely supplies them. What it can provide is a framework within which judgment is exercised, challenged, and justified. That is as true of artificial intelligence as it is of any other powerful tool.

Ultimately, the success or failure of AI regulation will depend less on how precisely it defines machines, and more on how clearly it allocates responsibility among people. Law has always been concerned with the latter. It would be a mistake to forget that now.
