AI Judges began with a practical question that felt less speculative than it first appeared. Court systems continue to struggle with procedural delay. Case backlogs stretch hearings across months and sometimes years. At the same time, artificial intelligence is increasingly integrated into legal practice. Lawyers use automated systems to draft documents more quickly. Administrative tools streamline preparation. If case production accelerates, adjudication must also keep pace.

Within this context, the idea of an AI judge often appears reasonable. It is framed as efficient, consistent, and objective. Unlike a human judge, it is presumed not to tire, not to hesitate, not to carry personal prejudice. Efficiency becomes a sign of progress. Consistency becomes a proxy for fairness.

AI Judges does not argue for or against this development. Instead, it stages the encounter.

The project imagines a near-future magistrates’ court in which AI judges operate within defined limits. Their jurisdiction is deliberately restricted to minor criminal cases. They do not preside over complex trials. They determine only guilt or innocence. Sentencing remains in the hands of human judges.

This boundary matters. The system is not presented as a total replacement. It is introduced as a practical intervention, a way of easing procedural strain. It is plausible. It feels administratively sensible.

Each hearing begins with a structured recap delivered in a measured, pre-recorded voice. The AI judge outlines the case through predefined variables: the offence, the verified evidence, prior history, behavioural indicators. The language is formal and procedural. It does not deviate. It does not improvise.

Participants receive different outcomes. The variation is subtle but visible. The system appears consistent, yet it responds to coded differences within its structured inputs. Tone, cooperation, and compliance enter the system's logic as inputs, even while framed as neutral data.

The voice of the AI judge is male and explicitly robotic. This choice was intentional. Judicial authority has historically been gendered. The figure of the judge, detached and rational, has often been associated with masculine legitimacy. The robotic quality reinforces this distance. The voice is stripped of warmth. It speaks without inflection. It performs neutrality through restraint.

The effect is not theatrical. It is procedural.

The performance takes place within a constructed courtroom. This spatial decision was critical. Legal authority does not operate solely through law. It operates through architecture and ritual. Elevated benches, controlled speech, insignia, and formal language produce legitimacy before any verdict is delivered.

A digital interface would position the AI judge as a tool. The courtroom situates it within an institution.

Participants stand before the judge. They receive a printed verdict slip styled in the visual language of official documentation. The typography is restrained. The layout is structured. The paper feels final. Authority becomes tangible.

The system itself operates through a visible Wizard of Oz structure. While the pre-recorded voice delivers the verdict, I enact the system behind a barrier in full view of the audience. The mediation is not hidden.

This visibility is central. It makes authorship explicit. The AI judge speaks as though autonomous, yet its decisions are authored. In real-world AI systems, authorship is distributed across programmers, training data, institutional priorities, and design choices. Responsibility becomes diffuse. In the performance, it is condensed into a visible figure.

The system includes moments of intervention. If a defendant contests verified evidence extensively or attempts to push beyond the structured recap, the AI judge identifies a mismatch and redirects the case to a human authority. It does not argue. It does not defend itself. It withdraws.

Appeals are available. However, requesting reassignment to a human judge involves significant delay. Waiting times extend close to a year. The participant must decide whether to accept the rapid verdict or endure prolonged uncertainty.

Efficiency becomes incentive.

This structure mirrors a broader institutional logic. Automation promises acceleration. Delay is framed as failure. Speed begins to resemble fairness.

Audience responses revealed how unstable that assumption can be. Some participants expressed confidence in the AI judge. The speed of the verdict felt decisive. The absence of hesitation appeared objective. Impersonality read as equality.

Others felt unsettled. The translation of lived experience into structured variables seemed reductive. The lack of explanation raised questions about accountability. The authority of the system felt disproportionate to its transparency.

These reactions illuminate how neutrality is constructed. Structured language, procedural clarity, and spatial authority can stabilise belief. When decisions are delivered confidently and quickly, they can feel correct.

AI Judges does not claim that artificial systems are inherently more biased than human judges. Instead, it examines how bias is reorganised. AI does not eliminate bias. It redistributes it across systems that present themselves as neutral.

By making the operator visible, the project foregrounds authorship. It asks how readily we accept authority when it is framed as computational. It questions whether neutrality is an absence of bias or a performance of distance.

The work ultimately shifts attention away from whether AI judges are desirable and toward how easily they might be normalised. When efficiency is framed as progress and neutrality is convincingly staged, institutional trust can form quickly.

The encounter becomes less about technology and more about belief.