The UK asylum system is built on a paradox. It demands absolute consistency from applicants, yet it relies on a procurement chain for interpreters that systematically prioritises the lowest cost over professional accreditation.
When an unqualified interpreter, selected through layers of sub-contracting, misinterprets a nuance, the court does not log a translation error. It logs a witness inconsistency. This is not a glitch. It is the predictable outcome of a system designed to treat translation as an expendable commodity.
The Limits of Traditional Research
For the past year, I have been building this investigation through interviews with barristers, solicitors, and professional linguists. The data is clear: the system is manufacturing contradictions that end up deciding asylum claims.
But there is a barrier to traditional investigative journalism: empathy fatigue. It is easy for a reader to gloss over the technical details of procurement chains or tender logic. It is much harder to look away when you are forced to experience the distortion yourself.
Introducing the "Meaning Collapse" Simulator
I am shifting this research into a new medium. I am currently developing an interactive, browser-based simulation that puts the user in the chair of an asylum seeker.
The simulation uses WebXR technology, real-time speech-to-text, and LLM-based interpreter agents to replicate the real-world conditions of an asylum hearing:
- The Latency Trap: You have limited time to communicate complex experiences.
- The Distortion Engine: Your words are processed by a cost-optimised model that introduces omission, substitution, editorialisation, and forms of algorithmic bias.
- The Record: You are then forced to confront your own testimony as it appears on the official court record, stripped of nuance and framed as contradiction.
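To make the distortion layer concrete, here is a minimal sketch of how an LLM interpreter agent could be configured. Everything in it is an assumption for illustration: the failure-mode names come from the interview research, but `completeChat` is a stub standing in for a real model call, and the prompt wording is not the production configuration.

```typescript
// Failure modes the distortion layer can inject, mirroring the error
// categories documented in the interview research.
type FailureMode = "omission" | "substitution" | "editorialisation";

interface DistortionConfig {
  modes: FailureMode[]; // which failure modes are active this turn
  severity: number;     // 0 = faithful rendering, 1 = heavy distortion
}

// Placeholder LLM call. This stub just echoes the input so the sketch
// runs without external dependencies; the real build would route this
// through whichever model provider the simulator settles on.
async function completeChat(system: string, user: string): Promise<string> {
  console.log(`[system prompt]\n${system}\n`);
  return user;
}

// Renders the applicant's utterance through a "cost-optimised" interpreter
// agent instructed to degrade meaning in controlled, loggable ways.
async function interpret(
  utterance: string,
  config: DistortionConfig,
): Promise<string> {
  const system = [
    "You are simulating an under-qualified court interpreter.",
    `Apply these failure modes: ${config.modes.join(", ")}.`,
    `Severity: ${config.severity} (0 = faithful, 1 = heavy distortion).`,
    "The output must stay fluent and plausible to the court.",
  ].join("\n");
  return completeChat(system, utterance);
}

// Example turn: the distorted rendering is what enters the official record.
interpret("They burned our house in March; we fled the same night.", {
  modes: ["omission", "editorialisation"],
  severity: 0.6,
}).then((record) => console.log(`[record] ${record}`));
```

The point of structuring it this way is that every distortion is parameterised and logged, so the gap between what was said and what was recorded becomes measurable rather than anecdotal.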
Why This Matters for Investigative Research
My goal is not to build a game, but to create a high-fidelity tool for empirical demonstration. By algorithmically replicating the meaning collapse, I can show that the failures of the current system are not random human errors. They are the predictable result of the Home Office's chosen procurement architecture.
If we can quantify how quickly meaning drift occurs in a controlled simulation, we can build a stronger, data-backed case for why mandatory accreditation and real-time recording are not just nice-to-haves. They are prerequisites for a fair legal system.
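As a sketch of what that quantification could look like, assuming a sentence-embedding model is available: embed what the applicant actually said and what the record shows, then track how far apart they drift turn by turn. The toy `embed` below is a character-frequency stand-in so the example runs on its own; a real measurement would call a proper embedding model.

```typescript
// One simulated turn: what the applicant said vs. what the record shows.
interface Turn {
  original: string;
  recorded: string;
}

// Toy stand-in for a real sentence-embedding model: a character-frequency
// vector. A real measurement would use an embedding API instead.
async function embed(text: string): Promise<number[]> {
  const vec = new Array(26).fill(0);
  for (const ch of text.toLowerCase()) {
    const i = ch.charCodeAt(0) - 97;
    if (i >= 0 && i < 26) vec[i] += 1;
  }
  return vec;
}

// Cosine similarity between two embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  const denom = Math.sqrt(na) * Math.sqrt(nb);
  return denom === 0 ? 0 : dot / denom;
}

// Drift per turn: 0 means a faithful record, values near 1 mean the
// meaning has collapsed. A curve over turns shows how fast drift compounds.
async function driftCurve(turns: Turn[]): Promise<number[]> {
  const drifts: number[] = [];
  for (const t of turns) {
    const [a, b] = await Promise.all([embed(t.original), embed(t.recorded)]);
    drifts.push(1 - cosineSimilarity(a, b));
  }
  return drifts;
}

// Example: a single turn where detention history is flattened in the record.
driftCurve([
  {
    original: "I was detained twice before I escaped.",
    recorded: "I was arrested once.",
  },
]).then((curve) => console.log(curve));
```

A drift curve like this is what would let the project move from "translation errors happen" to a per-exchange measure of how fast meaning degrades under a given interpreter configuration.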
Join the Development
I am building this as an open-research tool. If you are a developer interested in the intersection of LLMs and human rights, or a legal professional who wants to stress-test the simulation for realism, I want to hear from you.
FAQ
How does the simulation work?
It uses real-time speech-to-text and LLM processing to simulate interpreter-induced meaning drift.
Why is this important for asylum proceedings?
It demonstrates how translation errors become credibility findings in UK courts.