A detailed legal opinion has raised serious concerns about the Home Office's use of artificial intelligence (AI) in asylum decision-making, suggesting that parts of the system may be unlawful, particularly where applicants are not informed that such tools are being used.
You can download the 84-page legal opinion here.
The opinion was prepared by leading barristers Robin Allen KC and Dee Masters, together with Joshua Jackson, and was commissioned by the Open Rights Group. It concludes that the use of generative AI in the asylum process may breach key legal obligations, including procedural fairness, data protection rules, and equality law.
According to the opinion, the Home Office is currently using two AI tools in asylum cases. The Asylum Case Summarisation (ACS) tool creates summaries of applicants' interview responses, while the Asylum Policy Search (APS) tool helps caseworkers search country-of-origin information. However, both tools generate new text rather than simply organising existing information, meaning they can filter, reshape, or potentially omit important details that are relevant to asylum decisions. Applicants are not informed that these tools are being used and are not given access to the generated content.
The findings suggest that this lack of transparency could allow asylum seekers to challenge decisions where AI has influenced the outcome of their case. The opinion highlights that the ACS tool produced inaccurate summaries in around 9% of cases during testing, while some users of the APS tool have also raised concerns about its reliability. It also notes that there is limited public information on how the accuracy of these systems has been assessed or whether sufficient safeguards are in place.
There are further concerns that decision-makers may rely on AI-generated summaries instead of reviewing full evidence, which could result in important facts being overlooked. This creates a risk that asylum decisions are based on incomplete or incorrect information, potentially leading to material errors of fact. The absence of safeguards requiring caseworkers to verify AI outputs against original evidence increases this risk, especially as applicants are not given the opportunity to review or correct these summaries.
The opinion also argues that the Home Office may be failing in its legal duty to properly assess the impact and reliability of these tools before using them in asylum determinations. This includes examining risks such as bias, discrimination, and whether non-AI alternatives could achieve similar outcomes without compromising fairness.
In addition, the use of AI in this context raises serious data protection concerns. The ACS tool processes sensitive personal information, including details about race, religion, political views, and sexual orientation, which brings strict obligations under UK data protection law. At the same time, the lack of a published Equality Impact Assessment means it is unclear whether the potential discriminatory effects of these tools have been properly considered.
Oversight remains limited: regulators and independent bodies have only restricted visibility into how these systems operate, which reduces accountability and public scrutiny. The authors of the opinion emphasise that where AI is used in decisions affecting fundamental rights, there must be full transparency about how the systems work and how their outputs are used.
From a LawSentis perspective, this development highlights a growing and serious concern in UK immigration law. While technology can support efficiency, asylum decisions involve high-stakes human rights considerations and must be handled with extreme care. The lack of transparency, the risk of inaccuracies, and the absence of applicant access to AI-generated material raise clear legal and ethical issues. In our view, applicants should always be informed when AI is used, given access to the relevant outputs, and allowed to challenge any errors. This issue is likely to lead to legal challenges and could significantly change how the Home Office uses AI in asylum cases going forward.