There Is a Clear Need for Scrutiny and Oversight of AI in Immigration Programs

The increasing popularity and use of artificial intelligence (AI) and machine learning to augment decision-making processes have significant implications for human rights. Nowhere is this more evident than in Canada’s immigration and refugee system, where numerous researchers, advocates, and legal professionals have raised concerns about the potential for serious breaches of human rights and procedural fairness.

This past September, human rights and refugee lawyer Petra Molnar co-authored a report titled “Bots at the Gate” on Canada’s use of these tools, a practice that dates back to 2014.

Read the full report here: Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System

While there’s no doubt that the volume of applications to Canada’s system places significant demands on decision-makers, the use of AI tools raises numerous concerns.

Immigration and refugee decisions involve the exercise of discretion, and administrative law clearly requires that such discretion be exercised independently and fairly. Moreover, the decision-making process involves complex analysis, careful consideration of all the evidence presented, and a reasonable chain of reasoning. This is a role that requires the intelligence and capacity of a human being and cannot be delegated to AI.

The potential for serious harm to individuals, in situations that demand the utmost human discretion, means these tools warrant rigorous scrutiny and oversight.

The Risks of Automating Human Nuance

At its core, automated decision-making attempts to replicate human analysis of data, generally with an algorithm or AI. An algorithm is a set of instructions designed to organize and process data to produce a desired outcome, while AI is a machine process that mimics human cognitive functions, such as problem-solving, learning, or making decisions based on available data.

These definitions make the risk immediately apparent: how can such systems account for human nuance and discretion? How can an algorithm that sorts data aid in processing compassionate or humanitarian applications? This poses serious ethical concerns, not least because of the potential for bias and prejudice to influence algorithmic processes.
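To make that concern concrete, here is a deliberately simplified, entirely hypothetical triage algorithm of the kind a backlogged system might use to rank applications. Every field name, weight, and rule below is invented for illustration; none reflects any actual system used by any agency. The point is how easily a designer’s judgment, or prejudice, becomes an invisible rule applied automatically and at scale:

```python
# Hypothetical illustration only: a toy triage score for sorting applications.
# All field names and weights are invented for this sketch; they do not
# reflect any real system used by any immigration authority.

from dataclasses import dataclass

@dataclass
class Application:
    years_of_employment: int
    prior_refusals: int
    country_of_origin: str

# A human-written rule: applications from "high-risk" countries are penalized.
# This single set encodes a designer's judgment (or prejudice) directly
# into every decision the system makes.
HIGH_RISK_COUNTRIES = {"CountryA", "CountryB"}

def triage_score(app: Application) -> float:
    score = 0.0
    score += 2.0 * app.years_of_employment   # rewards stable employment
    score -= 5.0 * app.prior_refusals        # penalizes past refusals
    if app.country_of_origin in HIGH_RISK_COUNTRIES:
        score -= 10.0                        # blanket penalty by nationality
    return score

# Two applicants identical on every merit-based factor:
a = Application(years_of_employment=5, prior_refusals=0, country_of_origin="CountryA")
b = Application(years_of_employment=5, prior_refusals=0, country_of_origin="CountryC")
print(triage_score(a))  # 0.0
print(triage_score(b))  # 10.0
```

The two applicants receive very different scores based solely on country of origin, and nothing in the numeric output reveals why. A humanitarian or compassionate factor that a human officer would weigh simply has no field in the data structure at all.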

Clear Need for Scrutiny, But No Obvious Solutions

As significant an issue as this is, the use of AI in immigration systems has no immediate or obvious solution. The federal government will likely continue to explore, fund, and develop AI tools, especially as the number of refugee claims reaches new heights. Canada’s legal obligations and policy commitments suggest that immigration is not likely to stop any time soon, nor is it likely to slow down.

What’s more, Canada’s backlogged immigration and refugee systems are one of the chief reasons why AI systems hold appeal. They automate time-consuming, costly tasks, helping human staff sort through the backlog at a faster rate than before.

At the same time, while greater scrutiny and transparency are needed for these systems and processes, some opacity must be maintained. Fully transparent source code, something Molnar and her co-authors recommend, opens the door to exploitation of immigration systems by those seeking to game them. Another of Molnar’s recommendations could offer a viable alternative: an independent oversight committee or regulatory body.

This is new ground for debate in Canada’s immigration and refugee system. Given the impact these decisions have on the lives of applicants, though, it is imperative that Canada do its due diligence well before relying on AI in its administrative decision-making.