Examining the Efficacy of IRCC’s Use of AI

A recent article[1] discusses how IRCC has begun modernizing Canada’s immigration system by integrating more AI and advanced data analytics. This is at least partly a response to the rising number of applications per year, with IRCC making twice as many decisions in 2022 as in 2021. As AI is applied to more types of applications, IRCC’s position is that the technology will “better meet the needs of clients and Canada”, including by increasing processing efficiency and reducing employees’ workload.

The IRCC Minister is quoted as saying that the decision to accept or reject an application is always made by an officer. However, that statement leaves unclear whether the reasons for decision are also generated only by an officer and not by an AI algorithm. If IRCC uses AI to create reasons for decision by combining template statements such as “The purpose of the applicant’s visit does not appear reasonable given the applicant’s socio-economic situation”, the result may be at odds with the requirement that administrative decision-making be reasonable. According to the Supreme Court, reasons for decision must explain how and why the decision was made.[2] Jurisprudence holds that, while it is not unreasonable for visa officers to use template statements such as the one quoted above, it is unreasonable if the reasons for decision “are not responsive to the specific evidence and submissions at the core of each application.”[3]

Some risks are inherent in any AI. Given the intricacy of such systems, engineering, building and deploying an AI-based system, as well as maintaining and repairing it, come with enormous costs. AI-based systems are, of course, vulnerable to cybersecurity threats such as viruses and hacking. And since humans create AI-based systems, bias can enter the system in many ways – for example, through humans unintentionally inserting it, or because the test data used to develop the system prior to implementation was itself biased.[4]

One of the ways in which IRCC currently uses AI is to assess biometrics. Biometric data, such as fingerprints and photographs, must be submitted in most cases for work permit, study permit, permanent resident and various other visa applications. IRCC uses facial recognition technology (FRT) to validate the authenticity of personal information submitted.[5] However, studies point to divergent error rates across demographic groups (including gender, race and age).[6] As a result, some US cities, such as Boston and San Francisco, have banned their local agencies from using FRT.[7]
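To make the “divergent error rates” concern concrete, the sketch below shows how FRT audits typically compare a per-group false match rate. All numbers, group names and the `false_match_rate` helper are invented for illustration; they are not drawn from the cited studies or from any real system.

```python
# Illustrative sketch with hypothetical figures: how a per-group
# false match rate comparison is computed in an FRT audit.

def false_match_rate(false_matches: int, impostor_attempts: int) -> float:
    """Fraction of impostor comparisons wrongly accepted as matches."""
    return false_matches / impostor_attempts

# Hypothetical audit counts for two demographic groups, evaluated
# against the same system-wide matching threshold.
audit = {
    "group_a": {"false_matches": 5,  "impostor_attempts": 10_000},
    "group_b": {"false_matches": 40, "impostor_attempts": 10_000},
}

rates = {
    group: false_match_rate(d["false_matches"], d["impostor_attempts"])
    for group, d in audit.items()
}

# A healthy-looking aggregate error rate can hide a large disparity:
# here group_b is misidentified eight times as often as group_a.
disparity = rates["group_b"] / rates["group_a"]
```

The point of the sketch is that a single accuracy figure averaged over all applicants can mask exactly the kind of demographic disparity the cited studies describe; audits therefore report error rates per group, not in aggregate.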

Another use of AI by IRCC is handling responses to client enquiries. Automated customer service systems face several disadvantages. The law is always evolving, and as technology continues to change, it might become more difficult to keep the system up to date. Inflexibility is likely, since IRCC cannot anticipate every question and program every answer in advance. Feedback is necessary to ensure the ongoing development and success of such a system, yet gathering it would be time-consuming for both employees and clients. Lastly, the system’s performance would require analysis and monitoring – creating more work for personnel.[8]

In conclusion, it is unclear at best whether IRCC’s use of AI will better meet the needs of clients and Canada, including by facilitating easier processing and reducing employee workload. If AI is used to produce reasons for decision composed of boilerplate statements, it might generate a higher number of unreasonable decisions, which in turn would increase the number of applications for judicial review – adding caseload and cost to both the judicial system and IRCC’s legal department. It is also uncertain whether AI is cost-beneficial given the concerns it presents regarding implementation and ongoing costs, cybersecurity threats and bias. The application of FRT to biometric processing involves well-studied risks. Lastly, the problems associated with using AI for client service would also need to be addressed. An analysis that takes into account the actual numbers and all relevant factors would need to be conducted before one could verify that AI will reduce overall costs.

[1] Edana Robitaille, “Minister Fraser clarifies how IRCC uses AI in application processing”: https://www.cicnews.com/2023/05/minister-fraser-clarifies-how-ircc-uses-ai-in-application-processing-0535198.html

[2] Canada (MCI) v. Vavilov, 2019 SCC 65 (CanLII), para. 79. (https://canlii.ca/t/j46kb)

[3] Zibadel v. Canada (MCI), 2023 FC 285 (CanLII), para. 49. (https://canlii.ca/t/jvxrk)

[4] Eray Eliacik, “Can the cons of AI outweigh its benefits?”: https://dataeconomy.com/2022/08/30/the-cons-of-artificial-intelligence/

[5] Canadian Civil Liberties Association, “Facial Recognition Explained: How is FRT Used in Canada?”: https://ccla.org/privacy/facial-recognition-explained-how-is-frt-used-in-canada/#:~:text=FRT%20is%20being%20increasingly%20used,to%20deport%20individuals%20from%20Canada.

[6] Christelle Tessano, “Facial Recognition Technology in Canada: A Brief Overview of Harms and Potential Benefits”: https://www.ourcommons.ca/Content/Committee/441/ETHI/Reports/RP11948475/ethirp06/ethirp06-e.pdf

[7] Alex Najibi, “Racial Discrimination in Face Recognition Technology”:  https://sitn.hms.harvard.edu/flash/2020/racial-discrimination-in-face-recognition-technology/

[8] George Lawton, “9 disadvantages of self-service options”: https://www.techtarget.com/searchcustomerexperience/tip/6-reasons-why-self-service-options-fail
