CLA Publications

Risks and Potential of Canadian AI Legislation and Practices


This report was prepared by Meaghan Kelly, an international intern with CLA - Voice in Bulgaria for the summer of 2023. Meaghan is a University of Toronto graduate student in European and Russian Affairs at the Munk School of Global Affairs, Centre for European, Russian and Eurasian Studies (CERES). She has extensive experience as an office and legal assistant and recently worked at a busy legal aid clinic assisting injured workers in Toronto, Canada. She has also provided communications and project support for qualitative public opinion research on top issues in Canadian politics and has worked multiple elections. Meaghan holds a BA in Cultural Studies with a specialization in Writing & Narrative and has also worked as a journalist, editor and proofreader.

Prepared for: Center for Legal Aid - Voice in Bulgaria

Prepared by: Meaghan Kelly

November 20, 2023


Across borders, artificial intelligence is being used as a tool in border control and migration processes. In the European Union, the Artificial Intelligence Act advanced through the European Parliament in June 2023. An extensive report by Voice in Bulgaria titled “Statement of “Voice in Bulgaria” on the “Regulation of the European Commission and of the Council laying down harmonised rules on Artificial Intelligence – Artificial Intelligence Act” and its application in Migration”[1] addresses this law in depth. The present report covers the Canadian migration context of artificial intelligence. In line with common practice and analyses, this report uses “AI” as a catch-all term that includes machine learning, predictive analytics, automated decision systems, advanced data analytics, automation, and related technologies. The primary context to consider in Canada right now is the development of risk-based legislation to regulate AI, alongside scrutiny of, and resistance to, tools that invade privacy and could even cause harm to non-citizens coming to or living in Canada.

Canada is in the process of developing a national regulatory framework on artificial intelligence, the Artificial Intelligence and Data Act (AIDA), intended to guide and legislate all sectors. The legislation is part of Bill C-27, known as the Digital Charter Implementation Act, 2022 (An Act to enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to make consequential and related amendments to other Acts).[2] The framework involves a two-year consultation plan, so the earliest possible date for the act to receive assent is 2025.

Artificial Intelligence in Canadian Immigration Practices

One of the objectives of AIDA is to regulate and standardize requirements for the design and development of AI systems and to prohibit and address potential harm and biased output. The Canadian legislation resembles the EU regulation in that it operates through a “risk-based approach.” Designating an AI system as “high impact” would involve multiple key factors, including the “severity of potential harms” and “imbalances of economic or social circumstances.”[3] AIDA provides some information and assurances on the use of automated technology in immigration applications. AI is not given the authority to reject claims and make final decisions. Immigration, Refugees and Citizenship Canada (IRCC) claims that automated systems only assist the judgement of human decision-makers. IRCC Minister Sean Fraser addressed this at a press conference: “just to dispel any fears that may exist, a human being is still responsible for every final decision.”[4] The IRCC currently uses AI tools like Chinook and Advanced Data Analytics (ADA) to process and assess applications, including study permits, work permits and temporary resident visas (TRVs). This is intended to “improve administrative decision-making processes, assist or replace personnel, increase efficiency and reduce the processing time for applications.”[5] AI is used as a “sorting mechanism” to “assist with the decision-making for visa applications by analysing an applicant’s personal information, including their work history and education.”[6] AI can improve expediency, which is a meaningful advantage, particularly if it means the reduction of long family separations. Since 2018, the use of AI to sort applications has reportedly made assessments up to 87 per cent faster.[5]


Beyond standard immigration processes, the Canadian government has sought an “Artificial Intelligence Solution” in even more sensitive arenas, such as Humanitarian and Compassionate applications and Pre-Removal Risk Assessments. This concerns some legal experts because the stakes are so high: these applications are often “a last resort by vulnerable people fleeing violence and war to remain in Canada.”[7] The current status of the use of AI in these applications is not clear. There is an absence of readily available literature on present-day operations and on the Canadian government’s intentions for how AI will be used in these vital, potentially life-saving processes.

The Standing Committee on Citizenship and Immigration, which oversees multiple immigration and refugee bodies in Canada’s parliament, stated that “these systems are not used to refuse applications or deny entry to Canada” and that the IRCC does not use “black-box algorithms”[8] or “complex algorithmic systems that make decisions in unknowable or unexplainable ways.”[9] A black-box algorithm is “one where the user cannot see the inner workings of the algorithm.”[10] Journalists, refugee lawyers and academics have also used the “black box” as a metaphor for the workings of the IRCC and its enforcement wing, the Canada Border Services Agency (CBSA). These groups have submitted Freedom of Information requests “but have found their efforts blocked or delayed indefinitely.”[11] The CBSA is viewed as especially opaque by refugee lawyers, journalists and academics: “The CBSA operates with little transparency under the cover of national security and border control rationales, and without meaningful mechanisms of oversight.”[12]

Human Rights Considerations

A lack of oversight is dangerous and worrying given key problems in Canada’s immigration and refugee system, such as the indefinite detention of claimants. Canada, unlike the EU or the United States, sets no time limit on immigration detention, with devastating consequences for the incarcerated individuals.[13] Despite statements that AI is not a decision-maker, the high vulnerability of refugee claimants still raises alarm for human rights observers, as automated tools have access to biometrics and data that could generate biased outputs and influence decision-making. The ground-breaking report “Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System,” by experts on refugee law, human rights and artificial intelligence from the University of Toronto International Human Rights Program and the Munk School of Global Affairs and Public Policy, skillfully lays out the legal context, highlighting how this “experimental” technology is being deployed on already vulnerable populations.[14]

Flawed decisions made with the involvement of automated processes could still cause significant harm, as they do when made by human officials, and bias can be programmed in by biased individuals. Issues of bias are at the forefront of concerns for human rights observers and legal experts, who have pointed to a lack of protections from AI technologies such as facial recognition technology (FRT) and to open questions about how data collection, consent and biometric data will be managed. For example, the Canada Border Services Agency (CBSA) received $656 million from the government to develop and use new technologies, such as facial recognition, in border security.[15] The CBSA has spent considerable resources on facial recognition software, though the current trajectory of these projects is unclear.

An investigation by the Privacy Commissioner of Canada in 2021 found that the tech company Clearview AI had collected highly sensitive biometric information without the knowledge or consent of individuals.[16] This type of technology has had detrimental consequences for refugees, even those already in Canada with a successful claim. One example is an African woman who had her refugee status revoked when she went to update her driver’s licence photo identification: FRT falsely matched her image and identity to another woman in the database, and the Ministry of Transportation reported this to the IRCC. This example further demonstrates how refugees, migrants and asylum seekers are especially at risk.

Border officials have claimed that facial recognition is not used on asylum seekers; lawyers have argued otherwise based on their cases. For example, two Somali women were granted refugee status in Canada, one in 2017 and the other in 2018, but were later stripped of that status after facial recognition practices identified them as Kenyan. Fortunately, they appealed and won in court. Their lawyers pointed out, and this must be considered across the board in consultations, that “facial recognition software is unreliable and particularly flawed in identifying darker-skinned females in research studies.”[17] The bias is stark: facial recognition technology has a 34% error rate in identifying Black women, compared to 0.8% for white males.[18] While these privacy violations affect all Canadians and residents of Canada regardless of status, the ramifications for non-citizens could be especially severe.

Invasive Technology

A lie detection system called the Automated Virtual Agent for Truth Assessment in Real Time, or AVATAR, was tested at the CBSA Science Laboratory by Border Services and was a cause for significant concern. After testing, however, the agency stated that it is not currently using the system at the border [emphasis added].[19] The nature of the testing, and how that conclusion was reached, is not publicly available, nor is it clear whether shelving lie detection systems is a temporary or permanent decision. While expediting applications is important, there need to be more assurances and evidence that refugees and migrants will not be targeted, experimented on and policed via artificial intelligence.

It is even more frightening when invasive technologies are wielded by the RCMP, Canada’s national police force and an agency responsible for border security and monitoring ports of entry. The RCMP’s website on border security states: “Using the latest and most advanced technology to monitor the border, the RCMP communicates with other law enforcement agencies on both sides of the border, and enhances the efficiencies of operations and intelligence gathering.”[20] The RCMP has enormous power over the lives of refugees and immigrants crossing the border, including the power to monitor and detain, with access to their personal and biometric data. It is therefore especially alarming that the Canadian Privacy Commissioner “found that the RCMP committed a ‘serious violation’ of Canadians’ privacy by conducting searches of Clearview AI’s facial recognition database, which contains billions of photos of people scraped from the internet, including from social media sites.”[21] A related Office of the Privacy Commissioner investigation found that Clearview AI was conducting mass surveillance through the non-consensual collection of biometric data.[22] Clearview AI is no longer operating in Canada, with the RCMP its last client here as of July 2020, but the threat of similar projects remains.


Civil society organizations and observers should closely follow Canada’s pathway, especially because Canadians are creators and beneficiaries of AI technology. Canada is not just a user of digital technologies but a developer, home to start-ups, research labs and investors.[23] There is a domestic economic advantage, and with it a responsibility for Canada to manage AI ethically. Canadian law and practice in relation to artificial intelligence and refugee law will have international ramifications. Recognition of potential harm, and consultation into how to mitigate it, is one welcome element of AIDA. The Canadian magazine The Walrus interviewed legal expert Petra Molnar, co-author of “Bots at the Gate,” who frequently appears in the media and writes reports and opinion pieces on artificial intelligence, refugees and borders. She is “concerned about financial interests playing a role in determining which systems get implemented in border control and how. Governments rely on private companies to develop and deploy tech to control migration, meaning government liability and accountability are shifted to the private sector.”[24] This analysis is important to consider in the introduction of AIDA. Given the track record of border services in Canada and the lack of transparency around the status of these tools and their uses, concrete and well-executed legislation and policy are needed to ensure that the human rights of migrants will not be violated. Hopefully AIDA can work towards this, but this should not be taken at face value.


[1] “Statement of ‘Voice in Bulgaria’ on the ‘Regulation of the European Commission and of the Council laying down harmonised rules on Artificial Intelligence – Artificial Intelligence Act’ and its application in Migration,” Voice in Bulgaria, June 18, 2023.

[2] “Bill C-27: An Act to Enact the Consumer Privacy Protection Act, the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act and to Make Consequential and Related Amendments to Other Acts,” Government of Canada, November 4, 2022.

[3] “The Artificial Intelligence and Data Act (AIDA) – Companion document,” Government of Canada, last modified March 13, 2023.

[4] “Immigration minister says AI isn’t making final immigration decisions,” May 26, 2023.

[5] Reeva Goel and Sergio Karas, “Artificial intelligence and Canada’s immigration system,” April 20, 2023.

[6] “Data security and bias among primary concerns with AI in immigration law: Sergio Karas,” April 3, 2023.

[7] “Governments’ use of AI in immigration and refugee system needs oversight,” October 16, 2018.

[8] “CIMM – Question Period Note - Use of AI in Decision-Making at IRCC – November 29, 2022,” Government of Canada, August 25, 2022.


[10] “What is Black box algorithm,” Arimetrics, accessed October 28, 2023.

[11] Hilary Beaumont, “When Border Security Crosses a Line,” The Walrus, February 17, 2021.

[12] Jamie Liew and Petra Molnar, “Clear safeguards needed around technology planned for border checkpoints,” May 5, 2021.

[13] Natasha Anzik and Norman Yallen, “Rectifying the Wrongs of Indefinite Immigration Detention in Canada,” Asper Centre, University of Toronto, undated.

[14] Citizen Lab, “Bots at the Gate,” 2018.

[15] “Clear safeguards needed around technology planned for border checkpoints,” CBC.

[16] “Clearview AI’s unlawful practices represented mass surveillance of Canadians, commissioners say,” Office of the Privacy Commissioner of Canada, last modified February 23, 2021.



[18] “Study finds gender and skin-type bias in commercial artificial-intelligence systems.”

[19] Jamie Liew and Petra Molnar, “Clear safeguards needed around technology planned for border checkpoints,” May 5, 2021.

[20] “Border Security,” Royal Canadian Mounted Police, last modified October 12, 2018.

[21] Nicholas Keung, “Did Canada use facial-recognition software to strip two refugees of their status? A court wants better answers,” Toronto Star, September 19, 2022.

[22] “Clearview AI’s unlawful practices represented mass surveillance of Canadians, commissioners say,” Office of the Privacy Commissioner of Canada.

[23] “Canada concludes inaugural plenary of the Global Partnership on Artificial Intelligence with international counterparts in Montréal,” Government of Canada, December 4, 2020.

[24] Hilary Beaumont, “When Border Security Crosses a Line,” The Walrus, February 17, 2021.