Maria Clara Pina (master’s student in Human Rights at the School of Law of the University of Minho)
I.
Currently, in the so-called era of techno-solutionism,[1] digital technologies, including Artificial Intelligence (AI), have become widely used.[2] We are witnessing the emerging but rapidly evolving phenomenon of border management and control through the use of new technologies[3] and automated individual decision-making (Article 22 of the General Data Protection Regulation, henceforth “GDPR”),[4] which employ AI and promise faster and more efficient decisions. However, these systems have the potential to harm human rights. Migration is becoming a transaction that requires migrants to exchange biometric and biographical data for access to resources or a jurisdiction – and even to be seen as people[5] with inherent rights and dignity.
At the same time, the number of migrants in the European Union (EU)[6] is growing, making it worthwhile to analyse the impact of these technologies and their regulation (or lack thereof), given their inevitable and rapid evolution and, above all, the enduring character of the migratory phenomenon and the vulnerability inherent in the status of migrant. In this context, complex legal challenges arise, requiring an analysis of the EU regulatory framework on the use of AI in the context of border management, asylum and migration, considering the main gaps within the AI Act[7] and its far-reaching implications for the human rights of migrants.
II.
The AI Act stands as the first comprehensive regulatory instrument on AI, positioning the EU at the forefront of global AI governance.[8] Its emergence is closely linked to rapid technological advances, enhanced by the progress of machine learning, the ability to train algorithms, and the availability of extensive databases. This instrument is an integral part of the European Digital Strategy,[9] aiming at digital innovation and the development and implementation of technologies that improve daily life – which reflects a trust-based and human-centred approach.
Moreover, the Regulation paves the way for AI to be placed at the service of human progress,[10] ensuring additional protection of fundamental rights, such as the protection of privacy and personal data, asylum, non-refoulement, non-discrimination and effective judicial protection [Articles 7, 8, 14, 19, 21 and 47 of the Charter of Fundamental Rights of the European Union, henceforth “CFREU”; Recital 6 and Article 1(1) of the AI Act], while prohibiting the use of AI systems to circumvent international obligations arising from the 1951 Geneva Convention[11] and the 1967 Protocol.[12]
The AI Act follows a proportionate risk-based approach (Recital 26 of the AI Act), imposing a gradual scheme of restrictions and obligations on providers and users of AI[13] systems (Article 2 of the AI Act), depending on the risk that their application entails for health, safety or fundamental rights.[14]
III.
AI systems posing unacceptable risks are prohibited (Recital 28 of the AI Act). However, such prohibition is not absolute (Article 5 of the AI Act), as it allows controversial exceptions that have fuelled intense political debates.[15]
Accordingly, assessments of natural persons intended to evaluate or predict the risk of a person committing a crime, based solely on their profiling [Article 3(52) of the AI Act and Article 4(4) GDPR] or on the assessment of their personality traits and characteristics, are prohibited [Article 5(1)(d) of the AI Act]. This prohibition aligns with the presumption of innocence (Recital 42) and is relevant in the context of the Visa Code,[16] where entry or the granting of a visa to a third-country national may be denied if that person is considered a threat to public order or internal security [Article 32(1)(a)(vi) of the same Code].[17] While profiling is exceptionally permitted by the GDPR, the use of AI for this purpose is prohibited. A person should only be considered a suspect of a crime if such suspicion is based on a human assessment of objective facts (Recital 42).[18]
Furthermore, AI systems for creating or expanding facial recognition databases by randomly collecting images from the Internet or CCTV footage are prohibited [Article 5(1)(e) of the AI Act], protecting privacy and preventing mass surveillance.[19]
Biometric categorisation systems for individuals (Recital 16 of the AI Act) based on their biometric data [Recital 14 and Article 3(34) of the AI Act] to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sexual life or orientation are also prohibited [Recital 30 and Article 5(1)(g) of the AI Act], except for the labelling and filtering of lawfully acquired biometric data sets in the field of law enforcement. This distinction is particularly relevant as EU Member States are increasingly using technology to test the safety and identities of asylum seekers.[20]
Finally, real-time remote biometric identification systems [Article 3(41) of the AI Act] in publicly accessible spaces are prohibited [Article 5(1)(h) of the AI Act], except when deemed necessary for specific purposes outlined in the Regulation [Article 5(2) of the AI Act].
It should be noted that the list of prohibited AI uses and systems is not exhaustive, and these practices may also be prohibited by other legal instruments. In particular, the limited prohibition of decisions based solely on the automated processing of personal data (Article 22 of the GDPR) should be highlighted as a prohibition that is relevant to the use of AI systems but falls outside the scope of the AI Act. In addition, general prohibitions, such as the prohibition of discrimination, apply hand in hand with the AI Act.[21]
IV.
High-risk AI systems (Recital 46 and Article 6 of the AI Act), though not prohibited, can pose serious risks to the health, safety or fundamental rights of individuals in the Union (Recital 48 of the AI Act) and are intended to be used by, or on behalf of, competent authorities, EU institutions, bodies, offices or agencies. This includes the AI systems (Recital 52 of the AI Act) listed in Annex III(7) of the AI Act [Article 6(1) of the AI Act].
In the area of migration, asylum and border control, ensuring accuracy, transparency and non-discrimination in decision-making is essential. As acknowledged in the Regulation, AI systems deployed in this domain affect individuals who are in a particularly vulnerable position and who are dependent on the outcome of the actions of the competent public authorities (Recital 60 of the AI Act).
Such systems include polygraphs or similar tools [Annex III(7)(a) of the AI Act] and systems to assess the security, irregular migration or health risk of a person who has entered or wishes to enter the EU territory [Annex III(7)(b) of the AI Act]. Additionally, systems to assist the competent public authorities in analysing applications for asylum, visas and residence permits and related complaints regarding the eligibility of individuals applying for a certain status – including related assessments of the reliability of evidence – are also considered high-risk [Annex III(7)(c) of the AI Act].
Lastly, AI systems for the purpose of detection, recognition or identification of natural persons other than the verification of travel documents fall within the high-risk category [Annex III(7)(d) of the AI Act]. In this context, Automatic Border Control systems are employed to compare the facial features of the person seeking entry with the photograph stored on the identity document, as well as with biometric data stored in large-scale data systems. While these systems do not yet fully integrate AI, a Frontex[22] report suggests exploring its potential to detect threats such as morphing attacks.[23]
Given the possible serious implications of these AI systems, they must comply with stricter requirements, such as risk management (Article 9 of the AI Act), transparency (Article 13 of the AI Act), human oversight (Article 14 of the AI Act), cybersecurity, accuracy and robustness (Article 15 of the AI Act), data quality and governance, as well as training and testing of the systems (Article 10 of the AI Act), registration in an EU database (Articles 49 et seq. of the AI Act), and a prior assessment of the impact on fundamental rights (Article 27 of the AI Act).
In addition to the obligations imposed, the AI Act safeguards individual rights for people affected by such systems. Notably, it guarantees the right to clear and meaningful explanations about the role of the AI system in the decision-making process and the main elements of the decision (Article 86 of the AI Act). This serves as a guarantee of the right to effective judicial protection (Article 47 CFREU), reaffirmed by the Court of Justice of the European Union (CJEU) in the Ligue des Droits Humains case,[24] which addressed the automated risk assessment based on the PNR system.[25] The CJEU reinforced the need for transparency and access to information, having determined that the affected person must be able to understand how the decision criteria and programs used operate, in order to decide, with full knowledge of the relevant facts, whether to contest their unlawful or discriminatory nature.[26]
V.
Although the AI Act represents a significant step, we nevertheless argue that it falls short in certain areas, with potential detrimental impacts on the human rights of migrants. In fact, the list of prohibited AI systems seems far from complete. Some high-risk systems, due to the unacceptable risks that arise from them, should be prohibited. On the other hand, the list of high-risk systems also appears to be incomplete, and problems persist in ensuring transparency and human supervision in the migration sphere.
We therefore argue that both the list of prohibited AI systems and Annex III of the AI Act should be amended, under Articles 7, 97 and 112 of the AI Act, which allow the Commission to update those lists, ensuring that the legislation remains aligned with technological advances and societal needs. We will address some of the limitations, gaps and exceptions of the Regulation in the following sections.
VI.
Large-scale IT systems (Annex X of the AI Act) were originally built for more restricted purposes, but over the years, and through various legislative changes, their purposes have expanded. These systems have become increasingly focused on border control and interact within a framework of interoperability,[27][28] which leads to personal data being widely shared between systems, government departments and States in the EU’s integrated border management.[29] The ultimate aim of these systems is to safeguard and promote the EU’s general objectives of enhancing security, facilitating cooperation and promoting the free movement of people between Member States.
An example of automated risk assessments, algorithmic profiling of third-country nationals, and interoperability of systems is the European Travel Information and Authorisation System (ETIAS),[30] which is expected to become operational by mid-2025,[31] and requires a pre-screening of travellers to determine whether they pose security, irregular migration or health risks. In this system, applications will undergo background checks against data already present in systems such as SIS, VIS, Eurodac, EES, ECRIS-TCN and ETIAS itself, Europol and certain Interpol databases. In addition, certain personal data will be compared with screening rules developed by Frontex, enabling the profiling of third-country nationals [Article 4(4) of the GDPR] based on risk indicators. If the comparison triggers an alert, the application must be processed manually by the ETIAS National Unit of the responsible Member State (Articles 21 and 22 of the ETIAS Regulation).[32]
So far, this system does not involve AI within the meaning of Article 3 of the AI Act, but a report by eu-LISA[33] suggests using AI systems to detect suspicious applications. It should be noted that, for these large-scale systems, the requirements of the AI Act, without prejudice to the application of Article 5, will only take effect in 2030 (Article 111 of the AI Act). This gap, although understandable, since the interoperability architecture is still under construction, can be problematic due to the lack of transparency and the wide leeway for the use of AI systems in this context,[34] further threatening the fundamental rights of migrants.
This type of system raises concerns regarding the rights to privacy and data protection (Article 8 of the CFREU). Large amounts of personal data are collected, stored, cross-referenced and analysed, which encourages the continuous collection and examination of personal data in automated risk assessment systems. This encompasses diverse types of data, such as social media activity, financial transactions and location information.[35] These practices may become intrusive, and, consequently, must be guided by transparency and adhere to well-defined, legitimate purposes [Article 5(1)(a) of the GDPR].
Furthermore, sometimes neither the developer nor the user fully understands the reasons that lead to certain results.[36] Even when the reasoning behind specific outcomes is clear to those developing the system, this does not necessarily ensure the level of transparency required for migrants, potentially compromising their right to effective judicial protection (Article 47 of the CFREU). In fact, in these cases, complaint mechanisms will be insufficient to protect individual rights, since those who do not have complete access to the underlying data and logic of the system will be unable to contest it.
Additionally, a well-known problem associated with AI systems and automated risk assessment is the possibility of perpetuating or reproducing discrimination (Article 21 of the CFREU). For instance, a profiling system based on variables such as nationality, gender and age is used to calculate the score of short-stay visa applicants wishing to enter the Netherlands and the Schengen area. If the system classifies the applicant as high-risk, authorities will investigate them further, often resulting in delays and discriminatory bias. In fact, obtaining visas for family members of Dutch, Moroccan and Surinamese citizens has proven difficult.[37] It is clear that if these systems are trained on historical data, such as non-automated decisions made by agents in visa procedures to identify potential irregular immigrants, there is a risk of reproducing the discrimination underlying those human decisions, which are often based on ethnic and racial profiling. Finally, these systems can make mistakes,[38] which can lead to unfair discrimination, culminating in the undue denial of entry or the incorrect risk classification of the migrant.
Considering the various risks outlined and the ongoing evolutionary trend, we believe that the prohibition of automated risk assessment should not have been restricted to cases involving the prediction of the risk that an individual will commit a criminal offence. In fact, although the automated assessment of risks to security, irregular migration or health, in the context of migration, is considered high-risk and must meet certain requirements, it is concerning that such practices are permitted in a context characterised by deep-rooted ethnic, racial and gender discrimination and the heightened vulnerability of migrants.
VII.
Although seemingly less intrusive on fundamental rights, the list of high-risk systems should have included those AI systems aimed at predicting migration trends and border crossings. For example, the European Asylum Support Office (EASO) – since 2022, EUAA[39] – developed the Early Warning and Preparedness System, designed to forecast migration flows into EU territories. This system relies on data sources such as GDELT (information on events by country of origin), Google Trends (weekly online search trends by country of origin), Frontex (monthly detections of irregular border crossings) and internal data on the number of asylum applications and recognition rates in EU Member States. The algorithm seeks to anticipate which events will cause large-scale displacement and estimate the subsequent number of asylum applications in the EU.[40]
On the one hand, by predicting the arrival of migrants, these systems can enable efficient preparation for the arrival of people and allow resources to be reallocated according to reception needs. On the other hand, they can facilitate preventive responses intended to thwart migratory movements, through measures obstructing the access of migrants and asylum seekers to the territory of a State.[41]
Non-entry policies encompass visa checks, carrier sanctions, the establishment of international zones, and maritime interceptions on the high seas, and AI technologies can be central to each of these policies. However, this creates room for the reinforcement of illegal non-refoulement practices, such as through specific maritime interventions aimed at returning migrants and asylum seekers to places where they may fear for their lives or freedom, without giving them the chance to even apply for asylum. AI runs the risk of becoming yet another political tool, used to reinforce old state practices aimed at containing international migration and preventing asylum seekers from reaching their territories.[42] Consequently, these systems must be subject to strict regulation.
VIII.
As previously mentioned, the prohibition of real-time remote biometric identification systems has exceptions. Their use is permitted when necessary for the search for victims of kidnapping, human trafficking or sexual exploitation, the search for missing persons, the prevention of threats to the life or physical safety of natural persons and of terrorist threats, or for the localisation and identification of a person suspected of a criminal offence [Articles 5(1)(h) and (2) of the AI Act]. Given that violations of immigration law are widely treated as criminal offences, and that individuals accessing the EU may be victims of trafficking or have their lives at risk, any of these exceptions could be (mis)used to justify mass biometric surveillance of third-country nationals. The use of these systems requires prior authorisation by an independent judicial or administrative authority [Article 5(3) of the AI Act], which is an important safeguard. However, it is still unclear which authorities may be involved.[43]
Article 14 of the AI Act establishes that high-risk systems must be overseen by at least two natural persons. This aims to guarantee that AI systems are evaluated impartially and responsibly, ensuring the review of automated decisions and avoiding biases and injustices. Yet, under paragraph 5, oversight by at least two natural persons for the purposes of migration, border control or asylum does not apply where its application would be disproportionate – without, however, clearly explaining which criteria or conflicting interests justify such an exception. This oversight, which is particularly important in sensitive areas such as migration and asylum, is essential to ensure the effective protection of fundamental rights. The lack of a clear justification for the exception creates a regulatory vacuum that can be exploited abusively, allowing the application of sometimes incorrect automated decisions without due human accountability. The lack of adequate oversight in these areas can lead to unfair and discriminatory decisions, which seriously affect the lives of migrants and refugees, without room for contestation (Article 47 of the CFREU).
Finally, high-risk systems must be registered in the EU database (Article 71 of the AI Act). However, in the area of migration and border management there is an exemption from public registration [Article 49(4) of the AI Act] and from publishing a summary of the AI project developed [Article 59(1)(j) of the AI Act]. While this solution reflects a security-focused approach, it increases the already alarming opacity surrounding the use of AI in migration, preventing public scrutiny and adequate monitoring of the impacts of these systems on the lives of migrants.
IX.
Despite its limitations, the AI Act offers a unique opportunity to advance ethical and inclusive regulation of AI. Therefore, a coordinated effort is essential to ensure that the inevitable technological innovation does not come at the expense of fundamental rights. Stricter measures are recommended, such as the prohibition of intrusive and scientifically unfounded systems, the expansion of high-risk categories, as well as ensuring human oversight and transparency of systems and decisions taken based on them. The impact of these technologies on the lives of migrants requires that the use of AI systems be guided by the protection of fundamental rights, in order to build a truly fair and inclusive European migration system and ensure the non-replication of structural biases. This is a challenge that the EU cannot ignore, especially at a time when the balance between security, technological innovation and fundamental rights has never been more relevant.
[1] Niovi Vavoula, “Artificial Intelligence (AI) at Schengen borders: automated processing, algorithmic profiling and facial recognition in the era of techno-solutionism”, European Journal of Migration and Law (2021), accessed January 26, 2025, https://ssrn.com/abstract=3950389.
[2] A. Beduschi and M. McAuliffe, “Artificial intelligence, migration and mobility: implications for policy and practice”, in World Migration Report, eds. M. McAuliffe and A. Triandafyllidou [Geneva: International Organization for Migration (IOM), 2022], accessed January 19, 2025, https://www.publications.iom.int.
[3] Jane Kilpatrick and Chris Jones, A clear and present danger: Missing safeguards on migration and asylum in the EU’s AI Act (2022), 4, accessed January 26, 2025, https://www.statewatch.org.
[4] See Regulation (EU) 2016/679 of the EP and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data.
[5] Lucia Nalbandian, “An eye for an ‘I’: a critical assessment of artificial intelligence tools in migration and asylum management”, Comparative Migration Studies, v. 10, no. 32 (2022), accessed January 26, 2025, https://comparativemigrationstudies.springer.com.
[6] EUAA – European Union Agency for Asylum, Asylum Report 2023 (2023), 20, accessed January 26, 2025, doi: 10.2847/82162.
[7] See Regulation (EU) 2024/1689 of the EP and of the Council of 13 June 2024 laying down harmonised rules on AI.
[8] European Parliament, “Lei da UE sobre IA: primeira regulamentação de inteligência artificial” [“EU AI Act: first regulation on artificial intelligence”], 2023, accessed January 26, 2025, https://www.europarl.europa.eu/topics/pt/article/20230601STO93804/lei-da-ue-sobre-ia-primeira-regulamentacao-de-inteligencia-artificial. European Commission, “AI Act”, accessed January 26, 2025, https://digital-strategy.ec.europa.eu.
[9] European Commission, “Shaping Europe’s digital future”, accessed January 26, 2025, https://commission.europa.eu.
[10] Inga Ulnicane, “Artificial intelligence in the European Union: policy, ethics and regulation”, in The Routledge Handbook of European Integrations, eds. T. Hoerber, I. Cabras and G. Weber (London: Routledge, 2022), 259, doi: 10.4324/9780429262081-19.
[11] Adopted in 1951, available at https://dcjri.ministeriopublico.pt.
[12] Signed on January 31, 1967 and in force since October 4, 1967, available at https://dcjri.ministeriopublico.pt.
[13] A machine-based system designed to operate with varying levels of autonomy, which may exhibit adaptability after deployment and which, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments [Article 3(1) of the AI Act].
[14] Alessandra Silveira and Maria Inês Costa, “Regulating Artificial Intelligence (AI): on the civilisational choice we are all making”, UNIO – The Official Blog, July 17, 2023, accessed January 26, 2025, https://officialblogofunio.com/2023/07/17/editorial-of-july-2023/.
[15] Luca Bertuzzi, “AI Act: EU Parliament’s discussions heat up over facial recognition, scope”, Euractiv, 2022, accessed January 20, 2025, https://www.euractiv.com. Luca Bertuzzi, “AI Act: EU policymakers nail down rules on AI models, butt heads on law enforcement”, Euractiv, 2023, accessed January 20, 2025, https://www.euractiv.com.
[16] See Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on Visas (Visa Code).
[17] Evelien Brouwer, “EU’s AI Act and migration control. Shortcomings in safeguarding fundamental rights”, VerfBlog, 2024, accessed January 26, 2025, https://dx.doi.org/10.59704/a4de76df20e0de5a.
[18] Paul Voigt and Nils Hullen, The EU AI Act: Answers to frequently asked questions (Berlin: Springer, 2024), 42, doi: 10.1007/978-3-662-70201-7.
[19] Voigt and Hullen, The EU AI Act, 42.
[20] Brouwer, “EU’s AI Act”.
[21] Voigt and Hullen, The EU AI Act, 38.
[22] Frontex, Artificial Intelligence-based capabilities for the European border and coast guard: final report (2021), 28-29, accessed January 26, 2025, https://www.frontex.europa.eu.
[23] Technologies by which a traveller intentionally attempts to be misidentified or misclassified by the biometric recognition system.
[24] Judgment CJEU Ligue des droits humains, 21 June 2022, Case C-817/19.
[25] See Directive (EU) 2016/681 of the European Parliament and of the Council of 27 April 2016 on the use of passenger name record (PNR) data for the prevention, detection, investigation and prosecution of terrorist offences and serious crime.
[26] Brouwer, “EU’s AI Act”.
[27] See Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of borders and visa and Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing a framework for interoperability between EU information systems in the field of police and judicial cooperation, asylum and migration.
[28] European Commission, “Overview of information management in the area of freedom, security and justice”, October 20, 2010, accessed January 26, 2025, https://eur-lex.europa.eu.
[29] Yiran Yang et al., “Automated Decision-making and Artificial Intelligence at European Borders and Their Risks for Human Rights”, SSRN, Working Draft (2024): 15, doi: 10.2139/ssrn.4790619.
[30] This system is proof of the paradigm shift towards the aforementioned techno-solutionism, placing trust in technologies as a modern means of responding to the emergence of new forms of security threats, illegal immigration patterns and epidemic risks (Recital 29 of the ETIAS Regulation).
[31] ETIAS, “ETIAS will launch 6 months after EES rollout, official website updates”, 2024, accessed January 26, 2025, https://etias.com.
[32] See Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing a European Travel Information and Authorisation System (ETIAS).
[33] eu-LISA, Artificial intelligence in the operational management of large-scale IT systems. Research and technology monitoring report: perspectives for eu-LISA (Brussels: EU Publications Office, 2024), 30.
[34] Niovi Vavoula, “Regulating AI at Europe’s border: where the AI Act falls short”, Verfassungsblog, December 13, 2024, accessed January 26, 2025, https://verfassungsblog.de/regulating-ai-at-europes-borders/.
[35] Yang et al., “Automated”, 20.
[36] Evelien Brouwer, “Schengen and the Administration of Exclusion: Legal Remedies Caught in between Entry Bans, Risk Assessment and Artificial Intelligence”, European Journal of Migration and Law, v. 23 (2021): 485-507, doi: 10.1163/15718166-12340115.
[37] Nalinee Maleeyakul et al., “Ethnic Profiling”, Lighthouse Reports, 2023, accessed January 26, 2025, https://www.lighthousereports.com.
[38] P. Møhl, “Biometric technologies, data and the sensory work of border control”, Ethnos, v. 87, no. 2 (2022): 241-256, doi: 10.1080/00141844.2019.1696858.
[39] See Regulation (EU) 2021/2303 of the European Parliament and of the Council of 15 December 2021 on the European Union Agency for Asylum and repealing Regulation (EU) No 439/2010.
[40] Derya Ozkul, Automating Immigration and Asylum: The Uses of New Technologies in Migration and Asylum Governance in Europe (Oxford: Refugee Studies Centre, University of Oxford, 2023), 15.
[41] Brouwer, “EU’s AI Act”.
[42] Ana Beduschi, “International migration management in the age of artificial intelligence”, Migration Studies, v. 9, no. 3 (2020): 576-596, doi: 10.1093/migration/mnaa003.
[43] Vavoula, “Regulating”.
Picture credit: Markus Spiske on pexels.com.