The EU’s Artificial Intelligence Act came into force in August 2024. Yet as Gloria González Fuster explains, there remain more questions than answers about how the EU’s approach to AI will function in practice.
If there was a race to regulate Artificial Intelligence (AI), then the European Union won it. Perhaps there was no race, or it existed only in the political agendas of the EU’s policymakers. Nevertheless, an urgency to regulate AI was felt in Brussels as early as 2019. It was then that Ursula von der Leyen, not yet elected President of the European Commission, announced she would put forward legislation on AI in her first 100 days in office.
The EU eventually pushed hard for the adoption of rules in the area and managed to get pioneering horizontal regulation in place before anyone else. In 2025, however, we are still seeking clarity about what these rules mean and where exactly all this is taking us.
A turning point in AI governance
It was in 2021 that the European Commission published a legislative proposal to regulate AI. The proposal heralded a turning point in AI governance, driving it decidedly towards the legal realm.
Until then, the discourse around AI had been marked by references to ethics and the need for some vaguely contoured “ethical AI”. With its proposal, the European Commission embraced the need for hard law. It also acknowledged that if the EU wished to promote AI – which it undoubtedly did and still does – it had to find a way to square that vigorous endorsement with the fact that AI can have a negative impact on fundamental rights, such as privacy, data protection, non-discrimination and freedom of expression, to name a few.
The AI Act’s relationship with fundamental rights is often misunderstood. Looking at the broader picture of the EU’s take on AI, the AI Act stands out as the instrument that has as its prime goal the anchoring of AI in EU values, framing and constraining its deployment with respect to the EU’s fundamental rights.
Other European measures abound with different goals, centred on supporting AI – from data spaces to AI factories. At the same time, it is true that the AI Act itself is far from being exclusively concerned with fundamental rights. Its explicit aim is to improve the functioning of the internal market and promote the uptake of “human-centric and trustworthy” AI, while ensuring a high level of protection of “health, safety, fundamental rights” against AI’s harmful effects, and “supporting innovation”.
That is quite a goal, bringing together seemingly disparate elements that pull in almost opposite directions. This heterogeneity is mirrored throughout the AI Act’s provisions: here, product safety rubs shoulders with the rule of law; very diverse actors, including international standardisation bodies and national human rights organisations, are given roles; fundamental rights impact assessments are brought in for some (limited) scenarios, waving at the EU Charter; and “regulatory sandboxes” designed to help innovators innovate are also put forward, gently bowing before the innovation imperative.
Deferred clarity
The AI Act’s risk-based approach exemplifies its pragmatism. The premise is that some AI systems involve risks too high for a democratic society to afford and should thus simply be banned – except when they are prohibited but nevertheless allowed. This is the case, basically, for the use of facial recognition in public spaces by the police, probably the Act’s least subtle Trojan horse.
Somewhere below these systems – in what has now become a popular pyramid-like image – and constituting the core of what the AI Act regulates in most detail, sit the AI systems regarded as “high-risk”. This is a kind of risk the EU legislator considers we should actually take, provided that deployers and developers comply with certain rules.
The pragmatism and composite nature of the AI Act might appear distracting, if not disturbing. It is not, as such, a major novelty in EU law, which is relatively accustomed to protecting fundamental rights with one hand while seemingly doing the opposite with the other (EU data protection law was built on a similar rock, and it is doing well). What really sets the AI Act apart from other pieces of EU digital law is rather the way in which, despite regulating a novel field – or precisely because of that – it defers clarity on definitions, rules and solutions to a later point in time. Sometimes deliberately, sometimes not.
A marathon after all
When in the spring of 2024 the EU legislator reached an agreement on the final text of the AI Act, almost everyone in Brussels seemed to be very proud of themselves. It was certainly an achievement for the Spanish presidency, which had tried hard to portray itself as digitally inclined, and von der Leyen could be glad to have won her own game of being first at regulating AI. Negotiators were still patting themselves on the back when some of the cracks in the negotiated result started to become visible.
At this stage – that is, with part of the AI Act already applicable – the most visible cracks are the many open questions about how to interpret some of its provisions, including the very definition of an AI system, or what constitutes the “AI literacy” that many should by now have acquired.
Real solutions for these gaps in legal certainty will probably only be found once there is clarity about which authorities are responsible for the AI Act’s implementation and enforcement, something that is still very much a work in progress. For now, patches are surfacing in rather sketchy ways, which prompts the question of what the point was in rushing to regulate AI with hard law if that hard law is then applied through sloppy soft-law initiatives.
Much of the pressure is now on the European Commission. Partially disguised as an AI Office, the Commission managed to grab significant power under the AI Act, and it is not yet clear whether such eagerness amounted to over-commitment. Under the AI Act, the Commission not only has the power to adopt significant delegated acts but also, notably, the responsibility to issue meaningful guidance that should help everyone understand their rights and obligations, ideally on time.
The publication of its Guidelines on prohibited AI practices only after such practices had actually been prohibited, coupled with the Commission’s insistence that the Guidelines might be unilaterally amended or even withdrawn at any time, is not a reassuring sign. Hopefully the pace will improve, so the EU can also win the game of applying its own rushed ideas quickly and effectively, showing it has understood that regulating AI was always going to be a marathon.
Note: This article gives the views of the author, not the position of EUROPP – European Politics and Policy or the London School of Economics.