Current events in the US, especially the takeover of executive agencies by the unelected private citizen Elon Musk, rightly described as an “AI Coup”, have left legal scholars and other constitutional experts in a state of shocked disbelief. From a European perspective, many consider such a development unthinkable. We should not be too certain about that, however. The EU Commission’s recent decision to take a “de-regulatory turn” illustrates how strongly a technical innovation narrative – one that has contributed to the success of individuals like Musk and their corporate conglomerates – is catching on globally: AI means innovation, innovation is desirable without restriction, and regulation hinders it. In this text, I argue that the Commission’s decisions are precisely the wrong response to global developments and that the much-criticised bureaucracy fulfils important constitutional purposes.
The AI innovation narrative
Part of the innovation narrative is what I call the epistemic capture of AI – the dominance of certain expert knowledge in regulatory debates. This specialised expertise carries particular weight when only a limited number of companies are capable of developing the very products subject to regulation – such as the large foundation models that underpin popular generative AI applications.
There is no doubt that many desirable and useful applications of AI promote economic growth, efficiency, effectiveness, research, and other areas of society, and will continue to do so. Public medical research, quality assurance in manufacturing, environmental protection, and numerous other examples show AI’s potential for improvements that serve the common good. However, it is not these examples that dominate public discourse and policy debates, but rather the credo “AI first, concerns second – or not at all”.
Above all, AI promises many of the things that capitalist systems equate with growth: cost reduction, increased effectiveness and efficiency. But that’s not all: AI promises to make the world more humane, fairer and happier. In this way, AI supplies its own justification: if supposed overregulation or bureaucratic structures are to be dismantled, AI itself promises the means to do so.
However, a brief look behind the scenes shows that the success story of many popular AI developments is not merely a product of technical advances in GPU capacity or the ingenuity of individual tech entrepreneurs, but rests on problematic ideology, structural violations of the law, and the exploitation of human labour from the Global South. Nevertheless, the dominant narrative portrays AI as a purely technical, salutary development, dismissing some of its deeply problematic socio-technical implications.
Furthermore, the nebulous goal of “innovation” trumps everything else because it is presented as an all-encompassing good. Within the AI power landscape, this framing is also more readily accepted because individual harms are difficult to make visible, hard to prove, and even harder to prevent. The list of potential harms is endless, yet public discussion reduces them to abstract concerns. The absence of regulatory requirements creates a self-perpetuating cycle: where no legal norms addressing AI-based discrimination exist, no court decisions can establish accountability.
This logic reaches its extreme in the pursuit of artificial general intelligence (AGI), widely regarded as AI’s ultimate goal. AGI would render control through legal regulation or engineering standards unfeasible, thereby constituting a dangerous objective.
Is regulation hindering innovation?
These supposedly innovation-driven discursive frames aim to shape regulatory endeavours decisively – and with success. The current Commission has not only distanced itself from the European Green Deal, following calls from the Member States, but has also decided to abandon the AI Liability Directive (AILD) and other legislative proposals. These include the proposal for a directive on implementing the principle of equal treatment between persons irrespective of religion or belief, disability, age or sexual orientation, as well as a regulation on public access to European Parliament, Council and Commission documents, and the proposed Regulation on Privacy and Electronic Communications.
We have argued here that it is in Big Tech’s economic self-interest to actively influence such regulation, particularly when acting as first movers – an approach that allows large industry players in particular to keep smaller competitors at bay. For a while, this strategy included advocating for regulation or even calling for a moratorium on AI development. Google argues that “Hampering Google’s AI tools risks holding back American innovation at a critical moment”, while the influential venture capital firm a16z has gone so far as to call for a ‘right to learn’ for machine learning models, including the abolition of copyright law (“Copyright law should not be co-opted to imply that machines should be prevented from using data—the foundation of AI—to learn in the same way as people.”). These and numerous other examples reflect legitimate private and corporate interests. The problem, however, lies in their largely uncritical acceptance owing to the perceived superior technical expertise of private-industry actors. For instance, last-minute amendments to the AI Act introduced far-reaching exceptions for general-purpose AI models in order to avoid stifling innovation. Yet for the large models, the compliance costs at stake amount to no more than 1.34% of total development costs. All in all, the antagonistic juxtaposition of regulation and innovation is an oversimplification and, frankly, just the wrong question.
Moreover, the focus on an “AI race” between China, the USA, and Europe feeds the oversimplified innovation narrative. It misleadingly suggests that there is a goal that one party can reach first according to objectively measurable criteria (such as speed, in keeping with the race metaphor). This framing also massively narrows the scope of assessment, focusing on the performance of certain machine-learning systems that require considerable amounts of data and computing power for predictions or generative outputs. Socio-technical considerations, sustainability, legal compliance, and alignment with social values, by contrast, are not so easy to quantify and therefore play no significant role in the race narrative. In addition, a European approach should resist direct comparisons with a communist dictatorship or with a country such as the USA in its current political climate. Only sustainable technical development will be accepted in the long term, and an orientation towards fundamental rights, a free but regulated economic order, and democratic processes is a good indicator of it. A complete lack of regulation is no guarantee of innovation, especially if the basic principles of the rule of law are eroded; rather, it can harm economic growth. Anu Bradford has convincingly argued that the dominance of American technology companies is shaped not by the absence of regulation but by factors such as the fragmentation of the EU market, capital market dynamics, bankruptcy regulations, cultural risk aversion, and the lack of a proactive immigration policy.
Is the Commission following the US agenda?
The Commission announced its final work programme for 2025 on 11 February, listing several legislative acts slated for withdrawal, including the AILD. This decision, coming on the heels of JD Vance’s anti-regulation rant at the AI Action Summit, is troubling. The withdrawal of the AI Liability Directive in particular is deeply concerning, for several reasons.
First, the aim of the AILD was to establish legal protection for individuals who suffer AI-related harm. In contrast to the AI Act, which focuses on market-entry requirements for AI systems, the AILD contained secondary liability provisions designed to harmonise ex-post rules for cases of proven damage.
Second, the Commission is thereby squandering considerable potential to realise its politically desired counter-model: a future-oriented digital single market. Without this directive, liability rules for AI-caused damage that does not fall under the Product Liability Directive will remain fragmented across the 27 Member States; there will be no harmonisation. It is not clear how this will reduce red tape.
Third, the decision undermines the European Parliament, which had already debated the directive.
Fourth, considerable regulatory gaps remain, as the (new) Product Liability Directive does not cover damage resulting from discrimination, violations of personality rights, or pure economic loss.
It sounds as if the Commission wants to jump on the deregulatory bandwagon, following the trend on the other side of the Atlantic. This is a step in the wrong direction. Even if there is legitimate criticism of EU digital legislation, including concerns over bureaucratic inefficiencies and significant regulatory hurdles, especially for SMEs, the regulation of digital technologies negotiated in democratic discourse is the only way to maintain democratic, constitutional, and fundamental-rights guarantees. Despite justified criticism, the AI Act is a step in the right direction: it is important that the AI Act prohibits certain practices, such as the scraping of facial images from the internet (Article 5(1)(e)), emotion recognition systems in the workplace (Article 5(1)(f)), and systems for scoring people’s social behaviour (Article 5(1)(c)); it is important that established rules prescribe certain quality parameters for high-risk systems; and it is equally important that we continue to discuss these issues and existing regulatory loopholes.
AI governance is a matter of the rule of law, as AI is inherently about power; regulating AI therefore means regulating power. The power dimensions of AI are manifold: the centralisation of infrastructure, AI models, and appropriated data in the hands of a few Big Tech players; the “black boxing” of AI systems, where people can neither understand nor explain the process leading to a decision that affects them; and the individual and systemic harms that AI systems can cause without providers taking responsibility, especially through the ability to extract and use large amounts of data for predictions. Reducing bureaucratic complexity is one thing. A greater priority, however, lies in clarifying the relationships between the different legal acts, navigating and strengthening rule-of-law-centred enforcement, and supporting civil society in fostering critical discussions about AI and the regulation it requires. Of course, secondary liability rules such as the AILD do not by themselves solve the fundamental social problems of technology development, inequality, and power. However, they help to establish responsibility and give citizens a tool that must, of course, also be effectively enforceable.
Conclusion and outlook: Bureaucracy will save us?!
The Commission has now launched a €200 billion investment in AI via the InvestAI initiative. At first glance, this sounds like a welcome decision to strengthen the ‘European approach’. The decisive factor, however, will be which AI is promoted; the press release mainly referred to “AI gigafactories” across the EU. Citizens will only benefit from these lofty promises if, among other things, AI-based damage is also addressed and compensated.
Policymakers should not make the mistake of following the US’s lead, which would amount to little more than “sanewashing”. A rule-of-law-based and human-centred ‘European approach’ has much to offer. In the end, this approach may even constitute the ‘better business model’ in terms of American market logic.
Ultimately, the much-maligned bureaucracy should be viewed from a different angle: it is not only a hindrance but also a safety mechanism. After all, bureaucracy is a system of order and an impersonal structure of positions. It establishes responsibilities and procedures – aspects that seem more important than ever in today’s world.