The European Union’s struggle for electoral integrity – Official Blog of UNIO

Beatriz Magalhães Sousa (master’s student in European Union Law at the School of Law of the University of Minho)

Modern democracies face highly sophisticated and subtle threats. Electoral interference by third countries, while a known practice, was thrown into the spotlight by the debacle of the Romanian elections – the Constitutional Court, doubting the integrity of the results (which gave victory to the far-right candidate Calin Georgescu), opted (ex officio)[1] to annul the election. This decision underlines not only the growing suspicion of Russia’s meddling in European politics, but also the dangers that digital technologies and the impoverishment of information pose to the electoral process – according to the Court, the use of Artificial Intelligence (AI), automated systems, and coordinated information campaigns played a significant part in contemporary elections.[2]

With the elections annulled, Romanian voters rushed to the polls (for the second time in six months) on May 4th, 2025, with the far-right-backed candidate – now George Simion, after Georgescu was barred from running a second time – winning the first round of the rerun.[3] In an attempt to avert the risks that plagued the previous election, Romania’s institutions launched a campaign to combat illegal online content (conducted by the Education Ministry in coordination with the National Audiovisual Council) and encouraged citizens to report any content constituting disinformation.[4] These efforts, while commendable, seem to have fallen short of the mark, with Simion’s win on May 18th being all but certain.

Russia’s interference is silent – its hybrid attacks range from the sabotage of infrastructure and espionage to cyber-attacks and disinformation campaigns. Countries like Britain and the Netherlands have voiced concern about the growing scale of the phenomenon.[5] Outside Europe, Canada’s intelligence agency warned, in connection with the general elections of April 28th, that India, China and Pakistan are also using AI tools to interfere in the democratic process, taking inspiration from Russia’s playbook.[6]

The use of disinformation as an instrument of policy has been part of Russia’s strategy since the days of the Cold War, but the rise of the digital world has given it a power, a reach and an all-encompassing nature that are difficult to counteract. The Doppelganger operation, first reported in 2022, is an excellent example of this new era of disinformation: it was a “multi-faceted online information operation” that relied on fake clones of legitimate media and government websites and on the creation of anti-Ukrainian and pro-Russian web pages, spread through fake profiles on social media platforms like Facebook and X.[7]

Another phenomenon that illustrates the new layers disinformation has gained over the last decade is the proliferation of deepfake[8] images and audio on social media and in media outlets. This type of technology, which initially raised concern because of its pornographic use (manipulated images of women in sexually explicit situations have been accumulating in recent years),[9] has recently taken on more political nuances, with the manipulation of the image of political figures gaining serious momentum: in 2022, a video of Ukrainian president Volodymyr Zelenskyy was planted on social media and in media outlets, implying that he was encouraging his compatriots to surrender to Russia;[10] in 2023, an audio recording of Slovak candidate Michal Šimečka discussing electoral fraud with a journalist may have cost him victory in the parliamentary elections;[11] and in 2024, the US presidential elections were polluted by an audio file that sounded like then-president Joe Biden encouraging voters to save their votes for November by abstaining from the primary elections.[12]

With all these facts in mind, and after the biggest election year in history, with citizens flocking to the polls in more than 70 countries,[13] suffice it to say that public actors are more informed than ever, yet they are still struggling to combat the powerful, well-oiled propaganda and disinformation machines that other major political forces continue to build with the aim of undermining and bringing down entire structures based on democracy. Combined with the fact that it is almost impossible to link the content to these political actors, and that it is difficult to stop the spread of information in time, online disinformation becomes one of the greatest enemies of democracies.

The European Commission, in its 2018 Communication “Tackling online disinformation: a European approach”, defines disinformation as “verifiably false or misleading information that is created, presented and disseminated for economic gain or to intentionally deceive the public, and may cause public harm”. Its impact is difficult to measure, but looking at the current panorama, it is almost impossible to discount it. The reality is that, nowadays, people access information mainly through social media, taking at face value almost everything they come across, unknowingly buying into lies and acting as fuel for a digital fire that threatens to burn down the very foundations of informed public debate and democratic participation.

The question that arises when it comes to disinformation is: what can the European Union do to fight it? This question is more difficult to answer than it might seem. The problem lies in walking the tightrope between combating disinformation and protecting the fundamental right to freedom of expression and information (Article 11 of the Charter of Fundamental Rights of the EU (CFREU)) (in the case of deepfakes, we can even speak of the freedom of the arts – Article 13 CFREU – which flows from the latter). There is no doubt that freedom of expression is a pillar of any solid democratic framework and, as such, must be protected – recognised in most constitutions, it is a freedom that benefits from multilevel protection[14] – but public actors, while protecting society from erroneous information, must not interfere disproportionately.

This tension, like most questions of fundamental rights, is multifaceted and needs to be analysed in depth, but it can essentially be summed up as follows: if freedom of expression rests on the possibility of forming opinions and sharing information and ideas, it can be argued that lies and falsehoods may also be protected by it. The CFREU in no way requires that the information transmitted be categorically true. However, it is important to note that Article 10 of the European Convention on Human Rights (ECHR) acts as a limiting criterion for Article 11 CFREU. Thus, if we consider that a particular piece of content – be it fake news or the use of AI, in the form of deepfake images or audio, for example – jeopardises, among other things, the interests of national security or public integrity – in other words, if it jeopardises the democratic values on which our system is based – then it can no longer be protected by freedom of expression.

Obviously, this limitation is subject to certain requirements: (i) the restriction must be prescribed by law; (ii) it must pursue a legitimate aim; and (iii) it must be proportionate (Handyside v. United Kingdom). If we take the example of deepfakes, this means that images with an exclusively parodic intent will most likely be protected by freedom of expression and freedom of the arts. The same can be said of deepfakes that aim to teach or warn: videos such as the one dubbed by Jordan Peele portraying Barack Obama, for example, were created to draw the public’s attention to the danger of this technology.[15] On the other hand, images, videos and audio manipulated for discriminatory, slanderous or violent purposes require a different reaction and approach.

When the conversation turns to AI as a tool for disinformation, the AI Act must be borne in mind. As well as addressing the problems already mentioned – the state of democracy and the threat that technology may pose – the regulation is mindful of the necessary balance between fundamental rights and tries not to slow down innovation. It creates a spectrum that categorises AI systems according to the danger they pose: (i) unacceptable risk; (ii) high risk; (iii) limited risk; and (iv) minimal risk. Deepfakes, for example, are generally classified as limited-risk AI, a classification linked to the risk that the lack of transparency in their use can entail. For this reason, Article 50(4) (in conjunction with Recital 134), the only Article other than Article 3 that directly mentions deepfakes, creates an obligation of transparency – anyone who uses deepfake technology must clearly mark it as such, making its artificiality known to anyone who comes into contact with that type of content. It is important to note that the AI Act makes clear that the transparency obligation created by Article 50 is in no way an attempt to interfere with freedom of expression and information or the freedom of the arts and sciences, which remain protected as long as the rights and liberties of third parties are not jeopardised.[16]

This Regulation, while a great step in the right direction, barely scratches the surface of the problem, not only for deepfakes but also for other systems: the risk spectrum is applied to the system and not to the content. Taking deepfake-generating systems as an example, the same system can create content with none of the impact discussed here while also producing material that jeopardises democracy.

If a political deepfake has the potential to manipulate the decisions of the electorate and, consequently, the results, it may fall within the scope of Annex III (paragraph 8(b) covers “AI systems intended to be used for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda”). However, because this manipulation concerns not the system per se but how it is used, there is room for the system to be defended, and further clarification on how the criterion is applied may therefore be needed.

The changing forms of disinformation, its use as a tool of state manoeuvre by political forces, and the sophistication of AI technology, combined with the volatile nature of information in the digital environment, call on the EU to find new and original ways to guarantee the truthfulness of public debate while protecting freedom of expression, and to create a stronger framework that clarifies which AI systems can be classified as high risk – especially when they pose a direct threat to the democratic process. It is pressing, in the meantime, to focus on an educational response rather than a merely legal one: the population must be able to protect itself from false information, and this is only possible by promoting digital literacy – an educated and well-informed society cannot be made a puppet in the hands of those who seek to manipulate, undermine and destroy its democratic foundations.


[1] The Romanian Constitutional Court had initially validated the results of the elections. In reopening the case, it acted ex officio – a decision that, although not common practice, is grounded in its constitutional authority under Article 146(f) of the Romanian Constitution. This was prompted by the declassification of intelligence reports “outlining concerns about cyber activities by state and non-state actors, the use of digital technologies, and information campaigns that may have undermined the election’s integrity”. See International Foundation for Electoral Systems (IFES), “The Romanian 2024 election annulment: addressing emerging threats to electoral integrity”, 20 December 2024, available at: https://www.ifes.org/publications/romanian-2024-election-annulment-addressing-emerging-threats-electoral-integrity.

[2] International Foundation for Electoral Systems (IFES), “The Romanian 2024 election annulment: addressing emerging threats to electoral integrity”.

[3] See Reuters, “Romanian hard-right leader George Simion wins first round of election rerun”, 5 May 2025, available at: https://www.reuters.com/world/europe/romanians-vote-presidential-test-trump-style-nationalism-2025-05-03/.

[4] See Romania-Insider.com, “Romania’s education ministry announces steps to combat pseudoscience, manipulation”, 12 March 2025, available at: https://www.romania-insider.com/ed-min-ro-pseudoscience-measures-mar-2025.

[5] See Reuters, “Russia is ramping up hybrid attacks against Europe, Dutch intelligence says”, 22 April 2025, available at: https://www.reuters.com/world/europe/russia-is-upping-hybrid-attacks-against-europe-dutch-intelligence-says-2025-04-22/.

[6] See Aljazeera, “Canada warns of election threats from China, Russia, India and Pakistan”, 25 March 2025, available at: https://www.aljazeera.com/news/2025/3/25/canada-warns-of-election-threats-from-china-russia-india-and-pakistan.

[7] See EU DisinfoLab, “What is the Doppelganger Operation? List of Resources”, last updated on 9 April 2025, available at: https://www.disinfo.eu/doppelganger-operation/.

[8] The term “deepfake” has recently been clarified in the Artificial Intelligence Act (Regulation (EU) 2024/1689) (henceforth, the AI Act) as “AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful” [Article 3(60)].

[9] See Shanti Das, “Would love to see her faked: the dark world of sexual deepfakes – and the women fighting back”, The Observer, 12 January 2025, available at: https://www.theguardian.com/technology/2025/jan/12/would-love-to-see-her-faked-the-dark-world-of-sexual-deepfakes-and-the-women-fighting-back.

[10] See NPR, “Deepfake video of Zelenskyy could be the ‘tip of iceberg’ in info war, experts warn”, 16 March 2022, available at: https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia.

[11] See Misinformation Review, “Beyond the deepfake hype: AI, democracy, and ‘the Slovak Case’”, Harvard Kennedy School, 22 August 2024, available at: https://misinforeview.hks.harvard.edu/article/beyond-the-deepfake-hype-ai-democracy-and-the-slovak-case/.

[12] See NPR, “How AI deepfakes polluted elections in 2024”, 21 December 2024, available at: https://www.npr.org/2024/12/21/nx-s1-5220301/deepfakes-memes-artificial-intelligence-elections.

[13] See UNDP, “A ‘Super Year’ for elections”, available at: https://www.undp.org/super-year-elections.

[14] See Vanessa Nunes Monteiro, “Duelo de titãs: liberdade de expressão vs. discurso de ódio (o tratamento pelo Tribunal Europeu dos Direitos Humanos)”, Revista Minerva Universitária, 31 October 2022, available at: https://www.revistaminerva.pt/duelo-de-titas-liberdade-de-expressao-vs-discurso-de-odio-o-tratamento-pelo-tribunal-europeu-dos-direitos-humanos/.

[15] See Aja Romano, “Jordan Peele’s simulated Obama PSA is a double-edged warning against fake news”, Vox, 27 January 2025, available at: https://www.vox.com/2018/4/18/17252410/jordan-peele-obama-deepfake-buzzfeed.

[16] See Recital 134 of the AI Act.


Picture credit: by Edmond Dantès on pexels.com.
