Alessandra Silveira (Editor of this official blog, Academic Coordinator of the Jean Monnet Centre of Excellence “Digital Citizenship & Technological Sustainability” - CitDig, Erasmus+)
There is no doubt that European Union (EU) law is committed to a certain rebalancing of powers in the digital ecosystem. And why is that? Because today there is a clear imbalance of power in favour of digital service providers, which requires a strengthening of the position of users in their relationship with providers. The Internet has become a space made up of platforms, where unilaterally established and non-transparent business models are developed. This attempt to rebalance power in the digital ecosystem is an exercise in social justice that only the EU can foster. And this trend is particularly noticeable in the field of personal data protection.
The emergence of a business model based on data – and of profiling based on inferred data – reveals the imbalance of power between users and platforms. This has led some authors to recognise the quasi-public powers exercised by technology companies on the Internet: they regulate, enforce and resolve conflicts of interest, acting in an uncontrolled manner that we would not tolerate even from public authorities under the rule of law. But the problem must be contextualised: what is personal data?
In EU law, the term “personal data” means any information relating to a natural person that allows them to be identified, even if this does not occur directly, but by association of concepts, characteristics or content. In other words, it is information that, because it is associated with a person, enables them to be identified – it is not necessarily private or intimate information, it suffices that it is personal. And this personal data is protected when it is subject to any operation or processing – collection, recording, organisation, structuring, storage, adaptation or alteration, restriction, erasure, destruction, etc. – regardless of the technology used.[1]
When the General Data Protection Regulation (GDPR) began to be drawn up, the data subject’s consent was the crux of the personal data issue. But soon it became clear that the data subject needed to be protected, even if they had given their consent for their data to be processed. And the big problem today is no longer the data provided by the data subject with their consent, but the data inferred from the Internet user’s digital footprint – and of which the data subject is not even aware. That is why the European Data Protection Board (EDPB) will soon be issuing guidelines for companies on extracting large data sets from the Internet and using personal data in Artificial Intelligence (AI) models.[2]
As we have highlighted in this blog,[3] profiling is often used to make predictions about individuals. It involves collecting information about a person and assessing their characteristics or patterns of behaviour in order to place them in a particular category or group and to draw inferences or predictions from that placement – about their ability to perform a task, their interests, their presumed behaviour, and so on. To this extent, such automated inferences demand protection as inferred personal data, since they too make it possible to identify someone by association of concepts, characteristics or contents. The crux of the matter is that people are increasingly losing control over such automated inferences and over how they are perceived and evaluated by others.
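By way of illustration only, the following minimal sketch (in Python, with entirely hypothetical page names, categories and mappings) shows the mechanics just described: an interest category is inferred from behavioural signals alone, without the data subject ever having provided the information in question.

```python
# Minimal, illustrative sketch of how an interest category can be *inferred*
# from behavioural signals alone. All page names, categories and mappings are
# hypothetical and serve only to show the mechanics of profiling.

from collections import Counter

# Hypothetical mapping from observed signals (pages visited) to interest
# categories that an advertiser might target.
SIGNAL_TO_CATEGORY = {
    "politician_x_fan_page": "politics/party_x",
    "election_news_article": "politics/party_x",
    "hiking_gear_shop": "outdoor_sports",
    "trail_map_app": "outdoor_sports",
}

def infer_interest_profile(visited_pages: list[str]) -> list[tuple[str, int]]:
    """Count how often each category is triggered by the browsing history and
    return the categories ranked by frequency - the 'inferred' profile."""
    counts = Counter(
        SIGNAL_TO_CATEGORY[page]
        for page in visited_pages
        if page in SIGNAL_TO_CATEGORY
    )
    return counts.most_common()

if __name__ == "__main__":
    # The user never stated any political preference; the category is derived
    # purely from pages they loaded - this is what makes the data 'inferred'.
    history = ["politician_x_fan_page", "election_news_article", "trail_map_app"]
    print(infer_interest_profile(history))  # [('politics/party_x', 2), ('outdoor_sports', 1)]
```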
Thus, the most relevant question on the matter is the following: what effective rights and guarantees do individuals have to control how they are evaluated by others – and, where appropriate, to challenge the operation that produces automated inferences whose justification does not appear reasonable?[4]
The Court of Justice of the European Union (CJEU) has been called upon by the courts of the Member States, via preliminary rulings (Article 267 TFEU), to assess the existence of legal solutions to challenge operations that result in automated inferences. In other words, the CJEU has been called upon to determine whether the GDPR adequately protects inferred data in the light of the fundamental right to the protection of personal data laid down in Article 8 of the Charter of Fundamental Rights of the European Union (CFREU), at the risk of undermining the data subject’s protection against instrumentalisation – ultimately, human dignity itself. Two recent cases are worth highlighting.
I
The first of these is Maximillian Schrems v. Meta Platforms (C-446/21), judgment of 4 October 2024. The facts in the main case are easily explained, but they require a preliminary technical contextualisation. The business model of the online social network Facebook is based on financing through online advertising, which is tailored to the individual users of the social network according, inter alia, to their consumer attitudes, interests and personal situation. That advertising is made possible in technical terms by the automated production of detailed profiles in respect of the network users and the users of the online services offered at the level of the Meta group. To that end, Meta Platforms collects data from users and their devices relating to user activity, both on and off the social network, and cross-references those data with the Facebook accounts of the users concerned.
The data relating to activities outside the social network originate, first, from visits to third-party webpages and apps, which are linked to Facebook through programming interfaces and, second, from the use of other online services belonging to the Meta group, including Instagram and WhatsApp. Facebook’s social plug-ins are embedded by third-party website operators into their pages. The most widely used is Facebook’s “like” button. But it is not necessary for the user to have clicked on the “like” button: merely loading a page containing such a plug-in is sufficient for the user’s data to be transmitted to Meta Platforms.
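To visualise why merely loading the page suffices, the following minimal sketch (in Python, with hypothetical domain names, cookie values and field names) simulates what the platform can observe from a single automatic plug-in fetch: the address of the page being read and a cookie identifying the user, which together allow the visit to be linked to a specific account.

```python
# Minimal sketch of the automatic request triggered when a browser renders a
# third-party page containing an embedded plug-in. All values are hypothetical
# and serve illustration only.

from dataclasses import dataclass

@dataclass
class PluginRequest:
    """What the platform can observe from a single automatic plug-in fetch."""
    platform_cookie: str  # identifies a logged-in (or previously seen) user
    referer: str          # the third-party page the user is currently reading

def simulate_page_load(user_cookie: str, page_url: str) -> PluginRequest:
    # No click is needed: rendering the page triggers this request.
    return PluginRequest(platform_cookie=user_cookie, referer=page_url)

if __name__ == "__main__":
    req = simulate_page_load(
        user_cookie="c_user=12345",
        page_url="https://example-party.org/campaign",
    )
    # The platform can now link user 12345 to a visit to a political website,
    # even though the user never clicked the "like" button.
    print(req)
```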
It is apparent from the order for reference that plug-ins are also found on the websites of political parties and the websites targeted at homosexual users visited by Mr. Schrems. Using those plug-ins, Meta Platforms has been able to follow Mr. Schrems’ Internet behaviour, which triggered the collection of certain sensitive personal data. For example, Meta Platforms processes data relating to Mr. Schrems’ political beliefs and sexual orientation. Thus, Mr. Schrems received advertising concerning an Austrian politician, which was based on the analysis done by Meta Platforms, indicating that he had points in common with other users who had liked that politician. Mr. Schrems also regularly received advertising targeting homosexual persons and invitations to related events, although he had never previously shown any interest in those events and did not know where they were to be held. That advertising and those invitations were not based directly on the sexual orientation of the applicant in the main proceedings and his friends, but rather on an analysis of their interests, in this case on the fact that friends of Mr. Schrems liked a product.
Before the national court, Mr. Schrems requested that Meta Platforms be ordered to cease processing his personal data for the purpose of personalised advertising and to cease using those of his data, obtained through third parties, which derive from his visits to third-party websites. In this context, the referring court essentially asks the CJEU i) whether Article 5(1)(c) GDPR must be interpreted as meaning that the principle of data minimisation provided for therein precludes any personal data obtained by a controller, such as the operator of an online social network platform, from the data subject or third parties and collected either on or outside that platform, from being aggregated, analysed and processed for the purposes of targeted advertising without restriction as to time and without distinction as to type of data; and ii) whether Article 9(2)(e) GDPR must be interpreted as meaning that the fact that a person has made a statement about his or her sexual orientation on the occasion of a panel discussion authorises the operator of an online social network platform to process other data relating to that person’s sexual orientation, obtained, as the case may be, outside that platform using partner third-party websites and apps, with a view to aggregating and analysing those data, in order to offer that person personalised advertising.
The CJEU noted that, in the light of the principle of data minimisation provided for in Article 5(1)(c) GDPR, the controller may not engage in the collection of personal data in a generalised and indiscriminate manner and must refrain from collecting data which are not strictly necessary having regard to the purpose of the processing. The Court also noted that Article 25(2) GDPR requires the controller to implement the appropriate measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed. According to the wording of that provision, that requirement applies, inter alia, to the amount of personal data collected, the extent of their processing and the period of their storage.
The Court considered that Meta Platforms collects the personal data of Facebook users, including Mr. Schrems, concerning those users’ activities both on and outside that social network, including in particular data relating to visits to online platforms and to third-party websites and apps, and also follows users’ navigation patterns on those sites through the use of social plug-ins and pixels embedded in the relevant websites. Such processing is particularly extensive since it relates to potentially unlimited data and has a significant impact on the user, a large part – if not almost all – of whose online activities are monitored, which may give rise to the feeling of continuous surveillance and does not appear to be reasonably justified in the light of the objective consisting in enabling the dissemination of targeted advertising. In any event, the indiscriminate use of all the personal data held by a social network platform for advertising purposes, irrespective of the level of sensitivity of the data, does not appear to be a proportionate interference with the rights guaranteed by the GDPR to users of that platform.
Moreover, even though the consequence of the fact that the data subject has manifestly made public data concerning his or her sexual orientation is that those data may be processed, by way of derogation from the prohibition laid down in Article 9(1) GDPR, that fact alone does not, contrary to the contentions of Meta Platforms, authorise the processing of other personal data relating to that data subject’s sexual orientation. It would be contrary to the restrictive interpretation that must be given to Article 9(2)(e) GDPR to find that all data relating to the sexual orientation of a person fall outside the scope of protection under Article 9(1) thereof solely because the data subject has manifestly made public personal data relating to his or her sexual orientation. Moreover, the fact that a person has manifestly made public information concerning his or her sexual orientation does not mean that that person has given his or her consent within the meaning of Article 9(2)(a) GDPR to processing of other data relating to his or her sexual orientation by the operator of an online social network platform.[5]
Thus, the CJEU decided that Article 5(1)(c) GDPR must be interpreted as meaning that the principle of data minimisation provided for therein precludes any personal data obtained by a controller, such as the operator of an online social network platform, from the data subject or third parties and collected either on or outside that platform, from being aggregated, analysed and processed for the purposes of targeted advertising without restriction as to time and without distinction as to type of data. Moreover, Article 9(2)(e) GDPR must be interpreted as meaning that the fact that a person has made a statement about his or her sexual orientation on the occasion of a panel discussion open to the public does not authorise the operator of an online social network platform to process other data relating to that person’s sexual orientation, obtained, as the case may be, outside that platform using partner third-party websites and apps, with a view to aggregating and analysing those data, in order to offer that person personalised advertising.
This is certainly a first step towards limiting the exploitation and monetisation of data inferred from the Internet user’s digital footprint – but the CJEU certainly knows that this is a David-versus-Goliath battle whose outcome remains open. It is certainly relevant to try to prevent the widespread and indiscriminate collection of personal data which are not strictly necessary having regard to the purpose of the processing. But the problem lies precisely in the undisclosed purpose of the exploitation of personal data. For some time now, the great digital platforms have stopped being interested in the targeting of adverts: they are instead seeking to know who we are and to build proximity with us. And why is that? Because the easiest way to get us to shift our position is to create a relationship of trust with us. And this can destroy democracy – which is, first and foremost, the contest between individuals in dialogue for the winning argument.[6]
II
The Court will again have the opportunity to rule on Article 22 GDPR (on the prohibition of decisions based solely on automated processing, including profiling) in Case C-203/22, in which the Opinion of Advocate General Richard de la Tour was delivered on 12 September 2024. The facts of the dispute in the main proceedings are the following: CK was refused, by a mobile telephone operator, the conclusion or extension of a mobile telephone contract which would have required a monthly payment of EUR 10, on the ground that she did not have sufficient financial creditworthiness. CK’s allegedly insufficient creditworthiness was substantiated by an automated credit assessment carried out by Dun & Bradstreet Austria GmbH (“D & B”), an undertaking specialising in the provision of credit assessments. CK submitted a request to the Austrian data protection authority to obtain meaningful information about the logic involved in D & B’s automated decision-making. That authority granted that request – and D & B challenged the decision of the Austrian data protection authority requiring it to disclose the information requested by CK.
According to Advocate General Richard de la Tour, by its questions the referring court asks the CJEU, in essence, i) whether Article 15(1)(h) GDPR must be interpreted as meaning that “meaningful information about the logic involved” in automated decision-making includes information which is sufficiently complete to enable the data subject to verify the accuracy of that information and its consistency with the rating decision at issue, including the algorithm used for the purposes of that automated decision-making; and ii) whether and, if so, to what extent the protection of the rights and freedoms of others, such as the protection of a trade secret relied on by the controller, is capable of limiting the scope of the data subject’s right of access under that provision.[7] Thus, the CJEU now has the opportunity, in the D & B case, to elucidate i) to what extent trade secrecy can be invoked by the controller, as well as ii) what meaningful information about the logic behind an automated decision would have to be provided to the data subject.
The Advocate General rightly suggests that the controller must fulfil its obligation to provide the data subject with accessible and sufficiently complete information on the process that led to the automated decision in question and on the reasons for the outcome of that decision – in particular, by describing the method used, the criteria taken into account and their weighting. The data subject must therefore be able to understand what information was used in the automated decision-making and how it was taken into account and weighted.[8]
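By way of illustration, the following minimal sketch (in Python, with entirely hypothetical criteria, weights and threshold) shows the kind of information the Advocate General appears to have in mind: the method used, the criteria taken into account, their weighting, and the way in which they were combined to produce the decision at issue.

```python
# Minimal sketch of an explainable automated credit decision. The criteria,
# weights and threshold are hypothetical; a real scoring model would be far
# more complex, but the explanation described above concerns the same
# elements: method, criteria, weighting and outcome.

CRITERIA_WEIGHTS = {            # hypothetical weights of a simple additive model
    "payment_history_ok": 50,
    "stable_income": 30,
    "low_existing_debt": 20,
}
APPROVAL_THRESHOLD = 60         # hypothetical cut-off score

def score_and_explain(applicant: dict[str, bool]) -> dict:
    """Return the decision together with a per-criterion breakdown that a data
    subject could read, verify and, if necessary, contest."""
    contributions = {
        criterion: (weight if applicant.get(criterion, False) else 0)
        for criterion, weight in CRITERIA_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return {
        "method": "weighted sum of binary criteria",
        "criteria_and_weights": CRITERIA_WEIGHTS,
        "contributions": contributions,
        "score": total,
        "threshold": APPROVAL_THRESHOLD,
        "decision": "approved" if total >= APPROVAL_THRESHOLD else "refused",
    }

if __name__ == "__main__":
    applicant = {"payment_history_ok": True, "stable_income": False, "low_existing_debt": True}
    print(score_and_explain(applicant))  # score 70 >= 60 -> approved
```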
But the situation becomes more difficult when trade secrets come into play. The trade secret argument is often invoked by data controllers to shield themselves from information obligations – allegedly because the method of calculating the credit score is covered by trade secrecy. In other words, they claim that the algorithm used in profiling constitutes a trade secret within the meaning of Directive 2016/943 [on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure], thus refusing to communicate sufficient information about the underlying logic of the automated decision.
It is true that recital 63 GDPR states that the right of any data subject to access personal data which have been collected concerning him or her should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software. However, the WP29[9] warns that controllers cannot invoke the protection of their trade secrets as a pretext to deny access or refuse to provide information to the data subject.[10] Additionally, legal scholarship has argued that the high levels of precision of data mining and machine learning techniques have nothing to do with the software, because it is the raw data, and not the software, that drives the operation.[11]
However, Advocate General Richard de la Tour proposes that the concept of “meaningful information about the logic involved” should not extend to information of a technical nature, such as the details of the algorithm used, which a data subject is not in a position to understand without specific expertise. The controller is not required to disclose to the data subject information which, by reason of its technical nature, is so complex that it cannot be understood by persons who do not have particular technical expertise, which is such as to preclude disclosure of the algorithms used in automated decision-making. The right of access guaranteed by Article 15(1)(h) GDPR should not, in most cases, lead to an infringement of the trade secret on which the controller may legitimately rely.[12]
Why does Advocate General Richard de la Tour’s explanation give rise to questions? Firstly, because it apparently equates the rights of data subjects with the rights of data controllers as if they were on an equal footing – and clearly they are not – thus jeopardising the rebalancing of powers in the digital ecosystem to which the EU is committed. Given the lack of transparency of certain operations using machine learning technologies, for instance, how can the data subject be expected to ascertain the objectively verifiable coherence and causal link between, on the one hand, the method and criteria used and, on the other, the result obtained by the automated decision in question? The decision-making process is notoriously opaque, especially in terms of data collection and algorithm training, the selection of data for modelling and profiling, the effectiveness and margin of error of the algorithms, and so on. Without an explanation of how data are handled in these terms, individuals are unable to defend themselves, to assign responsibility for decisions that affect them, or to appeal against any decision that harms them. Now, none of this can be achieved by ruling out the communication of the algorithms used in the context of an automated decision – even if such communication were to take place only in the context of judicial proceedings, with due regard for the confidentiality of the information disclosed in court and for proportionality (see, for example, the solution established in Article 9 of the new Directive on liability for defective products).[13]
When one does not know why a result was reached, one cannot ascertain the changes needed to reach a different solution, nor is it possible to adequately and consistently challenge an unfavourable result. Thus, it is not enough to inform the data subject of the logic, importance and consequences of automated processing: given the fundamental principle of transparency that underlies data protection, it is also necessary to explain to the data subject how the profiling process or the automated decision works. Although the Articles of the GDPR do not expressly mention an obligation to explain, such an obligation follows from their systematic interpretation, especially taking into account recital 71 GDPR (the right to obtain an explanation of the decision reached after such assessment and to challenge the decision), Article 5(1)(a) GDPR (lawfulness, fairness and transparency towards the data subject) and Article 12(1) GDPR (the duty to provide the data subject with information in a concise, transparent, intelligible and easily accessible form).
In any case, Advocate General Richard de la Tour himself supports the existence of “a genuine right to an explanation as to the functioning of the mechanism involved in automated decision-making” of which a person was the subject and “of the result of that decision”, in point 67 of his Opinion, even quoting the Managing Editor of this blog, Tiago Sérgio Cabral, in one of the first texts published on the subject.[14] This is the interpretation of secondary law that is compatible with primary EU law, insofar as Article 8 of the CFREU recognises the data subject’s right of access to the data collected concerning them, precisely so that, by exercising this right, they can obtain their rectification. To rectify means “to set straight, align, correct, amend” – and, in a broader sense, “to respond to an assertion that is less than true in order to re-establish the truth of the facts.” When applied to automated inferences, this jus-fundamental dimension of rectification cannot be interpreted restrictively.
The key to inferred data lies in the reasonableness of the operation from which a given inference was drawn – ultimately, in the justification without which it is impossible to effectively challenge the result of the processing. And this may depend on reconsidering data mining and data exploration technologies so as to make them more intelligible to the data subject to whom the inferred data relate. To this extent, it is not enough to inform, it is necessary to explain; and it is not enough to explain, it is necessary to justify.
[1] Alessandra Silveira and Tiago Sérgio Cabral, “ARTICLE 8 – Protection of personal data”, in The Charter of Fundamental Rights of the European Union: A Commentary, ed. Alessandra Silveira, Larissa Araújo Coelho, Maria Inês Costa and Tiago Sérgio Cabral (Braga: Universidade do Minho. Escola de Direito, 2024), https://repositorium.sdum.uminho.pt/bitstream/1822/93188/1/The%20Charter%20of%20Fundamental%20Rights%20of%20the%20EU%20-%20A%20Commentary.pdf.
[2] Sam Clark and Pieter Haeck, “Europe’s privacy patrol is spoiling Big Tech’s AI party”, POLITICO, 9 October 2024, https://www.politico.eu/article/europe-privacy-patrol-vengeance-block-ai-artificial-intelligence/.
[3] See Alessandra Silveira, “Editorial of March 2024 – On inferred personal data and the difficulties of EU law in dealing with this matter”, The Official Blog of UNIO – Thinking and Debating Europe, 19 March 2024, https://officialblogofunio.com/2024/03/19/editorial-of-march-2024/; and “Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling)”, The Official Blog of UNIO – Thinking and Debating Europe, 10 April 2023, https://officialblogofunio.com/2023/04/10/finally-the-ecj-is-interpreting-article-22-gdpr-on-individual-decisions-based-solely-on-automated-processing-including-profiling/.
[4] See Alessandra Silveira, “Profiling and cybersecurity: a perspective from fundamental rights’ protection in the EU”, in Legal developments on cybersecurity and related fields, ed. Francisco Andrade, Joana Abreu, and Pedro Freitas (Cham/Switzerland: Springer International Publishing, 2024).
[5] See judgment of the CJEU of 4 October 2024, Maximillian Schrems v. Meta Platforms, C‑446/21, ECLI:EU:C:2024:834, paragraphs 80 to 82.
[6] On this theme see Yuval Noah Harari, Nexus – A brief history of information networks from the stone age to AI (Vintage Publishing, 2024).
[7] See Opinion of Advocate General Richard de la Tour in Case C-203/22, delivered on 12 September 2024, point 39.
[8] See Opinion of Advocate General Richard de la Tour, point 76.
[9] The Article 29 Working Party (WP29), the body that preceded the EDPB under Directive 95/46/EC.
[10] See WP29, Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, adopted on 3 October 2017, as last revised and adopted on 6 February 2018, https://ec.europa.eu/newsroom/article29/items/612053.
[11] See César Analide and Diogo Rebelo, “Inteligência artificial na era data-driven, a lógica fuzzy das aproximações soft computing e a proibição de sujeição a decisões tomadas exclusivamente com base na exploração e prospeção de dados pessoais”, Fórum de proteção de dados, Comissão Nacional de Proteção de Dados, no. 6, November 2019.
[12] See Opinion of Advocate General Richard de la Tour, point 80.
[13] As of writing, the new Directive on liability for defective products is still pending publication in the Official Journal of the EU. However, the text of the political agreement is available here: https://data.consilium.europa.eu/doc/document/PE-7-2024-INIT/en/pdf.
[14] See Tiago Sérgio Cabral, “AI and the Right to Explanation: Three Legal Bases under the GDPR”, in Data Protection and Privacy: Data Protection and Artificial Intelligence, ed. D. Hallinan, R. Leenes and P. De Hert (Oxford: Hart Publishing, 2021), 29-56.
Picture credits: by Nao Triponez on Pexels.com.