Adjudicating a discriminatory algorithm

Case note: Stichting Clara Wichmann v. Meta Platforms Ireland Ltd.

This blog was originally published in Dutch at Nederland Rechtsstaat.


On 18 February, the Dutch College for Human Rights (hereinafter: the College) rendered an important decision regarding the status of fundamental rights and platform regulation. The College specializes in fundamental rights cases. Its decisions are not binding, but they carry great weight and courts often follow them. The College’s decision on the regulation of the algorithm deployed by Facebook is therefore likely to be followed by a court.

The case concerns the right to non-discrimination on social media platforms. Social media platforms have been the subject of various fundamental rights cases, perhaps the most famous of which are those brought by Max Schrems, concerning the transfer of data by Facebook (now: Meta) to servers outside the EU. The case currently under discussion, however, goes a step further in human rights protection.

It is the first case in which a European human rights body has confirmed a violation of human rights by a social media platform’s algorithm. Specifically, the College considered that Facebook’s algorithm violated the principle of non-discrimination by displaying job vacancies predominantly along historically stereotypical gender lines. This decision is interesting for various reasons, as it takes a new step in human rights protection in the relationship between user and platform and adds to the protection of fundamental rights under the EU’s digital agenda. The protection furthermore comes from an unexpected angle: rather than basing its decision upon ‘new’ legislation such as the DSA or the AI Act, the College uses the national implementation of Directive 2000/78 on equal treatment in employment and occupation. The decision, however, also shows the difficulty of adjudicating complex algorithms, as it fails to engage with how such algorithms function.


Facts of the case

The case concerned the advertisements shown by Facebook, part of the Meta Ireland group. Two associations, Global Witness and the Clara Wichmann association, complained about gender discrimination. According to research conducted by these associations, the algorithm Facebook deploys to determine which job advertisements are shown to which users discriminated between men and women. Vacancies for a secretary were predominantly shown to female users (86% in 2022 and 97% in 2023). The vacancy for a mechanic, however, reached a predominantly male audience (96% in both 2022 and 2023). The least differentiating result was the vacancy for a primary school educator (93% female users in 2022 and 85% in 2023). The plaintiffs argued that this perpetuated stereotypes and resulted in discrimination (para. 7.5), and therefore considered it a violation of equal treatment law (Directive 2000/78).


Differential treatment

The College finds that the research conducted by the plaintiffs indicates that Facebook strongly differentiates on the basis of gender. According to the Directive, differential treatment based upon gender is only allowed in very limited circumstances. An example is sports competition, where participants can be divided by gender to ensure equal competition. Alternatively, a company may introduce a neutral policy to achieve a legitimate aim. If that policy nevertheless causes discriminatory effects, it may only be maintained when the objectives are legitimate and the measure is proportionate.

Global Witness had created nine job vacancies. The only targeting criterion formulated by Global Witness was that the viewer be an adult living in the Netherlands. The results strongly indicated a gender bias: for one vacancy, for example, the advertisement audience chosen by Facebook’s algorithm consisted of 97% women. The defendant did not share its algorithm with the College (para. 7.6). Meta, however, argued that gender was not always part of the algorithm’s decision-making; rather, the decision to show an advertisement was generally based upon the clicking behavior of the user. The College considered that it could not be determined that gender was always a criterion in the algorithm’s decisions. It did consider, however, that the clicking behavior of users may lead to a one-sided result, and that this therefore constituted indirect, rather than direct, differentiation (para. 7.9). In this argument, the College seems to lack an understanding of how (complex) algorithms function. An algorithm is highly unlikely to (pseudo)randomly apply criteria. Rather, it applies all criteria to every decision, but attaches a different weight to each criterion. In the case of Meta, it seems that the algorithm attaches a strong weight to gender, which is only in exceptional cases ‘overruled’ by the other criteria generated through clicking behavior, entered degree, etc. A simplified sketch of such weighted decision-making follows below.
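The sketch below is a deliberately minimal illustration of this weighting logic. The feature names and weight values are invented for the example; they are not Meta’s actual criteria or parameters, which remain undisclosed.

```python
# Hypothetical weighted-sum targeting score. Every criterion enters
# every decision; what varies is the weight attached to it.
# All feature names and weights are invented for illustration.

FEATURE_WEIGHTS = {
    "gender_female": 2.5,          # heavily weighted protected characteristic
    "clicked_office_ads": 0.8,     # clicking behavior
    "degree_administration": 0.6,  # entered degree
}

def targeting_score(features: dict[str, float]) -> float:
    """Weighted sum over all criteria; the ad is shown above some threshold."""
    return sum(FEATURE_WEIGHTS[name] * value for name, value in features.items())

# A female user with little relevant clicking behavior still outscores a
# male user with strong relevant behavior: the gender weight dominates
# and is only 'overruled' in exceptional cases.
female_user = {"gender_female": 1.0, "clicked_office_ads": 0.2, "degree_administration": 0.0}
male_user = {"gender_female": 0.0, "clicked_office_ads": 1.0, "degree_administration": 1.0}

print(targeting_score(female_user))  # 2.66
print(targeting_score(male_user))    # 1.4
```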

The College agrees with Meta’s defense that the criteria used are dynamic (para. 7.7). At this point, however, the College seems to contradict itself. Elsewhere in its decision, the College states that if the algorithm remains undisclosed and no evidence rebuts the prima facie indication, the risk of non-disclosure falls on Meta. Yet here, it appears to accept Meta’s claim that gender was not always a factor. Theoretically, it can be argued that because the advertisements were also shown to the opposite gender, the algorithm does not take gender into consideration in every decision. It is extremely unlikely that this is the case, as algorithms do not randomly apply and ignore criteria.

Whilst Meta does not share the exact code of Facebook’s algorithm, it likely uses a kind of neural network. These types of algorithms can recognize patterns and process complex information. The processing occurs through various neurons, which are connected to each other in different layers. The result depends on the input and on the relationships between the different neurons, as the simplified sketch below illustrates.
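As with the previous sketch, the architecture and weights below are invented for illustration and do not represent Meta’s actual model; the point is only to show how a strongly weighted hidden neuron can dominate the output.

```python
import math

def sigmoid(x: float) -> float:
    return 1 / (1 + math.exp(-x))

# Toy feed-forward network: three input features, one hidden layer of
# three neurons, one output neuron scoring "show this advertisement".
# HIDDEN_WEIGHTS[i][j] is the weight from input feature j to hidden neuron i.
HIDDEN_WEIGHTS = [
    [0.9, 0.1, 0.0],  # neuron 0: mostly tracks clicking behavior
    [0.1, 1.8, 1.2],  # neuron 1 (the 'middle neuron'): tracks shopping history
    [0.0, 0.2, 0.7],  # neuron 2: a mixed signal
]
OUTPUT_WEIGHTS = [0.3, 2.0, 0.4]  # the middle neuron dominates the output

def forward(features: list[float]) -> float:
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in HIDDEN_WEIGHTS]
    return sigmoid(sum(w * h for w, h in zip(OUTPUT_WEIGHTS, hidden)))

# features: [clicked_office_ads, bought_hygiene_products, bought_ladies_shoes]
print(forward([0.0, 1.0, 1.0]))  # ~0.91: high score, driven by the middle neuron
print(forward([1.0, 0.0, 0.0]))  # ~0.81: lower, despite relevant clicking behavior
```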

As the sketch demonstrates, some features may carry a stronger weight, leading to the output (the advertisement) being shown to a female user. In the sketch above, the middle neuron has a strong relationship with the output. The question is therefore which criterion this middle neuron represents. It could literally be the criterion “gender”, which would make the differentiation direct. It could also be a more “neutrally” defined criterion, such as “shopping history”. Such a criterion can indeed be considered indirect differentiation, as it is prima facie neutral. However, this criterion may not be truly neutral either. Shopping history can be fairly neutral, such as groceries: potentially, people who buy milk are more interested in secretary positions and happen to be predominantly female. However, shopping history can also be defined as “those who have recently bought feminine hygiene products”. In a neural network, this criterion can be strongly linked with other criteria, such as “purchased ladies’ shoes”. In theory, both men and women can purchase these items; therefore, even a combination of such criteria does not constitute a direct reference to gender.

The combination of such criteria with high weights attached demonstrates a legal difficulty: the law is based on binaries. A measure is either directly differentiating in its intention or differentiates unintentionally through its effects. With regard to algorithms and big data, the line between direct and indirect differentiation is fading. Algorithms and social media segment users into groups to (potentially) optimize advertising. These segments may not directly include gender (or any other protected characteristic) but can statistically exclude everyone else. Concretely, the segment of persons who purchased feminine hygiene products and ladies’ clothing does not reference gender, yet it is unlikely that the male representation in this group is statistically significant. This raises the question: at what point are the neutral characteristics so specific, and weighted so strongly, that they constitute direct differentiation? The College does not discuss this question, but it becomes ever more urgent with regard to algorithmic decision-making. The back-of-the-envelope calculation below illustrates the point.
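The base rates in the following calculation are invented purely for illustration; they show how quickly the intersection of facially neutral purchase criteria produces a segment that is, statistically, almost entirely single-gender.

```python
# Hypothetical base rates (invented for illustration).
P_FEMALE = 0.5                      # share of women in the population
P_HYGIENE = {"F": 0.60, "M": 0.01}  # P(bought feminine hygiene products | gender)
P_SHOES = {"F": 0.40, "M": 0.02}    # P(bought ladies' shoes | gender)

# Assuming, for simplicity, that the two purchases are independent
# within each gender:
segment_f = P_FEMALE * P_HYGIENE["F"] * P_SHOES["F"]        # 0.1200
segment_m = (1 - P_FEMALE) * P_HYGIENE["M"] * P_SHOES["M"]  # 0.0001

share_female = segment_f / (segment_f + segment_m)
print(f"{share_female:.1%} of the segment is female")  # 99.9%
```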

The College’s approach, however, which assumes gender may have been a factor at times, is unlikely to reflect reality. All criteria are always part of the consideration; rather, their weights may differ depending on the input. Direct differentiation is only lawful in a handful of specifically listed situations. For example, when an actor is sought, the casting call can specify the required gender. The College, however, decided to treat the algorithm’s behavior as indirect differentiation, which can be lawful if the measure has a legitimate aim, is capable of achieving that aim, and is proportionate.


Lawfulness of indirect differentiation

Indirect differentiation is considered lawful when the measure has a legitimate aim, is capable of achieving that aim, and is proportionate. The College decides that there is a legitimate and non-discriminatory aim in this case: high revenue for advertisers (para. 7.11-7.12). Similarly, the College considers that the algorithm deployed is able to achieve the aim of improving advertisement effectiveness (para. 7.13). The College, however, does not ask for this effectiveness to be proven; it simply accepts that the algorithm is effective at optimizing advertisements. Whilst algorithms in general can be an effective means of efficient advertising, there is no guarantee. By analogy, the College assumes that because a company safety policy can be effective, all company safety policies are effective. In other cases, however, the College does investigate whether the specific measure in question is effective, rather than adopting a generalized approach. Determining the effectiveness of the algorithm, furthermore, does not require the actual algorithm: its effectiveness can be indicated by click-through and conversion rates, statistics showing what percentage of viewers clicked on the advertisement. The College could have asked for these, or other criteria, to indicate the effectiveness of the algorithm, thereby placing a stronger burden of proof on Meta to test the quality of its algorithms. The sketch below shows how simple such metrics are.
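A minimal sketch of the metrics just mentioned; the figures are invented, as the real numbers would have to come from Meta’s advertising logs.

```python
def click_through_rate(clicks: int, impressions: int) -> float:
    """Share of viewers shown the ad who clicked on it."""
    return clicks / impressions if impressions else 0.0

def conversion_rate(conversions: int, clicks: int) -> float:
    """Share of clicks that led to, e.g., a job application."""
    return conversions / clicks if clicks else 0.0

impressions, clicks, applications = 10_000, 250, 12  # hypothetical figures
print(f"CTR: {click_through_rate(clicks, impressions):.2%}")        # 2.50%
print(f"Conversion: {conversion_rate(applications, clicks):.2%}")   # 4.80%
```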

The College finalizes its decision by finding a lack of proportionality. It considers that Meta has not shown sufficient evidence of neutralizing unintended discriminatory side effects (para. 7.17). Because such measures have not been taken, the College considers the algorithm disproportionate, and due to this lack of proportionality the algorithm is considered to unlawfully differentiate between genders. The decisions of the College are not binding but generally respected; there is therefore no direct obligation for Meta based on this decision. The Clara Wichmann association is, however, considering going to court, and the decision by the College carries substantial weight in national proceedings.


General implications for the EU Digital Agenda

The decision by the College has some interesting implications for the EU’s digital agenda. The EU digital agenda identifies the risks that social media poses to democracy. In response, the Digital Services Act (DSA) aims to regulate online platforms, including social media platforms, and to promote and protect the fundamental rights included in the Charter. The difficulty is that the DSA refrains from placing high burdens on platform providers.

Providers have been given two duties concerning digital advertisements. The first is to provide transparency towards users as to why certain ads are shown to them. It is therefore interesting that the College chose not to demand more insight into why the algorithm selected such a gender-skewed audience. The DSA further prohibits showing ads based on profiling using special categories of personal data. Gender is not included as such a category and is therefore not prohibited by the DSA. Generally, the duties under the DSA are not considered that strict. The decision by the College, however, shows that the duties of service providers are not regulated by the DSA alone. If this decision is upheld by the courts (and potentially the CJEU), it may lead to a change in platform regulation, whereby the DSA focuses on accessibility rights and the prevention of unnecessary takedowns, whilst equality legislation provides the basis for judging the algorithms themselves.

This development is interesting for two reasons. First, it shows that social media can be held accountable through ‘old’ laws; new laws may not always be necessary in response to technological development. Second, it may lead to a split in the law, whereby the DSA primarily protects users’ freedom of speech through accessibility and protection against unnecessary takedowns, whilst other rights, such as non-discrimination, are protected through other laws. The decision, if upheld, furthermore provides the foundation for more fundamental rights protection in other areas. For example, it will be interesting to see whether Germany takes a similar approach with regard to the social media algorithm deployed by X, as it seems that X might have interfered with German elections on behalf of a limited group of parties. The DSA regulates political advertisements, but these are paid advertisements. The US presidential elections indicated that algorithmic manipulation by X can have a powerful impact. The exact development of such procedures, however, will take time.

Another interesting development from this case might be its impact on the AI Act. The AI Act is not applicable to the case against Meta: it was introduced in 2024, whilst the advertisements in question were posted in 2022 and 2023. The AI Act prohibits the deployment of subliminal AI techniques, but the definition of ‘subliminal’ is currently unclear. Cases such as this one may be used to determine when an algorithm can be considered subliminal. The College clearly identifies a harm: the perpetuation of (discriminatory) stereotypes. Furthermore, the effects of the algorithm are invisible to the individual user, as users will generally be unable to establish why an algorithm shows them specific advertisements.

Annelieke Mooij is an assistant professor of administrative and constitutional law at Tilburg Law School. She is interested in law and technology and writes on algorithms, virtual reality and FinTech.
