A Matter of Coherence?

Editor’s note: This post is part of the EJIL:Talk! Symposium on ‘Expanding Human Rights Protection to Non-Human Subjects? African, Inter-American and European Perspectives.’

Advocates of so-called robot rights argue for the inclusion of artificial intelligence (AI) in human rights protection from two fundamentally different perspectives. The first argument for coherence runs as follows: if courts treat corporations like humans by granting them human rights protection, they should do the same for robots. Corporations operate through human agents, boards of directors, and shareholders, with corporate governance providing a structured decision-making process. In contrast, robots and AI systems function on the basis of pre-programmed algorithms, machine learning, or autonomous decision-making, without comparable human agency. This concept of coherence is thus not based on the comparability of the internal mechanisms of corporations and AI, but rather on functional considerations. From this perspective, the original reasoning for corporate legal personhood – treating corporations as legal entities to enable their functionality and economic benefits – should be applied to robots as well.

When compared to human beings as the primary bearers of human rights, AI and corporations show a similarity: neither AI nor corporations ‘breathe’ or experience suffering in the way that humans do. A corporation, defined as a for-profit entity with limited liability and independent legal personality, is an artificial person – just like robots or, more generally speaking, autonomous AI systems. As illustrated in the introduction to this Symposium, the European Court of Human Rights (ECtHR) generally respects corporate legal form and grants corporations different rights under the ECHR (inconsistencies noted here). In contrast, the Inter-American Court of Human Rights rejects ‘corporate human rights’ and only provides derivative protections. The ECOWAS Court and the European Court of Justice align with the ECtHR, while the African Court on Human and Peoples’ Rights has strengthened corporate accountability but has not clearly ruled on granting corporations human rights protection. Thus, corporate-related coherence demands for extending human rights protection to AI seem less pressing in the Inter-American and African systems, but they are already relevant in Europe.

However, the following second demand for coherence made by robot rights advocates applies to all regional human rights systems. This claim goes as follows (p. 579): ‘(…) (I)f we determine at some point that robots may possess moral personhood, based on certain criteria such as rationality, intelligence, autonomy, (machine) consciousness, self-awareness, and sentience, then human beings might be forced to recognize their moral and legal rights (including their “human rights”). (…) (Robots, P.W.) could no longer be our mindless tools; instead, human beings would be morally obligated to recognize their status and rights and to treat them accordingly.’ This coherence demand is more fundamental because it, I quote (p. 1), ‘probes the boundaries of and taken for granted assumptions about qualities supposed to be quintessentially human’. In altering our understanding of the human element in human rights protection, this claim affects all judicial institutions dealing with human rights protection, including all regional human rights courts. While drawing parallels between corporations and AI refers to functional reasons to expand human rights protection, relating human rights protection to properties of AI such as rationality or autonomy points to moral grounds.

In this post, I will address both types of coherence demands. First, I will examine the parallels often drawn between human attributes and AI, arguing that the moral justification for granting robot rights equivalent to human rights is unconvincing for several reasons. I consider a functional perspective on the question of necessary AI protection to be more persuasive. From a functional perspective – one that views human rights protection not necessarily as morally grounded but as a politico-legal concept – the recognition of AI entities as subjects of human rights should be assessed based on whether it serves as a tool or an obstacle in addressing the governance challenges posed by AI. In the second part of my post, I will discuss this perspective and take a comparative look at potential protection approaches for AI and corporations.

A case for robot rights? Moral and functional-political approaches

Posthumanism and the moral status of AI entities

I will start with the question of moral obligations to grant human rights protection to AI due to AI qualities that are comparable to human qualities. Morally grounded claims for robot rights go in different directions and relate to different kinds and qualities of AI. Some are of current interest and relevance because they deal with AI that already forms part of some people’s daily lives. Think, for example, of humanoid social robots that, as empirical studies show, provide emotional support to humans such as older adults living alone. Alternatively, think of generative algorithms used to automatically generate creative artifacts like music, digital artworks, and stories. A painting machine or a translation robot learns by analysing numerous examples and deriving a general pattern or rule from them. Once the learning process is complete, it can independently apply these insights to new situations. Some academic voices advocate extending human rights protection to AI of this kind. For others, the idea of granting the right to marry to a humanoid robot – and consequently applying the human right to family life to human beings and robots alike – is a realistic scenario for our time. The same holds true for claims to overcome the anthropocentric nature of current copyright law, which is perceived as disqualifying AI creative processes as being of a lesser degree and importance than those of humans (see here). In line with this is the claim to ‘let the robot speak’ (see here for the quote) and to apply the protective scope of freedom of expression to machine-learning-generated utterances.
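For readers less familiar with the mechanics, the ‘learning from examples, then generalising’ process referred to above can be illustrated with a minimal, purely illustrative sketch in Python. It assumes the scikit-learn library and uses invented toy data; it does not describe any of the systems discussed here, only the basic two-phase pattern of deriving a statistical rule from labelled examples and then applying it to unseen input.

```python
# Minimal sketch, assuming scikit-learn is installed; the example texts and
# labels below are invented purely for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training examples: short texts labelled by theme.
examples = ["sunset over the sea", "storm clouds at dusk",
            "invoice due next month", "quarterly revenue report"]
labels = ["landscape", "landscape", "business", "business"]

# Learning phase: derive a general pattern (word statistics per label).
vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(examples), labels)

# Application phase: the trained model classifies a new, unseen input.
new_text = ["rain over the harbour"]
print(model.predict(vectorizer.transform(new_text)))  # e.g. ['landscape']
```

Real generative systems are of course vastly larger, but the basic structure – parameters fitted to human-provided examples, then reused on new input – is the same.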

A future scenario deals with what is called ‘Artificial Super Intelligence’. This is the idea that, one day, an AI system will have a self-aware consciousness that is able to solve problems, learn, and plan for the future. It also includes human-like cognitive abilities and the display of personality, including the ability to learn like humans and to possess a sense of imagination, thereby thinking beyond solving problems to consider future needs. In this scenario, AI will have superpowers, exceeding current human intelligence levels and enabling it to train other computers while being aware of its own limitations (for characteristics see here). Advocates of robot rights are already thinking about what the ‘life’ of future beings such as intelligent synthetic humanoid robots could look like alongside the so-called ‘baseline’ human (quote from here). Some claim that future AI will not only be entitled to human rights, but will even have to be accorded a higher moral and legal status than human beings. This line of reasoning is based, I quote (see here, p. 181 et seq.), ‘on a particular view of personhood according to which cognitive capabilities (e.g., rationality, intelligence, autonomy, self-awareness) are most decisive in determining the moral status of different species, such as human beings and animals, as well as within each species.’ Applied to existing human rights documents, it is argued that rights such as Article 2 of the European Convention on Human Rights, stating that ‘[e]veryone’s right to life shall be protected by law’, or Article 14, referring to discrimination based on ‘status’, are already open to synthetic persons and statuses. From this perspective, denying human rights personhood to an ‘intelligent being provably in possession of the necessary qualities – including sentience, self-awareness, moral agency, and narrative identity – would render the basis of our understandings of personhood meaningless’ (see here, p. 483). What robot rights claims related to existing and future AI have in common is a posthumanist perspective that radically questions traditional views on the human element in human rights protection.

I’m skeptical of morally grounded robot rights based on AI properties. My concern is that AI, both existing and future, necessarily lacks inherent characteristics and skills of its own. I’m not talking about cyborgs (humans augmented by mechanical components), but about AI without human corporality, like algorithms, or androids (made from a flesh-like material to look human but without human parts). Neither generative algorithms nor super-intelligent humanoid robots, which could form long-lasting relationships with humans, can exist or survive without human input. AI is human-made and dependent on humans. As has been stated, AI systems ‘are never fully autonomous but always human-machine systems that run on exploited human labor and environmental resources. They are socio-technical systems, human through and through—from training data to societal uptake after deployment’. Developers of social robots actively choose to integrate them into human social environments and design them to look and act like humans. Both designers and users then ‘tend to anthropomorphize such robots as they interact with them, ascribing to them anthropomorphic features such as personality, aliveness, and so on.’ (quote from here, p. 2049).

This shows that robots’ human-like abilities originate from human decisions, even if AI’s decisions are unpredictable and difficult to trace, giving AI systems their so-called ‘black box’ nature – and even if recent studies indicate that AI agents are capable of striving for their own survival and can ‘strategically introduce subtle mistakes into their responses, attempt to disable their oversight mechanisms, and even exfiltrate what they believe to be their model weights to external servers’ (quote from here). This dependence on external origin and input contrasts with the moral justification of human rights for human beings, which denies that human rights depend on external recognition or group membership. While humans come to life through procreation and are also influenced by external factors like education and material resources, human properties differ from those of AI in that the human capacity for choice and freedom from domination is inherent, not externally programmed (on this here). I agree with the view (paraphrase from here, p. 153) that relying on ontological properties to determine psychological/moral personhood, moral status, and moral rights is problematic, as there is significant disagreement about which qualities are essential for moral personhood and how the presence of these traits in an entity can be empirically confirmed. My point is not that AI entities are not like humans, but rather that they lack any inherent properties at all. Animals, by contrast, do have inherent properties, and from a moral perspective, differences between animals and humans might be less significant (on this here and here).

Another restriction arises when we look at human-AI relationships (on the details of the ‘properties-based approach’, as compared to this kind of ‘relational approach’ to AI rights, see here, p. 16 et seq.). As stated in AI ethics (on the following quotes see here), it seems prudent to take a critical stance towards both a ‘naïve instrumentalist’ view of robots and an ‘uncritical posthumanist’ view. The first stance questions the understanding of robots as mere machines and instruments for human purposes, because when interacting with robots, the psychology of users does indeed lead to ‘perceiving the robot as a kind of person’ (ibid). The problem with an ‘uncritical posthumanist’ focus on the otherness and social-cultural construction of robots is that it leads, I quote, ‘to ignoring their origin in human and material practices’ (ibid). In line with what I described as a lack of inherent qualities, this relates to the fact that robots might be socially relevant, but they are also machines made by humans (ibid). A way out of these extreme positions is to accept that robots are instruments, but instruments of a particular kind: ‘instruments-in-relation’, connected to humans and the social-cultural fields in which they operate (ibid). To put it succinctly: when humans care about a thing, this thing only has a ‘derived moral status’. This understanding of AI as ‘instruments-in-relation’ points to the relevant human rights protection need: the human interest in being able to enter into and maintain these kinds of social relationships with an AI entity.

From this perspective, human rights are relevant to realise the human interest in AI – for example, in marrying a love robot or in appointing a care robot as heir. In this scenario, as practiced by the Inter-American Court of Human Rights with regard to the protection of corporations, the human rights protection of robots is only of a reflexive and derivative nature. There is no morally grounded reason to acknowledge AI as an independent human rights subject in this constellation.

Functional rationale for human rights subjecthood of AI and relevant considerations

As mentioned in the introduction, the search for inherent qualities of AI that justify making a moral case for human rights protection of AI is only one approach. In looking for reasons that justify this type of legal protection, we can go further and address functional, instrumental goals pursued with a rights-based approach, since human rights can be ‘conceived as moral rights or as politico-legal concepts’ (quote from here, see also here). In this regard, the comparison to corporations, which are also afforded human rights protection based on purely functional considerations, is pertinent. In the following, I do not want to draw final conclusions as to whether it might make sense and be justified at a certain point in time to recognise AI rights at the level of human rights for functional – or ‘utility’ (p. 159) – reasons. What I want to do is to highlight the complex spectrum of considerations that have to be taken into account in making this decision.

(1) The need for a clear picture of human rights threats resulting from AI

Before granting human rights protection to AI, we need to have a very clear understanding of how AI works and how the autonomous aspect of AI actions can cause human rights threats and violations. This consideration is not new. In 2020, Diane Desierto referred in her EJIL:Talk! post to the threefold threat scenarios to human rights associated with automation and AI, namely the ‘challenges to the formation and communication of individual consent’, ‘the challenges to autonomy, personhood, and self-determination’, and the challenges to our human dignity, understood as ‘our equal moral worth as persons’. Since then, nothing has improved or relaxed regarding these threat scenarios; on the contrary, the situation has worsened with technological advancements and the widespread use of large language models (LLMs) like ChatGPT. At the end of 2023, the Office of the UN High Commissioner for Human Rights published a ‘Taxonomy of Human Rights Risks Connected to Generative AI’. The report clearly illustrates how generative AI can – for example – endanger the freedom from physical and psychological harm, the right to equality before the law and the protection against discrimination, the right to privacy, and freedom of expression. It highlights that generative AI models often overrepresent dominant cultural groups (e.g., white, Western, male), leading to the misrepresentation or underrepresentation of others, reinforcing harmful stereotypes and biases, and limiting marginalized groups’ control over their identities online. In an ‘International Scientific Report on the Safety of Advanced AI’ published in May 2024, a group of 75 AI experts from all over the world state that developers still know little about how general-purpose AI models function, as these models are trained rather than programmed. With trillions of parameters, their inner workings are largely opaque, even to developers. While techniques to explain and interpret these models exist, this research is still in its early stages. The experts refer to existing technical approaches that help to reduce human rights risks, for example methods for reducing model bias or methods that make general-purpose AI less likely to respond to user requests causing harm. However, this hope is limited since, as they admit, there are no existing techniques that currently provide quantitative guarantees about the safety of advanced general-purpose AI models or systems (p. 83). From this perspective, it seems more than premature to think about the human rights capacity of an entity before its potential to violate human rights can be clearly understood.

(2) Regulation and the correlation between legal personhood and human rights

This leads to the second aspect we should take into account. The legislative decision to recognise AI entities as legal persons, and even as subjects capable of holding human rights, should only be taken once the legal boundaries and responsibilities of and for these entities have been clearly defined. Ex-post regulation is possible, as demonstrated by current initiatives aimed at promoting corporate accountability for human rights violations, which followed the recognition of corporate rights entitlement – whether through binding legal obligations like the EU CSDDD or through adjusting investment protection to address human rights. However, it is preferable to find effective approaches to, and political consensus on, regulating AI before granting it legal empowerment (on the interplay between human rights entitlement and corporate accountability see the Symposium post by Michael Waibel and Rebecca McMenamin). This issue is highly complex, not only from a technical standpoint but also from legal and political perspectives, in particular with regard to the allocation of responsibilities for autonomous AI in the face of a multitude of actors. Very similar to the way we deal with corporate responsibility at the level of domestic rules on liability, the idea has emerged of granting legal personhood to so-called ‘ePersons’, thereby making ‘The Artefact (..) a Liability Subject’. In 2017, the European Parliament envisioned creating a legal status for robots, granting the most advanced autonomous robots the status of ‘electronic persons’ responsible for any damage they cause, and possibly extending this to cases where robots make independent decisions or interact with third parties. At the time, this proposal was met with strong criticism from the AI scientific community and was not taken up by the EU Commission. A key criticism was that a new liability subject can only be considered if it has the financial resources to cover damage claims; otherwise, risk would be externalised to the benefit of the parties whom the new entity shields from liability, i.e., manufacturers and operators (see here, p. 8). Opinions differ on the question of whether and how this problem could be solved. One approach discussed is to introduce compulsory insurance for AI, financed by all the parties involved in the production and employment of an AI agent, such as the product designers, software developers, manufacturers, and even its owners and users. We will see whether (or when) the ePerson approach is taken up again in legislative processes.

The key question is whether recognising the legal personality of AI at the national or regional level would automatically lead to AI being granted human rights protection at the international level (see, for a similar consideration, Tim Eicke’s Symposium post). Let’s consider this by means of a thought experiment. If, for example, e-personality allowed AI entities to enter into contractual relationships, to own assets, and to become parties to legal proceedings, wouldn’t this status at the level of domestic or supranational law have to have consequences for the international human rights obligations of states? A priori, there is no apparent reason why states should be allowed to expropriate an e-person without compensation or to deny it the right to a fair trial. In this constellation, expanding human rights protection to AI could have disciplinary effects on states and foster the international rule of law (see my reflections on this topic with regard to corporations here). However, as I will explain in my third aspect for consideration, I don’t think that this would be enough to justify the human rights status of AI.

(3) A quoi ça sert? Defining political objectives for recognising human rights subjectivity

A point of reference for potential political objectives and utility considerations is, again, the fundamental rights protection of corporations in Europe and its origins. The political decision to integrate corporations into the European human rights system was not taken by the Court itself, but, as described in the Symposium post of Tim Eicke, by the member states of the Council of Europe when drafting the Convention and the first Protocol in the 1950s. Article 34 ECHR allows, with an open wording, applications from any ‘non-governmental organisation’; Article 1 of Protocol No. 1 to the ECHR explicitly extends the right to property to ‘(e)very natural or legal person’. According to the travaux préparatoires, the drafters understood the European Convention from the very beginning as a commitment to the liberal state of the West against new dangers in the shape of a rapidly expanding communist East (on this see here). As I have elaborated elsewhere, the European approach to protecting the fundamental rights of corporations was – and continues to be – part of a political and economic integration project. Is there a similar interest in granting this status to AI, too – in Europe, or in other world regions?

In March 2024, the United Nations General Assembly reached a first milestone on the way to a global approach and adopted its first resolution on the topic of artificial intelligence. However, as outlined in a recent blog post, the process of drafting this resolution revealed significant political divides between world regions and individual states regarding AI: for instance, the United States, home to several leading AI corporations, emphasised advancing business interests to drive innovation and revenue, highlighting AI’s transformative potential for global industries. In contrast, the European Union focused primarily on its stringent data privacy and user protection standards, as exemplified by the recently introduced EU AI Act. States from the Global South, on the other hand, raised concerns about accessibility and inclusion in AI advancements, emphasising that benefiting from AI requires internet access – a resource unavailable to 33% of the global population. This brief glimpse into the diverse negotiating positions leading up to the adoption of the UN resolution underscores the stark differences in the political perception of AI.

We do know that the use of AI can have very beneficial effects for societies all over the world. To give a concrete example: it is reported that in the Zanzibar archipelago of Tanzania, rural farmers are using an AI-powered app called Nuru, which operates in their native Swahili language, to detect a harmful cassava disease before it spreads. In addition to social benefits of this kind, the use of AI has obvious economic benefits: it is estimated that AI applications could contribute up to 136 billion USD in economic benefits to four sub-Saharan countries (Ghana, Kenya, Nigeria, and South Africa) by 2030. These examples show that fostering the human rights-friendly development and use of AI is certainly a political objective shared by most states in the world. Yet even this overarching goal of fostering innovation for the benefit of individuals and the economy does not serve as a functional or political justification for granting autonomous human rights to AI entities. In my view, the human-centred, derivative protection of AI is sufficient to achieve this objective. Within the European context, corporations developing AI are already safeguarded by fundamental rights, making the creation of a separate legal entity unnecessary. Similarly, in human rights systems such as the Inter-American system, the protection of innovation can be mediated and realised through the protection of human beings. Simply put, if an AI developer possesses the human right to work, to create, and to conduct business, there is no need to extend these rights to AI as an independent subject.

As things stand, I see no moral or legal-political justification for extending human rights protection to robots or other AI entities. However, I acknowledge that the regulation of AI is still in its infancy and that the concept of the ePerson may one day become a legal reality at the domestic or regional level. In such a scenario, (European) human rights systems that have already opened their doors to legal persons might face demands for coherence and must be well prepared to address this question.
