Appeal for Human Rights

Developments in artificial intelligence are continuing apace, dramatically impacting our lives in the process. In his guest lecture at UZH, UN Special Rapporteur Philip Alston explains how human rights offer an important normative framework for regulating artificial intelligence.
Nathalie Huber

Artificial intelligence (AI) promises to simplify many areas of human activity. The voice assistants Siri, Cortana and Alexa are based on artificial intelligence, as are Google Maps and parking assist systems in vehicles. However, the benefits and potential of AI have a flip side that should not be underestimated. Lethal autonomous weapons such as missile launch systems can identify and eliminate targets independently, enabling new forms of inhumane warfare. Social bots can manipulate the behavior of voters.

Artificial intelligence is the driving force behind digital advancement, and its impact on our day-to-day lives will become far more pervasive in the future. It is difficult to predict what far-reaching implications this will have for us. Philip Alston, professor at the New York University School of Law and UN Special Rapporteur on extreme poverty and human rights, spoke at UZH about the consequences of AI from a human rights perspective and the role that human rights should play in its regulation. He gave his lecture at the UZH Digital Forum, an interdisciplinary conference on the legal and ethical aspects of autonomous security systems organized by the UZH Digital Society Initiative (DSI).

Unanswered questions

Alston spoke first about lethal autonomous weapon systems, a field in which his previous mandate as UN Special Rapporteur on extrajudicial executions has made him a recognized expert. From a human rights perspective, he stated, lethal autonomous weapon systems are clearly to be condemned: Their use violates human dignity and the right to a fair trial. What’s more, many questions remain unanswered, such as who assumes political and legal responsibility for a mission and what criteria are permissible for identifying and engaging targets. As early as 2010, Alston advised the United Nations to examine the issue of lethal autonomous weapon systems more thoroughly from a legal perspective.

The bias of algorithms

Alston also used several examples to show that artificial intelligence can have a discriminatory effect. Self-learning algorithms can divide people into groups on the basis of probability calculations and correlations. A person’s individual circumstances are not taken into account; instead, the person is assigned on a statistical basis to a group with similar attributes, such as place of residence or income. This becomes problematic when, for example, judges in the USA rely on an algorithm’s probability calculations to rule whether or not to release a defendant on bail (see the sketch below). According to Alston, this leads to even greater discrimination against groups that are already marginalized.
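To make the mechanism concrete, the following minimal Python sketch shows how a purely group-based score of this kind behaves. All attribute names, weights and thresholds are invented for illustration; they do not reproduce any real bail-assessment tool.

```python
# Hypothetical sketch: a group-based risk score of the kind Alston criticizes.
# All attributes, weights and thresholds are invented for illustration.

GROUP_WEIGHTS = {
    # (residence area, income bracket) -> base "risk" weight. In a real
    # system such weights would come from historical correlations in the
    # data, not from facts about the individual being assessed.
    ("district_a", "low"): 0.8,
    ("district_a", "high"): 0.4,
    ("district_b", "low"): 0.5,
    ("district_b", "high"): 0.2,
}

def risk_score(residence_area: str, income_bracket: str) -> float:
    """Assign a score purely from group membership, ignoring the individual."""
    return GROUP_WEIGHTS[(residence_area, income_bracket)]

def recommend_bail(residence_area: str, income_bracket: str,
                   threshold: float = 0.6) -> bool:
    """Recommend release on bail only if the group-level score is below threshold."""
    return risk_score(residence_area, income_bracket) < threshold

# Two defendants with identical individual circumstances receive different
# recommendations solely because they live in different districts:
print(recommend_bail("district_a", "low"))  # False -> bail denied
print(recommend_bail("district_b", "low"))  # True  -> bail granted
```

The discrimination Alston describes follows directly from the design: Because the score is computed from group attributes alone, everyone who shares a place of residence and income bracket is treated identically, and any historical bias encoded in the weights is applied to them wholesale.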

Nonbinding ethical guidelines

Alston concluded his talk with one overarching question: “What role do the norms and institutional architecture of international human rights have in the development of appropriate regimes to govern the evolution of artificial intelligence?” He has observed a shift away from human rights toward ethical guidelines that are not legally binding, a development he considers questionable. The experts who create these ethical standards are frequently representatives of the tech industry. They claim the legal framework is too inflexible, whereas ethical guidelines can be adapted to the novel situations that are emerging through the application of AI.

However, Alston countered that ethical principles are not binding norms and rest on differing philosophical theories that can be debated endlessly.

He emphatically warned against allowing the regulation of AI to fall into the hands of stakeholders from large tech enterprises such as Google or Facebook. “The big tech firms now know more about us than our parents or partners do,” said Alston. According to Alston, companies that collect vast quantities of personal data and exploit it commercially are contravening an important human right – the right to privacy.

He urged anyone truly interested in the future viability of AI to look toward human rights. “Human rights offer very significant legitimacy, particularly to those working in the field of AI,” said Alston. Not only do they protect privacy and human dignity, he explained, they also guard against discrimination. Furthermore, human rights are founding principles of international law, and legal action can be taken in almost all countries if they are breached.