UZH News

AI and Fairness

Becoming Smarter Together

From checking loan applications to selecting job applicants, decisions are increasingly being made with the aid of artificial intelligence. While AI doesn’t make them any fairer, it can make us wiser, says ethicist Markus Christen.
Interview: Roger Nickl
"Fairness issues in AI systems are mathematically inevitable," says Markus Christen, managing director of the UZH Digital Society Initiative (DSI).

Markus Christen, you explored the opportunities and risks of artificial intelligence in a large-scale study. How will AI benefit us?

Markus Christen: There are countless different predictions. But what they all agree on is that AI will change our lives. What this actually means is much less clear. We have to be aware that the term artificial intelligence covers very different technologies and applications, from industrial production to chatbots. In this sense, the term is very vague. In the TA Swiss study (see box), we mainly looked at automated decision-making using AI systems. Such systems are used to process loan applications, to select job applicants and in self-driving vehicles, for example. The question is what impact it will have on our lives if AI helps us make decisions or makes them for us.

Why do we even focus on AI when it comes to making decisions?

Christen: The trend toward data-driven decision-making is very widespread. It is based on the ideal of rational, objective and fair judgment, as opposed to the sometimes irrational and prejudiced decisions we humans tend to make. Whether purely rational decisions are also humane is debatable. If, for example, loan decisions are in future made solely by AI on the basis of rational criteria, this also narrows the range of options, which could be a problem. Humans do sometimes make wrong decisions, but this is not necessarily a bad thing for the overall system. As we know from science, taking the wrong path can unexpectedly lead us to the right destination.

If AI systems decide on issues that matter to our lives, we also hand over control and responsibility, which people find disconcerting. Where do you stand on this?

Christen: To me, the fear that we will lose control and that machines will decide everything in future is exaggerated. Ultimately, AI systems are designed by us; they embody our design decisions. The systems cannot design themselves, as they lack the required consciousness. What's more, unlike us, they do not have desires. We deploy AI according to our desires, because it can complete certain tasks better than we can. Finally, even the most autonomous AI needs to be reviewed constantly. We need to continuously test whether the system is actually doing what it is designed to do. This is a job that will always need to be performed by a human. A banker who relies on the recommendations of AI to make mortgage decisions must ultimately know when they can trust the system and when they can't.

Current AI technology is based on adaptive algorithms. This makes it difficult to comprehend how the technology works because it is constantly changing as it learns. How can we deal with this problem?

Christen: Transparency is paramount when it comes to artificial intelligence, which is why research in the field of explainable AI is essential. Incidentally, for some AI applications there should be a duty to state reasons. The state, for example, can impose rules and obligations on citizens, and it must be able to justify why and how it is deploying AI in such sovereign tasks.

AI is supposed to make decisions fairer. Do machines really have better moral standards than us?

Christen: Let me give you an example: in 2016, the COMPAS algorithm made by US firm Northpointe made headlines. This AI system provides US judges with assessments of recidivism risk when they have to decide on the early release of offenders. ProPublica, an investigative journalism organization, looked into the way the system works and concluded that its predictions were racially biased. COMPAS predicted that African Americans were almost twice as likely to re-offend as white inmates, despite the fact that skin color was explicitly excluded as a criterion in the program. ProPublica therefore surmised that the developers had either been negligent when writing the algorithm or were implicitly racist.
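ProPublica's core finding, that predictions can differ by race even when race is excluded as an input, has a simple mechanism: other features can act as proxies for group membership. The following sketch uses made-up data (the feature, the rates and the score formula are illustrative assumptions, not Northpointe's actual model) to show how a "group-blind" score can still produce group-level disparities.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Group membership is never shown to the scoring rule below,
# but a legitimate-looking feature (here, a made-up count of
# prior arrests) is distributed differently across the groups.
group = rng.integers(0, 2, n)                         # 0 or 1, withheld from the model
priors = rng.poisson(np.where(group == 1, 2.0, 1.0))  # proxy correlated with group

# A "group-blind" risk score based only on the proxy feature
risk_score = 1 - np.exp(-0.5 * priors)                # monotone in priors, in [0, 1)

for g in (0, 1):
    print(f"group {g}: mean risk score = {risk_score[group == g].mean():.2f}")
```

Because the proxy feature is distributed differently across the two groups, the average score differs by group even though group membership never enters the calculation.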

And was the criticism justified?

Christen: No. Following ProPublica's revelations, researchers were able to show that the problem is inherent in the system itself: fairness issues in AI systems are mathematically inevitable. Different fairness criteria, which need to be defined when programming an algorithm, can be mutually exclusive. AI-based decision-making systems therefore never entirely eliminate bias, because eradicating one form of unfairness automatically introduces other forms of unfairness.
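The underlying mathematics can be illustrated with a short simulation. The sketch below is a hypothetical illustration, not the researchers' actual analysis: two made-up groups have different underlying base rates of re-offending, and a risk score that is perfectly calibrated by construction (it reports each person's true risk) is thresholded into a high-risk flag. The group names, the Beta distributions and the 0.5 threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(mean_risk, n=100_000):
    """Simulate one group. The 'classifier' reports each person's
    true risk, so it is perfectly calibrated by construction."""
    # Group-specific risk profile: Beta distribution with the given mean
    risk = rng.beta(5 * mean_risk, 5 * (1 - mean_risk), n)
    outcome = rng.random(n) < risk      # who actually re-offends
    flagged = risk >= 0.5               # "high risk" decision rule
    fpr = flagged[~outcome].mean()      # flagged among non-re-offenders
    fnr = (~flagged)[outcome].mean()    # not flagged among re-offenders
    return outcome.mean(), fpr, fnr

# Two hypothetical groups with different base rates of re-offending
for name, mean_risk in [("group A", 0.5), ("group B", 0.3)]:
    base_rate, fpr, fnr = simulate_group(mean_risk)
    print(f"{name}: base rate={base_rate:.2f}  FPR={fpr:.2f}  FNR={fnr:.2f}")
```

Even though the score is calibrated for both groups, the group with the higher base rate ends up with a markedly higher false positive rate; equalizing the false positive rates instead would break calibration. This is the trade-off formalized in the impossibility results that followed the COMPAS debate (e.g. by Kleinberg et al. and Chouldechova).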

So it’s an illusion to think AI can help us make fair decisions?

Christen: Yes. Philosophers have been grappling with the question of what is fair for centuries. As far back as Ancient Greece, Aristotle established that there are different forms of fairness, or justice. We have to recognize that AI doesn't free us from these problems. We can't evade the question of which type of fairness is relevant. Through a new research project that our team has launched with colleagues from other universities, we want to raise awareness of fairness issues in the software community. The project aims to develop tools that teach developers in a fun way that building intelligent algorithms is not only about the IT aspects, but also about fairness.

What's the point of AI systems like COMPAS if they can’t actually make fairer decisions than us?

Christen: Systems like this can perhaps make us aware of our own preconceptions and systematic errors. AI can't make decisions for us, but it can make recommendations, offering a sort of second opinion. Fundamentally, I believe that AI has the potential to make us wiser. Precisely because it works very differently from us, and because it can pool and process huge amounts of data that we don't have, it gives us the opportunity to hold a mirror up to ourselves. This is a positive thing, particularly when it comes to complex decisions. The crucial factor is that there must always be human-machine interaction. Machines that decide autonomously are not desirable.

What role will AI decision-making systems play in future?

Christen: Transparency and clear rules are crucial if we are to engage positively with artificial intelligence. Deployed effectively, AI could become a valuable companion in our thinking, acting like a good friend who suggests things we wouldn't have thought of ourselves. Together we can become smarter.

Does artificial intelligence make us dumber than we are? Or will it help to solve humankind's problems? In the video, UZH experts provide answers to these questions. (Video: UZH Communications/Information Technology, MELS)