AI+X Summit

Using AI Responsibly

Artificial intelligence (AI) is permeating more and more areas of life. Scientists at the University of Zurich (UZH) are developing new solutions while also examining the consequences for our society. At the AI+X Summit, UZH researchers will be presenting their projects.
Theo von Däniken; Translation: Michael Jackson
How can AI assistants in medicine be better trained so that they can provide effective support to doctors? (Image: iStockphoto, andresr)

For many people, ChatGPT has become a partner they engage with every single day. Even doctors are drawing on the knowledge of artificial intelligence when making their diagnoses. “The AI models are pretty good at answering questions about common diseases,” explains Janna Hastings, a computer scientist and assistant professor of medical knowledge and decision support who is researching the use of AI in clinical settings. “But when it comes to specialized questions, the kind of things that crop up in everyday clinical practice, they perform less well.”

The models do not adequately reflect factors such as gender and age, or particular clinical conditions. This is often because the data is not specialized enough or is biased, says Hastings. She is researching how these two weaknesses could be addressed so that one day AI may genuinely ease doctors’ everyday workload.


The AI models are pretty good at answering questions about common diseases. But when it comes to specialized questions, the kind of things that crop up in everyday clinical practice, they perform less well.

Janna Hastings
Computer scientist

The big commercial AI programs like ChatGPT are not well suited to this. What is needed instead are highly specialized models tailored to the needs of a specific hospital or a specific disease. At present, Hastings is examining existing models to identify weaknesses so that this information can be used to improve them.

Training models better

“We’re identifying what additional data a model needs so it can deliver better results,” says Hastings. For example, a model may need to be fed with data sets that better represent the actual patient demographics. Or it may be worth evaluating the hospital’s existing records on the relevant conditions. One important aspect to consider here is data protection: “Each case must be considered very carefully to establish how this data can be used.”
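To make that kind of audit concrete, here is a minimal, purely illustrative sketch in Python of the type of check involved; the field names, threshold and figures are invented for this example and are not Hastings’ actual tooling. It flags patient groups whose share of a training set falls well below their share of the reference population:

```python
# Illustrative sketch only: flags patient groups that are under-represented
# in a training set relative to reference population shares. Field names,
# the tolerance and all figures are invented for this example.
from collections import Counter

def representation_gaps(records, reference_shares, field, tolerance=0.5):
    """Return groups whose share in `records` falls below
    `tolerance` times their share in the reference population."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total if total else 0.0
        if observed < tolerance * expected:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical data: women make up half the reference population but only
# a tenth of the training records, so the audit flags the gap.
records = [{"sex": "m"}] * 90 + [{"sex": "f"}] * 10
print(representation_gaps(records, {"m": 0.5, "f": 0.5}, "sex"))
# -> {'f': (0.1, 0.5)}
```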

But even then, the data will usually still need to be prepared in a specific way, for example because it exists in different formats, such as text, images or laboratory reports. “Bringing all this data together is often a major challenge for the models,” says Hastings, “because they don’t have any information telling them what weighting to give to the different data sources.”
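One common way to combine such sources, and a useful way to see why the weighting matters, is “late fusion”: each data type gets its own model, and the per-source scores are averaged with explicit weights. The sketch below is a hedged illustration only; the scores and weights are invented constants, whereas in practice the weighting is exactly the information that has to be learned or clinically validated:

```python
# Illustrative sketch only: "late fusion" of scores from separate models
# for text notes, images and lab reports. The per-source weights are
# precisely what the models lack out of the box; here they are invented
# constants rather than learned or clinically validated values.
def fuse_scores(scores, weights):
    """Weighted average of per-source probabilities for one diagnosis."""
    total_weight = sum(weights[s] for s in scores)
    return sum(scores[s] * weights[s] for s in scores) / total_weight

scores = {"text": 0.80, "image": 0.40, "lab": 0.65}   # hypothetical model outputs
weights = {"text": 0.5, "image": 0.2, "lab": 0.3}     # assumed source weighting
print(round(fuse_scores(scores, weights), 3))          # -> 0.675
```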

The use of systems to support clinical decisions is strictly regulated. The current Medical Devices Ordinance in Switzerland only permits AI to be used in a supporting role; it is not allowed to make any decisions that directly affect the patient.

Hastings advocates this cautious approach: “We don’t know how the models will behave in all possible scenarios.” That’s why she thinks it’s vital that there’s always a human monitoring the AI. “Otherwise, there would indeed be the potential for AI to cause real harm to humans.”

Is it a criminal offense to take your hands off the wheel and do something else in a semi-autonomous vehicle? (Image: iStockphoto, metamorworks)

Self-driving cars and the law

Legal expert Nadine Zurkinden explores questions of responsibility. The assistant professor in the Faculty of Law focuses on the legal regulation of autonomous vehicles and the associated risks of criminal liability. Since March of this year, an ordinance in Switzerland has regulated three areas of application for automated driving.

This legislation means that currently, for example, it is possible in Switzerland to register vehicles with an “Autobahnpilot”, an autopilot for motorways. When these systems are used, drivers are permitted to take their hands off the steering wheel. Although cars fitted with such systems are already available on the market, none has yet been registered in Switzerland.

Even in these semi-automated vehicles, the human driver still bears ultimate responsibility if a precarious situation occurs. However, the regulation covering this is somewhat contradictory. On the one hand, the driver may let the vehicle drive itself to a certain degree on the motorway, meaning they are allowed to take their hands off the wheel and do not have to ‘constantly monitor the traffic’.


Permitted risk means: If a manufacturer complies with all the requirements, it will not be liable to prosecution, even if it knows that its vehicle may be involved in fatal accidents.

Nadine Zurkinden
Legal expert

On the other hand, the regulation also states that the person at the wheel must be ‘in a position to take over operation of the vehicle again at any time’ and must not engage in any activities that might delay this. “It remains unclear how quickly the person needs to regain control of the wheel and what other activities, if any, they are permitted to engage in,” says Zurkinden.

Finding the right balance

With her research, Zurkinden is shining a light on these areas, where the rules are yet to be fully established, and highlighting where gaps remain. She is keen to find a balance in which the law protects people from harm without curbing innovation too much. “After all, the manufacturers tell us that these assistance systems are ultimately designed to lead to fewer accidents, injuries or fatalities on the roads.”

In essence, the key question is what level of “permitted risk” society is willing to bear, explains Zurkinden. Every technology carries risks and, even when humans drive vehicles, accidents that cause deaths and injuries do still occur. The concept of “permitted risk” was introduced into criminal law as part of industrialization: How great can the risk be that a certain technology will cause harm to people? “In relation to vehicles, this means that, if a manufacturer complies with all the requirements, it will not be liable to prosecution, even if it knows that its vehicle may be involved in fatal accidents,” explains Zurkinden.

Observing ethical principles

An important point is how AI systems are trained and programmed to perform these tasks. Essentially, the same principles that apply to humans should apply here too. This becomes relevant whenever the technology has to decide between different options in which people could be harmed, for example when the vehicle has to swerve and in doing so endangers either the driver or other people.

There can be no discrimination here – irrespective of whether a human or a machine is controlling the car, states Zurkinden. The programs must not treat certain groups differently from other groups, for example based on age. At the same time, the principle that a human being cannot be obliged by law to sacrifice their own life also applies.
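As a toy illustration of that non-discrimination principle, and not anything drawn from Zurkinden’s research, the sketch below chooses an evasive manoeuvre purely from harm estimates: attributes such as age or gender are deliberately absent from the data the routine can see. The option names and figures are invented:

```python
# Toy sketch only: an evasive-manoeuvre chooser that, by design, receives
# no protected attributes. The fields and numbers are invented; the point
# is that age, gender and similar group attributes never enter the
# comparison, mirroring the non-discrimination principle described above.
from dataclasses import dataclass

@dataclass
class Manoeuvre:
    name: str
    expected_harm: float  # overall physical-risk estimate, 0.0 to 1.0
    # deliberately no fields for age, gender or other group attributes

def choose_manoeuvre(options: list[Manoeuvre]) -> Manoeuvre:
    """Pick the option with the lowest expected harm, blind to who is affected."""
    return min(options, key=lambda m: m.expected_harm)

options = [Manoeuvre("swerve_left", 0.40), Manoeuvre("brake_straight", 0.25)]
print(choose_manoeuvre(options).name)  # -> brake_straight
```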


The advantage of universities is that they can approach critical issues with greater independence and without any entrepreneurial intentions.

Claudia Witt
Member of the Digital Strategy Board

Establishing trust in AI

“These two examples show the wide range of research into artificial intelligence taking place at UZH,” explains Claudia Witt, Professor of Complementary and Integrative Medicine, member of the Digital Strategy Board at UZH and one of the moderators at the AI+X Summit. With the newly created UZH.ai Hub network, the university wants to link these different strands together even more closely and make them more visible to the outside world. “With the many disciplines it covers, UZH is the ideal place to take a comprehensive, cross-disciplinary view of the development and use of AI,” says Witt.

It’s not just about the development of new technologies, but also about what they are used for and what consequences they may have. “This requires technical competence, as well as broad expertise from other fields such as law, ethics or business,” says Witt.

If AI applications are to be accepted by society and their potential exploited for everyone’s benefit, it’s essential for the solutions to be trusted. Research into a responsible framework and the consequences of using AI is an important element that will enable this trust to be established. Witt says: “The advantage of universities is that they can approach critical issues with greater independence and without any entrepreneurial intentions.”