“AI is no wonder tool”

Artificial intelligence can make our lives easier in many ways. But the technology also harbors many dangers. Legal scholar Florent Thouvenin is working with academic partners from across the globe to develop ideas about how AI could be optimally regulated.
Roger Nickl; Translation: Gemma Brown
“Chatbots such as ChatGPT can compute a great deal of information very quickly – but they can’t understand or think and they don’t have a will of their own,” says Florent Thouvenin, professor of information and communications law. (Picture: istockphoto)

When US firm OpenAI launched its chatbot, ChatGPT, late last year, it took the world by storm. Many people were surprised at what is possible with artificial intelligence. For example, the chatbot can be used to generate texts of varying levels of sophistication and to summarize scientific papers, as well as to write code and translate it into another programming language. The initial euphoria about the potential to make work easier was soon followed by alarm bells, because while the chatbot can simulate intelligent behavior, it sometimes comes out with utter nonsense.

Given the rapid advancement of artificial intelligence and the societal risks associated with the powerful technology, an open letter from the US Future of Life Institute has called for a six-month pause in the training of AI systems that are more powerful than GPT-4, so that the software can be made more transparent and trustworthy. Signatories of the public statement include prominent figures such as Israeli historian and author Yuval Harari and entrepreneur Elon Musk.

Chatbots can’t think

One person who didn’t sign is Florent Thouvenin. The UZH legal scholar has been working for many years on the impact of algorithmic systems and artificial intelligence on society and the associated challenges for the legal system. Thouvenin is professor of information and communications law and heads up the UZH Center for Information Technology, Society and Law (ITSL). He is skeptical about the pause called for in the open letter. “AI is no wonder tool,” says the legal scholar. “Yes, chatbots such as ChatGPT can compute a great deal of information very quickly – but they can’t understand or think and they don’t have a will of their own.”

Thouvenin mainly sees the many opportunities that the new technology offers. He believes it is important that artificial intelligence applications are regulated so that the opportunities can be harnessed and the risks minimized. He and his colleagues already gave the matter some thought in a position paper published by the UZH Digital Society Initiative (DSI) in 2021 (see box). He is now working on the AI Policy Project with partners in Japan, Brazil, Australia and Israel to analyze how different legal systems are responding to the major advancements in the development of AI. The project examines countries that – like Switzerland – need to think carefully about how they want to position themselves in relation to the regulatory superpowers of the EU and US in order to promote the development of this technology while protecting their own citizens from the downsides.

The political discussions on this key topic are still in the early stages in many places, including here in Switzerland. Regulation in the EU is at the most advanced stage. In June, the draft AI Act, the world’s first legislation on artificial intelligence, was adopted by EU parliamentarians. Representatives of the member states and the EU Parliament have now agreed on the main features of the act. The EU’s AI Act focuses on the risks involved in artificial intelligence, which it divides into four categories, ranging from unacceptable risk (this includes, for example, AI systems that can be used by law enforcement authorities to identify people in public places in real time using remote biometric identification) to low-risk applications. Chatbots such as ChatGPT would still be allowed under this legislation, but they would have to be more transparent (for example, it would have to be possible to recognize deepfakes).


There is a danger that AI legislation will hold back the technology without resolving the problems.

Florent Thouvenin
legal scholar

Florent Thouvenin takes a critical view of the European Union’s proposal. “In its AI Act, the EU is trying to regulate the actual technology,” he says. “That requires us to first define what artificial intelligence is.” This appears to make little sense, as the technology is developing rapidly and the definitions, along with many of the standards included in the legislation, will soon become obsolete. This issue already became evident during the work on the drafts of the AI Act, in which different definitions of artificial intelligence were used. Just as a definition had been agreed, along came ChatGPT and the definition had to be completely revised. Thouvenin says there is a danger that the AI Act will hold back the development and use of the technology and will generate a great deal of administrative effort, without resolving the specific problems.

One such problem is discrimination, for example in job searches. Large corporations already use AI systems for recruitment purposes. These systems can discriminate against certain people if they have been trained on data that contains bias. A well-documented example is that women are discriminated against for jobs in IT because the data on which the systems have been trained show that in the past more men than women have been hired. “This is a problem,” says Thouvenin, adding: “We need to find solutions to this and to similar concrete issues.” For example, a new principle could be added to data protection law under which no one may be discriminated against in a legally relevant way on the basis of their personal data. Thouvenin firmly believes that AI doesn’t mean Switzerland’s legal system needs a rethink, but that efforts need to be made to ensure that existing rules and regulations also work in this context. Some rules and laws would need to be adapted to reflect the new possibilities that AI has opened up. But for others, he believes that it is sufficient if the courts apply existing rules to the new phenomena in a sensible way.
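To make the mechanism concrete, here is a minimal, hypothetical sketch – not from the article, with invented data, feature names and numbers – of how a screening model trained on biased historical hiring decisions simply reproduces that bias:

```python
# Hypothetical illustration only: a toy screening model trained on invented,
# historically biased hiring data. It shows how bias in the training data
# ends up encoded in the model's learned weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

gender = rng.integers(0, 2, n)       # 1 = male, 0 = female applicant (invented encoding)
skill = rng.normal(0.0, 1.0, n)      # an applicant's actual qualification
# Past decisions favored male applicants regardless of skill -- the bias.
hired = ((skill + 1.5 * gender + rng.normal(0.0, 0.5, n)) > 1.0).astype(int)

X = np.column_stack([gender, skill])
model = LogisticRegression().fit(X, hired)

# The weight learned for the gender feature is strongly positive: the model
# has absorbed the historical preference and would reproduce it when
# ranking new applicants.
print("weight on gender:", round(model.coef_[0][0], 2))
print("weight on skill: ", round(model.coef_[0][1], 2))
```

In a real recruitment system the bias is usually far less explicit, but the principle is the same: a model that optimizes for reproducing past decisions also reproduces their flaws.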

Developing new ideas

Switzerland has yet to start analyzing the challenge of AI in depth and developing appropriate legal solutions. Many other countries are in the same situation. “Countries in other parts of the world often have a very different way of looking at the issue of AI than we do here in Europe,” says Thouvenin. He therefore believes it is helpful for the upcoming political discussions to see how different legal systems and cultures handle AI. To examine this diversity, he and his Zurich colleague Peter Picht launched the AI Policy Project in partnership with Kyoto University in Japan. A small network has since developed that also includes researchers from Australia, Israel and Brazil.

“In Japan, for instance, people see AI very differently to how we do,” says Thouvenin, “above all, the Japanese have pinned high hopes on the technology.” And unlike in Europe, the discussion about AI there revolves much less around the individual and much more around the collective. The legal scholar realized this when he discussed the risk of manipulation by artificial intelligence with his Japanese colleagues. “For us, in this regard it’s more about the individual and their autonomy of thought and action,” says Thouvenin, “and we have a real problem if that is restricted.” In Japan, the individual’s autonomy is less crucial, and people would find the manipulation of citizens entirely reasonable if it benefited the whole of society. One example is digital nudging, in which people are given certain information that steers their behavior in a desired direction. Thouvenin firmly believes that such a view of AI – one that is so unfamiliar to us – can also enhance the debate in Switzerland. “A global perspective can make it easier for us to better gauge our scope for action, and it can help us develop new and interesting ideas on dealing with AI.”

The researchers in the AI Policy Project are currently developing a website to compile approaches and ideas that are under consideration in the participating countries. The positions of other countries are to be added to the website in the future. The aim of the platform is to stimulate regulatory discussions internationally and to support decision-makers in policy-making, business and associations in addressing the issue in a nuanced and informed way. This also applies in Switzerland, where the Federal Administration is developing a policy blueprint and will highlight the action required and potential measures by the end of next year.
