Christoph Graber, what is artificial intelligence?
One of the key conclusions from our workshop was that there is no universally recognized definition of artificial intelligence – or AI for short. AI is currently most often used to refer to a form of machine processing of data using algorithms. But the participants at the workshop all agreed that, strictly speaking, artificial intelligence isn’t the right expression for this. The word intelligence evokes the wrong connotations. Intelligence refers to an achievement of the brain, or of human consciousness, and this also involves assigning meaning to the world as we perceive it. Machines may be able to perform certain tasks better and faster than people, but they follow orders and can’t produce meaning. That’s also why expressions such as machine learning are misleading. Learning, too, is an achievement of the brain.
Where did the term artificial intelligence originate?
The term was coined in the United States in the mid-1950s. People back then already claimed that humans would soon be outperformed by machines and their skills. When technological progress didn’t happen as had been hoped, the term all but disappeared from public debate in the 1980s. It resurfaced only about 15 years ago, and we’ve been experiencing an outright AI hype ever since. This raises a number of questions: Why the hype? Why now?
Because there’s been considerable technological progress in the meantime?
That may be, but how something is referred to always has a political effect as well. In this respect, we have to ask whether a term also has an ideological connotation: Does somebody have an interest in coining a certain term? Artificial intelligence also means business. People who make money with AI could thus have an interest in fuelling the hype around artificial intelligence. The narrative could be something like: AI serves the common good of humankind and can carry out tedious tasks for us. And while this is true, the hype has now grown so big that, for example, some have even founded a new religious group called Way of the Future, whose followers worship artificial intelligence as a god-like being. As a sociologist of law, this reminds me of Karl Marx, who observed that material things were sometimes accorded god-like status – Marx referred to this as commodity fetishism.
So our society’s enthusiasm for AI has gone over the top?
There are without a doubt areas where artificial intelligence makes sense. AI can perform tasks for us and expand our knowledge. But we mustn’t forget that there are also developments that are less welcome. When algorithms select interesting content online for us, we run the risk of getting caught in a filter bubble. Studies also show that algorithms can reinforce existing prejudices in society. For example, the prejudice in the USA that black people are more often delinquent than white people is being perpetuated by algorithms – precisely because algorithms don’t think, but recognize and reproduce existing patterns, including prejudicial ones.
As a sociologist of law, do you see any alarming AI developments in the area of law?
Yes. In the USA, certain politicians are currently discussing whether algorithms could replace experts, for example when it comes to predicting whether an offender will reoffend. Some go so far as to say that algorithms could even completely replace court trials and thus save costs. This is where I must urge caution. A trial can’t be delegated to a machine. Law isn’t a formal, closed logic; it is made up of terms that need to be interpreted. This is why every case has to be treated as a unique case – otherwise it won’t be fair. Machines are unable to take into account the complexities of life, simply because they lack consciousness.
Now that the workshop’s over, what are some of the next steps in research in terms of artificial intelligence?
Artificial intelligence will permeate all aspects of society. It’s therefore important that science adopts an interdisciplinary approach to the topic. We have to overcome the boundaries between disciplines and develop a common language. As a next step, we’ll define questions to be explored in joint research projects. It’s also important to give topics such as artificial intelligence and other technological developments greater weight in the education of lawyers. They’re guaranteed to be confronted with them in their professional lives.
Christoph Graber is professor of sociology of law with a special focus on media law at the University of Zurich. His teaching areas include sociology of law and legal theory, internet and media law, as well as art and cultural law. His research focus is on the normative effects of new technologies on the internet.
The workshop Philosophical Questions about AI, Law and Governance was organized by Christoph Graber’s team in collaboration with law scholar Urs Gasser. Gasser is a Director at the Berkman Klein Center for Internet and Society, an interdisciplinary research center at Harvard University, where he researches legal and societal issues surrounding internet technology. One of the center’s research projects explores the ethics and governance of artificial intelligence.