UZH News

Media change

“Selling our data soul”

Algorithms accompany our every click on the internet. Facebook and Google use them to analyze our online behavior. Communications expert Michael Latzer researches what algorithms do and how they shape our view of the world.
Interview: Thomas Gull and Roger Nickl
Michael Latzer: “Online echo chambers aren’t prisons: We step into them voluntarily – and it’s sometimes good to hear others back up your opinion.”


Michael Latzer, what are algorithms?

Michael Latzer: Algorithms are programs that solve problems. The algorithms we’re interested in are those that are used by internet companies to select data and determine their relevance. By doing this, algorithms influence the way we construct our reality on a daily basis as well as our actions.

That sounds fairly abstract. Could you give an example?

Latzer: Let’s take Amazon. The online giant knows what I’ve bought in the past and uses this knowledge to provide me with suggestions for other, similar products. These suggestions are based on an algorithm that analyzes my purchasing habits and matches them with Amazon’s products. It’s a recommendation system controlled by algorithms. Facebook uses a filtering system to show me a selection of my friends’ status updates. Spam filters, too, are based on algorithmic selection, separating wanted from unwanted messages. What all these systems have in common is that they select content for us, assign relevance to it, and thus systematically present us with only a part of what’s available on the internet.
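
As a rough illustration of the kind of logic such a recommendation system might use – a minimal sketch, not Amazon’s actual method, with an invented catalogue and invented purchase histories – items can be ranked by how often they are bought together with what a customer already owns:

```python
# Illustrative only: a toy "bought together" recommender.
from collections import Counter
from itertools import combinations

# Invented purchase histories (one set of items per customer).
purchase_histories = [
    {"novel_a", "novel_b", "cookbook"},
    {"novel_a", "novel_b"},
    {"novel_b", "travel_guide"},
    {"novel_a", "cookbook"},
]

# Count how often each pair of items appears in the same basket.
co_purchases = Counter()
for basket in purchase_histories:
    for pair in combinations(sorted(basket), 2):
        co_purchases[pair] += 1

def recommend(owned, top_n=3):
    """Assign relevance to items the customer doesn't own yet,
    based on how often they co-occur with items they do own."""
    scores = Counter()
    for (a, b), count in co_purchases.items():
        if a in owned and b not in owned:
            scores[b] += count
        elif b in owned and a not in owned:
            scores[a] += count
    return [item for item, _ in scores.most_common(top_n)]

# Items most often bought together with "novel_a":
print(recommend({"novel_a"}))
```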

Is that problematic?

Latzer: It depends on how the algorithms are programmed, or in other words their purpose. A book recommendation, for example, is quite harmless: The worst-case scenario is that you buy a book you’ll find boring. But when algorithms are used to predict our behavior based on the traces we leave online, that’s when it gets tricky. There are harmless examples here too, such as when you look for hotels in Sardinia and then get online ads for hotels in Sardinia. But algorithms are already being used for entirely different purposes. There are programs that help judges estimate the likelihood that a person accused of a crime will reoffend, which is a very controversial topic. Or there are instruments to evaluate a person’s creditworthiness. And China has a social credit system that ranks its citizens using behavioral data collected online and elsewhere.

Which means citizens are effectively transparent?

Latzer: Exactly. And this has consequences, since the state then knows practically everything about its citizens and can use this knowledge to evaluate, and even punish or reward, them according to whether their behavior is deemed desirable. Algorithms are also used in hiring processes, for example to predict candidates’ future performance with the help of available data.

That’s an alarming outlook.

Latzer: It’s important to note that it’s not the algorithms that are responsible for this, but those who program and use them. As with other technologies, algorithm-based selection can also be abused to advance questionable interests and values.

This is why more and more people are calling for transparency when it comes to algorithms – in other words, for users to be told how algorithms are programmed, for example in search engines.

Latzer: But then there’s a transparency paradox: If you make algorithms transparent, the system no longer works. For example, if I know how a search engine works, I can manipulate it so that my page tops the list of search results. People are already trying to do this, with varying success, which is one of the reasons why algorithms, for example those used in search engines, are constantly being changed and adapted. This is why a certification or testing service for algorithms, as is being discussed in Germany, makes no sense and isn’t feasible.
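
The paradox can be shown with a deliberately crude, invented example – not how any real search engine ranks pages: once the ranking formula is public, it can be gamed.

```python
# Illustrative only: a toy ranking formula and how knowing it lets a page game it.

def rank_score(page, query):
    """Toy relevance: weight keyword occurrences and incoming links."""
    keyword_hits = page["text"].lower().count(query.lower())
    return 2.0 * keyword_hits + 1.0 * page["inbound_links"]

pages = [
    {"url": "example.org/guide", "text": "A balanced hotel guide for Sardinia.", "inbound_links": 8},
    {"url": "example.org/spam", "text": "hotel hotel hotel hotel hotel hotel", "inbound_links": 0},
]

# Because the formula is known, the second page simply repeats the keyword
# and outranks the genuinely useful one (score 12.0 vs. 10.0).
for page in sorted(pages, key=lambda p: rank_score(p, "hotel"), reverse=True):
    print(page["url"], rank_score(page, "hotel"))
```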

So we simply have to live with this lack of transparency?

Latzer: There are different degrees of transparency. If you drink a Coke, you don’t know its exact recipe, but you can check the bottle to see which ingredients have been used. The same kind of transparency would be possible for specific algorithms. For example, which data are used to predict the likelihood that a delinquent will reoffend? If it’s the postcode in their address, this can discriminate against people who live in a neighborhood with a bad reputation. In other words, there are certain cases where it can be important to get at least some information about how an algorithm is put together.
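
A sketch of what such “ingredient list” transparency could look like – with hypothetical feature names and weights, not taken from any real risk-assessment tool: the exact weights stay secret, but the inputs the score consumes are disclosed.

```python
# Illustrative only: disclose which data a risk score uses, not the full recipe.

DISCLOSED_FEATURES = ["age", "prior_offences", "postcode_risk_index"]  # the "ingredient list"
_SECRET_WEIGHTS = {"age": -0.02, "prior_offences": 0.60, "postcode_risk_index": 0.30}  # hidden

def reoffending_score(person):
    """Weighted sum over the disclosed features; the weights stay internal."""
    return sum(_SECRET_WEIGHTS[f] * person[f] for f in DISCLOSED_FEATURES)

# What a regulator or defendant could be told: the postcode plays a role.
print("Inputs used:", DISCLOSED_FEATURES)
print("Score:", reoffending_score({"age": 35, "prior_offences": 1, "postcode_risk_index": 0.8}))
```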

Most people nowadays are aware that we leave traces whenever we’re online, but most of us seem more or less indifferent about this. Is that the case?

Latzer: Well, with Google and Facebook we make a deal with the devil and offer our data soul in exchange for digital convenience, as US economist Shoshana Zuboff puts it: We make ourselves transparent in return for certain advantages.

And which are those?

Latzer: I can search the internet for free, keep in touch with my friends, or get useful recommendations. There are many positive aspects. We actually need the help of algorithms, which make a preselection for us, otherwise we’d drown in a sea of data.

It’s precisely this preselection that’s criticized, since it allows us to be manipulated. What’s your view?

Latzer: It’s a complex issue. The fact of the matter is that algorithms make sure we get personalized information when we’re online.

Is that a good or a bad thing?

Latzer: Some people claim that it’s bad for diversity and puts us in echo chambers and filter bubbles that give us the illusion of a world that doesn’t actually exist. I think that’s a bit overstated. Echo chambers aren’t prisons: We step into them voluntarily – it’s sometimes good to hear others back up your opinions – and we can leave them whenever we want. The same goes for the social media bubble. According to recent surveys carried out in Switzerland, the vast majority of people today still get their information from various sources and place much more trust in traditional ones.

Which ones?

Latzer: One of the questions we asked was about the importance of individual sources of information when it comes to forming a political opinion. For people in Switzerland, the most important source is face-to-face conversations, followed by the government’s voting pamphlet, traditional broadcasters, print media and their websites, Wikipedia, and search engines. In comparison, social media are assigned very little importance. So while social media are used by more than half of the population, and quite avidly so by some, they’re still only one of many sources and carry relatively little weight when it comes to forming an opinion about political issues. Put differently, usage time does not equate to importance.

And yet, social media are being used in commercial and political advertising to manipulate consumers and voters. A prominent example is Russia’s attempt to influence the US presidential elections in favor of Donald Trump using targeted campaigns on social media, among other things.

Latzer: We refer to this as microtargeting: Individual users are targeted with content tailored to their interests. There are claims that this makes it possible to manipulate people’s opinions. I believe that’s an exaggeration. Researchers in the US and Europe have found that both the extent of disinformation and its manipulative effects are overestimated. With Trump, I also think the suggested model of effects is wrong: Trump wasn’t elected president thanks to elaborate social media campaigns, but above all thanks to traditional media, including television. His show The Apprentice enabled him to build up a prominent media presence, which he could then convert into votes.

But the internet still changes the way we get information. Traditional media are losing more and more ground.

Latzer: That’s true, albeit only to a certain extent. 

Why?

Latzer: May I elaborate a bit?

Please do!

Latzer: Starting with Niklas Luhmann’s The Reality of the Mass Media, published in the 1990s, communications scholars argued that almost all of our knowledge of the world is selected and shaped by the mass media. They were the undisputed, dominant gatekeepers: They decided what was or wasn’t important and provided us with information accordingly. This virtually monopolistic gatekeeper role of traditional media has now been broken up in two ways: by internet users, who act as gatekeepers through content they generate themselves, and by automatically selecting algorithms in online services such as search engines, social media and news aggregators. Today you and I can bypass the mass media and create publicity by posting something on social media, either ourselves or through an algorithmic social bot. Thanks to the internet, the number of information providers and gatekeepers has grown considerably, and the power of traditional mass media has diminished accordingly.

What does this mean for us as users?

Latzer: The range of information has become more diverse and we have to do more and more selecting. This is where search engines, social media, recommendation and evaluation services can be of use. But we have to learn to assess them. Our research shows that people are very good at making distinctions about the trustworthiness of a source. Even young people are much more thorough than previously assumed. But we still have to develop our online skills. 

Why?

Latzer: In some ways we’re still a bit naive. For example, only 20 percent of Swiss internet users know that Facebook uses algorithms to put together their newsfeeds, whereas 40 percent believe this is done by a human and a further 40 percent aren’t sure. To me this shows that there’s still too little awareness of how algorithms work and what effects they have.

That’s something to improve, but how? 

Latzer: People should be better informed, for example through campaigns. We should also learn how to make better use of the internet, especially when it comes to making good and efficient use of search engines. In my view many of these services have the potential to be used in a positive way, and users should therefore be put in a position to make sensible use of them and at the same time protect themselves against risks.

Algorithms are already very powerful because they control the flow of information on the internet and thus also our perception. Will they soon take over and control our behavior too?

Latzer: The idea behind this is technological singularity, which refers to a future in which artificial intelligence will have overtaken human intelligence – like a digital sorcerer’s apprentice whose workings can no longer be controlled or reversed. This idea was made popular by Ray Kurzweil, among others, who now works as Director of Engineering at Google. I won’t rule out that this could happen one day, but as things stand I believe it’s highly unrealistic.

Why?

Latzer: Because computers are purely syntactic machines rather than semantic ones. Recently hyped advances in artificial intelligence (AI), such as those in machine learning, have to do with “narrow AI” that applies to some very narrowly defined tasks rather than “general AI”, which approximates the comprehensive capabilities of human beings.

What does that mean?

Latzer: Even today algorithms can do many things better and more tirelessly than humans, for example calculating, memorizing, matching, combining and performing various routine tasks. But unlike humans, they can’t understand sense or grasp meaning. At best, such understanding can only be simulated thanks to extensive computing power. Algorithms are not conscious of their actions, and will not develop consciousness in the foreseeable future. That’s why I don’t see people being replaced by algorithms anytime soon. The value of artificial intelligence lies above all in the new possibilities that come with a capacity for action shared by humans and technology.