Communication and Media Research

Online Pinocchios

The internet and social media have become a battleground for the war between facts and fake news. How do we separate fact from fiction? And to what extent should operators of social media platforms and search engines be held responsible?

Thomas Gull

With Pinocchio, the lie was still easy to distinguish from the truth. (Image: iStock, emarto)


Once upon a time, though not so long ago – people of my age will remember it well – the quantity of information available to us was manageable. In the morning we read the newspaper, in the evening we watched the news, and in between we listened to the radio and chatted with others at home, at work or over a drink. The information we acquired in this way had one thing in common: We knew where it came from.

The arrival of the internet and social media changed all that. The amount of information now available to us is overwhelming and its origin not always clear. We are now challenged to manage this abundance and avoid falling prey to misinformation: “The danger is that we are manipulated without realizing it,” says communications researcher Juliane Lischka, senior teaching and research assistant at the Department of Communication and Media Research (IKMZ) at UZH. After all, in order to evaluate and interpret information, it is important to know who is behind it. But that’s not so easy. Unlike with established media, many web users are not at all inclined to reveal their identity, and the online world is brimming with misinformation and awash with conspiracy theories.

The simple reason for this is that nowadays anyone and everyone can publish content, and the traditional media’s gatekeeping role – verifying and weighting information – has lost much of its influence. This can be a good thing, because the media then loses some of its power, such as the power to withhold information or to interpret it according to its own agenda. On the other hand, it opens the door to all those deliberately seeking to spread false information. The kicker is: “Myths and fake news sometimes spread faster than facts,” says communications researcher Sabrina Heike Kessler. “People click on them more frequently because they are often simplistic, sensational and arouse curiosity.”

The consequences can be dangerous, if not fatal. One example is the myth that vaccines cause autism. “It has been refuted many times over,” says Kessler, who works as a senior teaching and research assistant at IKMZ. “But the claim is still circulating on the internet and is believed.” So we might see heartbreaking videos of children who apparently became autistic following a vaccination. Then there are pseudoscientific talk shows on which doctors claim that non-vaccinated children are healthier.

As a result, parents stop vaccinating their children, which may in turn lead to them catching measles and consequently developing pneumonia or meningitis. At the same time, this undermines efforts to eradicate measles. “This costs a lot of money and endangers lives,” says Kessler.

No time off

Because myths and fake news attract attention and spread quickly, they are difficult to debunk. However, there are now guidelines available to help anyone wishing to do so. The two most important tips are firstly to present facts and secondly to avoid mentioning the myth at all if possible, as repeating it helps it to spread and grow stronger.

One way in which untruths and misinformation spread like wildfire through the web is through little helpers that never get tired, never take time off and cost nothing at all: bots. Bots are computer programs used on the internet to complete specific tasks such as systematically posting, liking or retweeting vaccination myths. They function like a loudspeaker, amplifying and often distorting messages. “Bots can be used to create the impression that an opinion is highly popular because it appears to be shared by many users,” says Tobias Keller, teaching and research assistant at IKMZ. “But they are not real people, just computer programs.”

Studies estimate that bots account for around 10 to 15 percent of Twitter accounts. However: “Behind every bot is a person, so if you write to a bot, it is often a person who replies.” One person can control hundreds of bots. Bots are therefore a powerful tool for spreading messages on the internet and influencing opinions. Probably the best-known example is the Russian use of bots during the last US presidential election to fuel differences of opinion between the political camps.

And this gets to the heart of the matter: The internet and social media provide a new battleground where different parties compete for ultimate authority – using all means at their disposal. The most prominent such figure is Donald Trump, who uses Twitter to govern as well as to steer world politics. Although this makes him vulnerable to attack and can even make him appear ridiculous, Juliane Lischka notes: “He can speak directly to the people and come across as authentic.” Trump also uses tweets to discredit traditional media outlets that are usually critical of him. This, too, is part of the battle for ultimate authority: Trump tries to delegitimize leading media players in order to gain credibility for himself, allowing him to define what the truth is, what can be said and what cannot.

Non-stop lies

Trump constantly accuses the mainstream media of spreading fake news. And yet he is the busiest Pinocchio of all: According to the Washington Post, the number of false statements and claims he has made since taking office passed the 10,000 mark in April. Trump is also a proponent of one of the most perfidious political conspiracy theories, the “birther” conspiracy, which claims that Barack Obama was born outside the USA and was therefore not eligible to be president of the country. Accusing the liberal media of spreading fake news and misinformation was not, however, one of Trump’s inventions.

“The Republicans have long been accusing the mainstream media in the US of distorting the facts,” says Lischka. “But studies show that the reporting is balanced.” The American example has set a precedent worldwide. In Europe, right-wing parties in particular also complain that media coverage is one-sided. In Switzerland, the SVP regularly criticizes the Swiss broadcasting company SRG. Most recently, SVP President Albert Rösti accused the SRG of “climate change propaganda.”

The chaos reigning on the internet, where anyone can claim or deny anything, means that the onus is first and foremost on us as users to make sense of it: “We must be able to assess what is true and what is false, and we need to acquire the skills to do this,” says Sabrina Kessler. We must ask ourselves: Who wrote this, what is their agenda, where do their interests lie? Children should be taught in school to be critical in their handling of information. They have to understand that not everything they see on the web is necessarily true. “And,” says Tobias Keller, “parents should discuss with their children what they come across on the internet.”

Fortunately, we don’t all take everything at face value. Tobias Keller was involved in a study showing that social media users still pay attention to the quality of an information source. The study investigated users’ willingness to read content recommended via social media. Their willingness was influenced not only by whether the content was recommended by someone they knew, but also by the quality of the content’s source: They were significantly more willing to read content from quality newspapers than from tabloids. Keller nevertheless argues: “The fact that we look to what our friends think indicates that we like to live in an echo chamber surrounded by news that corresponds to our beliefs.”

Search engines like Google and social media like Facebook exert an enormous influence on what we as users view and consume on the internet. With this tremendous power, however, comes responsibility, says Juliane Lischka, but operators of these platforms are not yet accepting it fully: “They say that they are not responsible for the content because they didn’t create it.” The communications researcher disagrees: “They structure and order the content into a hierarchy and select what they present to us. They are therefore providers of information, even if they don’t see themselves in this way.” And this has consequences: Facebook and other platforms must assume responsibility for the content that is delivered via their channels. “They must commit themselves to the values of journalism and establish criteria by which the content on their platforms can be measured,” says Lischka.

Developments are moving in this direction. In Europe in particular, Facebook is coming under pressure to delete false and harmful information. In France, for example, an anti-fake-news law stipulates that Facebook must remove false information disseminated on political topics prior to elections within a short period of time. “Facebook has actually called for politicians to establish binding rules,” says Lischka, “then it can program its algorithms accordingly and say, see, we are doing as the law says.”

Control is bad for business

In some respects, it is in the interests of platform operators to remove problematic content, says Tobias Keller, because such content damages their reputation. “On the other hand, they profit from emotionally charged subjects that cause a stir. That generates traffic, advertising exposure and money.” In other words, too much control is bad for business because it costs money, and blocking people means losing customers. Donald Trump provides a bizarre example here too. He called the CEO of Twitter to the White House to complain that the online platform had removed some of his followers. However, according to Twitter, these weren’t real users but bots, fake accounts and propagandists.

How can the tide of misinformation be stemmed? Since the traditional media, which continues to provide guidance, are increasingly being drained dry, Juliane Lischka believes a public service could provide the solution – a “publicly financed media that can be trusted”, as she puts it. And scientists? Don’t they have some sort of duty? Who, if not they, can help debunk health myths or confusing claims about climate change? Sabrina Kessler and Tobias Keller agree that scientific and academic communication must tread new paths “for example by communicating their research findings via social media”.

Thomas Gull, Editor UZH Magazine, English translation by Andrea Hurley
