UZH News

Digitalization

AI Needs to Become More Human

In the future, we will be working with AI just like we work with humans. Ideally, this will involve combining the capabilities of human and artificial intelligence to create something new. But in order for this to work, AI needs to become more human.
Thomas Gull; English translation by Gena Olson
For robots and humans to complement each other well, humans and robots must be able to interact as naturally as possible. (Illustration: Noyau)

The brave new future of work has already arrived in the lab of Anand van Zelderen. Johanna and Johan are seated at the table, cheerfully working together with flesh-and-blood humans. Johan and Johanna are AI avatars that look like humans and can react in human-like ways, albeit with some delays. “If you talk to Johanna, she'll respond but needs two or three seconds to do so,” explains van Zelderen, a management researcher who is part of the Digital Society Initiative (see box).

His Spatial Computing Lab is part of the Center for Leadership in the Future of Work at UZH. Van Zelderen believes that many of us will work with avatars like Johanna and Johan in the future. This led him to conduct experiments to see how we react to having AI-powered “co-workers”. His findings? If they have human traits, we are more inclined to show them appreciation and value their contributions to the team than we are with a robot or traditional generative AI such as ChatGPT.

Integrated and open virtual worlds

This means that the more AI resembles humans visually and behaviorally, the easier it becomes for us to trust and cooperate with it. “If we want to successfully integrate AI into everyday work, we need to create virtual work environments where people can interact with AI in the most natural way possible,” explains van Zelderen.

Companies have already started to offer these environments – for instance, the Metaverse, launched by Meta (formerly Facebook). But according to van Zelderen, the Metaverse is unattractive in its current form and therefore rejected by employees. This prompted him to establish the Openverse Initiative, which is now a global operation encompassing 25 academic institutions. The goal of the Openverse is to offer integrative, open and ethically responsible virtual worlds. Unlike commercial development projects, the digital resources that make collaborative AI design possible are made freely available to members of the Openverse community, who can use them for research and teaching purposes. “Virtual environments like the kind we are designing with Openverse have the potential to completely change how we work,” says van Zelderen.

If we want to successfully integrate AI into everyday work, we need to create virtual work environments where people can interact with AI in the most natural way possible.

Anand van Zelderen

How is AI changing our work, and what does this mean for us? The AI revolution means that cognitive labor can now be done by computers, a transformation that is already partially underway. It remains to be seen what AI is truly capable of and which human tasks it can successfully take over. “In the past, new technologies have taken over tasks that were well-structured and clearly defined,” says computer science professor Abraham Bernstein. “The question is whether AI can now also handle more complex work.”

When AI hallucinates

Bernstein does not believe that this is yet the case. “Today, human collaboration with generative AI like ChatGPT is more like a boring ping-pong of prompts and responses,” he says. “If I’m not satisfied with the answer, I reformulate the prompt.” Bernstein also points out that the AI algorithm is a black box where it is unclear how the results were reached. AI produces a “probabilistic recombination of what exists,” he says, where the computer spits out what appears to be the most likely answer. However, this does not mean that the output is good, correct or true. “Hallucinations” or “confabulations” are the terms for when AI delivers false results. The challenge for users is to notice when AI is hallucinating – not an easy task, considering that AI generates consistently convincing results even for incorrect output.
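Bernstein’s point that generative AI outputs the statistically most likely continuation rather than the verified truth can be sketched with a toy example. The “model” and its probabilities below are invented purely for illustration; real language models work over vast vocabularies, but greedy decoding follows the same principle:

```python
# Toy next-word "model": for each context, a probability distribution
# over possible continuations. All values here are invented.
toy_model = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # common in training text, but factually wrong
        "Canberra": 0.40,  # correct, yet assigned less probability here
        "Melbourne": 0.05,
    }
}

def most_likely_next(context: str) -> str:
    """Pick the highest-probability continuation, as greedy decoding does.
    Note that 'most probable' is not the same as 'true'."""
    distribution = toy_model[context]
    return max(distribution, key=distribution.get)

print(most_likely_next("The capital of Australia is"))  # -> "Sydney"
```

The sketch shows why a fluent, confident-sounding answer can still be a hallucination: the system optimizes for likelihood given its training data, with no built-in step that checks the output against reality.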

When AI hallucinates, this can have real-life consequences. Take, for example, a recent incident involving a chatbot used by Air Canada: the bot told a customer that the airline offered a discount on flights to attend a relative’s funeral and that the discount could be claimed retroactively. When the customer did precisely that, they were informed that the discount could only be granted in advance. The customer took the position that the chatbot had misinformed them, and the court ruled in their favor.

Cases like these demonstrate that these programs are often not yet reliable enough. According to Bernstein, the future development of AI in the workplace depends largely on how reliable the programs are. In his view, the limited reliability of AI has consequences for how it can be used and what role it can play in our education system: “We still need to be able to judge whether the results delivered by AI are any good,” he says. “So we’ll still have to continue learning things in the future that we’ll never really use as such, since machines can do them better and faster – just like doing calculations in your head today.” Even though we use calculators to do math, we still need basic mathematical knowledge, for instance to judge the magnitude of figures.

We can think better

One of the dystopian visions associated with AI is the idea that it will make human work largely superfluous. Bernstein sees this as an unrealistic scenario. Occupations consist of task bundles, he says – only some of which can be taken over by AI. He predicts that we will delegate part of our work to machines and turn our attention to other tasks instead. For example, scientists will automate certain experiments while coordinating and collaborating with various AI systems.

Human intelligence will still be needed in the future. Bernstein says that humans can still think better than AI, as we combine reactive and reflective thinking. “The AI we use today primarily thinks in a reactive manner,” he says. Reactive thinking means responding quickly and immediately to a stimulus or situation, without extended deliberation. AI systems function reactively, generating responses based on probabilities and previously learned patterns. What this type of AI cannot do is reflect on and critically question its own results. There are now neurosymbolic AI systems that combine these two modes of thinking. According to Bernstein, however, humans are still superior to these machines: we integrate the two modes better, which gives us certain advantages over computers.
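The distinction between reactive and reflective thinking can be made concrete with a deliberately simple sketch. The pattern table and the arithmetic example below are invented; the point is only the two-layer structure that neurosymbolic approaches aim for, namely a fast pattern-based response paired with a separate verification step:

```python
def reactive_answer(question: str) -> str:
    """'Reactive' component: answers instantly from memorized patterns,
    the way a purely statistical model replays what it has seen."""
    learned_patterns = {"2 + 2": "5"}  # deliberately wrong 'training data'
    return learned_patterns.get(question, "unknown")

def reflective_check(question: str, answer: str) -> bool:
    """'Reflective' component: re-derives the result symbolically
    and compares, instead of trusting the first response."""
    return str(eval(question)) == answer  # symbolic re-computation

answer = reactive_answer("2 + 2")           # fast, pattern-based -> "5"
verified = reflective_check("2 + 2", answer)  # deliberate check -> False
print(answer, verified)
```

A purely reactive system stops at the first line; the reflective layer is what catches the confidently wrong answer. Bernstein’s argument is that humans interleave these two modes more fluidly than current machines do.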

Our way of thinking combines reactive and reflective thinking better, which gives us certain advantages over computers.

Abraham Bernstein

The entry of AI into the workplace will not make human work obsolete, he says, but it does mean that humans and AI “will have to find the right relationship.” If we do this in a savvy way, we can benefit from the abilities of these intelligent machines. For Bernstein, the ideal use of AI involves a form of co-creation where humans and machines work together to achieve better results. This involves combining the strengths of both parties: the inexhaustible stamina and computing power of the machine and the analytical thinking, knowledge and intuition of the human.

Don’t get too comfortable

While this all sounds very promising, van Zelderen’s findings serve as a warning about the downsides of AI. “We need to make sure that we don’t get too comfortable and become dependent on our AI colleagues,” he says. As his study shows, people are less engaged when working with AI and less satisfied with their work. He also considers it unhealthy to spend the entire day in a virtual working environment. “The ideal scenario would be a mix of reality and virtual elements,” he says. The biggest challenge for the future is designing AI in such a way that people feel comfortable using it.

Van Zelderen is convinced that AI needs to become more human to achieve this, for example by giving it a human face and showing human-like behavior, as he did in his experiment. At the same time, researchers should no longer view generative AI merely as a tool, but also as a kind of co-worker, since the interactions between employees and AI resemble those with real humans in many ways. “If we succeed in creating a harmonious collaboration between AI and humans, these new technologies will support rather than undermine human potential,” says Bernstein.