Artificial Intelligence

Cuddling a Robot Seal

Humans and machines have a long history of co-existence, but artificial intelligence (AI) threatens to disrupt this delicate balance. Will machines become more intelligent than we are? Will they ultimately take over and enslave us?
Thomas Gull

Perhaps you have an intelligent vacuum cleaner at home that can independently navigate around your apartment and even avoid obstacles. Some of these cute robot vacuums are relatively simple: they just drive around the floor in a preprogrammed pattern. Others are more intelligent and can process and react to more information in their environment; some are even capable of making a map of the apartment and behaving accordingly. Yulia Sandamirskaya is a neuroinformatics researcher who works on developing machines like these – machines that can learn and navigate in unfamiliar environments. Based at the UZH/ETH Institute of Neuroinformatics (INI), Sandamirskaya is developing the intelligent machines of the future, machines that could fundamentally change the relationship between humans and machines.

Machines are familiar territory for us. We invented them to relieve us of work, to take on tasks that are often tedious, hard or boring – tasks that machines are better positioned to perform because they are stronger or don’t get tired. We have long lived side by side with machines: They replaced horses, drive us around, transport our goods, lift heavy loads and, as robots in factories, perform repetitive tasks that many humans still (have to) do themselves.

In the beginning, machines acted as technical aids that could perform very concrete, not terribly complex tasks. Digital technology is now changing this. Suddenly it has become possible to build “intelligent” machines, defined as machines that imitate specific characteristics of human intelligence. Intelligence was first thought of in terms of abstract thinking. “Like solving math problems,” explains Yulia Sandamirskaya. Computer scientists then developed digital systems that could perform these tasks. At some point, they became better than people: in 1997, a chess computer named Deep Blue famously beat world chess champion Garry Kasparov. Deep Blue’s victory caused a stir: Here was a machine that triumphed over humans in their own domain of logical thinking.

Smarter than Kasparov?

Looking back on the event, Yulia Sandamirskaya is less enthusiastic about Deep Blue’s win than people were at the time. “Was Deep Blue really more intelligent than Kasparov?” she asks, going on to answer her own question. “Today we would say not really, as the computer cannot, for example, move the pieces on the chess board itself.” 

In addition to planning chess moves, Kasparov could do a multitude of things that Deep Blue simply could not, such as seeing the pieces, grabbing them and moving them to another square. And at the end of the encounter, Kasparov could stand up and leave. These capabilities are known as embodied intelligence, something possessed not only by humans but also by all animals – even those with comparatively simple nervous systems, such as bees, which undertake long, winding flights to collect nectar before embarking on a direct route back to the hive. “Despite artificial intelligence and machine learning, our progress with embodied intelligence is not much further along than it was in 1997,” says Sandamirskaya. One example: The computer program AlphaGo can beat the world's top player in Go, a complex strategic board game. However, the computer is still not capable of moving the stones on the board itself. Sandamirskaya is one of the scientists working on changing this. She wants to build robots that can independently react to and navigate within their environments.

The key to unlocking this capability is using neural networks to control robots. These networks imitate processes in the human brain, where nerve cells take in information from all the senses. Once the combined stimuli cross a threshold, the cells emit a signal, also known as a spike. “Every neuron has an average of 10,000 connections with other cells,” explains Sandamirskaya. They communicate with one another via spikes. Scientists can recreate this principle today using electronic circuits that behave similarly to neurons: They react to input in a flexible and analog way, sending out a signal once the electrical current has reached a certain level.
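
This spiking principle can be sketched in a few lines of code. The toy model below is a leaky integrate-and-fire neuron – a textbook simplification, not the INI’s actual circuitry – and its constants are purely illustrative: the neuron accumulates input, fires once a threshold is crossed, and then resets.

```python
# A leaky integrate-and-fire neuron: a textbook simplification of the
# spiking principle described above. Threshold and leak values are
# illustrative, not taken from any real neuromorphic chip.

def simulate_spikes(inputs, threshold=1.0, leak=0.9):
    """Return the spike train produced by a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:              # stimuli cross the threshold...
            spikes.append(1)                    # ...so the neuron emits a spike
            potential = 0.0                     # and resets afterwards
        else:
            spikes.append(0)
    return spikes

print(simulate_spikes([0.3, 0.4, 0.5, 0.1, 0.9]))  # -> [0, 0, 1, 0, 0]
```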

Imitating the brain

The circuitry principle was developed in the 1990s and is now poised to revolutionize computer technology. Traditional computers operate in a binary fashion: there are only zeros and ones, nothing in between. With neuromorphic machines based on electronic circuitry, by contrast, the signals are analog – that is, infinitely variable. What’s more, this architecture processes input in parallel, with processor and memory combined: both functions are carried out by the connections between the artificial neurons, known as synapses. In a conventional computer, the processor has to retrieve information from the memory for every computation and process all input sequentially. This back and forth consumes most of the energy used by computers today.

Artificial neural networks of this kind can now perform millions of calculations per second while consuming far less energy than a conventional computer. The transistors used in this neural architecture are just a few nanometers in size and can be combined into very dense networks. Thanks to this quantum leap in technology, it has become possible to imitate certain processes in the human brain. Neural networks can now be trained to recognize objects, for instance.

Making robot learning like child's play

This kind of machine learning is referred to as deep learning. It enables a system to learn, for instance, how to recognize an object such as a pedestrian or a vehicle. It works like this: The network is shown millions of labeled images of pedestrians, and with each one, learning software adjusts the connections in the network until it is able to correctly identify all the pedestrians.
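
In miniature, this show-and-adjust loop looks something like the sketch below. A single artificial neuron (a classic perceptron) stands in for the deep networks used in real pedestrian detection, and the two-number “images” are invented for illustration; only the principle – adjust the connection weights after every labeled example – carries over.

```python
# A classic perceptron learning loop: weights are nudged after each
# labeled example until the model classifies correctly. The feature
# vectors and labels below are invented; real pedestrian detection
# uses deep networks trained on millions of images.

def train(examples, labels, epochs=100, lr=0.1):
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            # forward pass: does the current model say "pedestrian"?
            score = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if score > 0 else 0
            # learning step: shift the weights in proportion to the error
            error = target - prediction
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

examples = [[1.0, 0.2], [0.1, 0.9]]  # toy stand-ins for image features
labels = [1, 0]                      # 1 = pedestrian, 0 = no pedestrian
weights, bias = train(examples, labels)
```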

According to Yulia Sandamirskaya, this process is still unbelievably tedious compared to how humans learn. “The system can be trained to recognize a pedestrian in the summer,” she says. However, if the machine has never seen a pedestrian wearing a long coat, it may not recognize the pedestrian as such. Then the training process starts anew with a new batch of images. “Children learn in a totally different way,” explains Sandamirskaya. “You don’t have to show a child one million cat pictures and constantly repeat: this is a cat, this is a cat, this is a cat. It’s usually enough to see a cat once to know what a cat is.”
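
One way to make a single example suffice – sketched below purely as an illustration, with made-up feature vectors – is to store one remembered “cat” and match every new observation against it, nearest-neighbor style. This is a drastically simplified picture of one-shot recognition, not a description of how children (or the INI’s systems) actually learn.

```python
# One-shot recognition in miniature: store a single example per
# category and label new observations by similarity. The feature
# vectors are made up; real systems would extract them from images.

def closest_label(memory, features):
    """Return the stored label whose example is nearest to `features`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory, key=lambda label: distance(memory[label], features))

memory = {"cat": [0.9, 0.1, 0.8], "dog": [0.2, 0.9, 0.3]}  # one example each
print(closest_label(memory, [0.85, 0.2, 0.75]))  # -> cat
```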

Neuroinformatics researchers dream of creating a system for which learning is like child’s play. However, this dream is far from becoming reality. “Our neurons may be slower than digital ones, but our networks are more flexible and adaptable,” says Sandamirskaya. Image recognition is one of the areas in which artificial neural networks do very well, often outperforming humans. Language recognition, on the other hand, is much more challenging, as language is complex and meanings are often unclear. An even more difficult task is programming a complex robotic system so that it can perform everyday tasks safely, effectively and adaptively.

Robots inspired by biology

Sandamirskaya, a native of Belarus, is the head of the Neuromorphic Cognitive Robots group at the INI. Her team is working at the frontiers of what neural architectures are capable of today. Sandamirskaya wants to understand how biological processes such as learning unfold in the brain and then recreate them using neural networks. She develops programs that allow robots to navigate independently in unfamiliar environments. First the robots must be able to recognize their environment and then react to it appropriately. “This is very complex,” she says. “Just reaching out a hand, grabbing my iPhone and lifting it off the table is a task that overwhelms even today’s robots.” Robots, or the intelligent systems that control them, need to learn three things in order to perform this task. First, they need to see the iPhone and be able to judge its size, weight distribution and surface properties. Then, they must be able to react flexibly to deviations from their original judgments or to unexpected occurrences, such as the iPhone moving before the robot’s hand reaches it. And finally, explains Sandamirskaya, “They have to be able to learn from their mistakes, as it is not possible to program absolutely everything.”

Her goal is to develop neural networks that can do all three of these things: recognizing objects, controlling movements and learning independently. “Today they can perform certain tasks, but performing all of them in coordination is not yet possible.” Yulia Sandamirskaya is also working on programs that control drones. If drones could one day orient themselves like bees, this would represent great progress.

But even with artificial neural networks getting better and more agile by the day, there is another obstacle standing between them and embodied intelligence: The body itself. It is difficult to connect neural architecture with motors to generate coordinated movements tailored to the environment, the way humans move with mind and body working in concert. “Today the brain can be imitated better than the body,” says Sandamirskaya, who explains that while biologically inspired robots are being developed, they are complicated and expensive. “A lot more money would have to be invested to make faster progress.” Even Google is interested in robotics, but progress there is much more modest than with its software systems. “Developing intelligent robots is not the quickest way to get rich,” admits Sandamirskaya, laughing.

While the scientist is hard at work on developing artificial intelligence to imitate human capabilities, philosopher Eva Weber-Guskar has questions about the other side of the equation: What feelings do intelligent machines trigger in humans, and are these feelings good or bad? Emotions play an important role in the interaction between people and machines, explains Weber-Guskar: “Machines should not arouse fear, anger or disgust but rather positive emotions. Otherwise, we will no longer make use of them.” Every smartphone is designed to be pleasant to hold in the hand. 

Today there are even machines that are designed to target our emotions. Take, for instance, Paro: A robot baby seal that responds to how humans interact with it. One of Paro’s uses is as a form of therapy for dementia patients. And it works: The robot seal has been shown to increase the wellbeing of the people who pet it. Here it is critical for Paro to be able to simulate emotions. “The robot needs to be able to show feelings of happiness or contentment,” says Weber-Guskar. Paro is one example of emotional artificial intelligence. Another example is Pepper, a humanoid robot that has been programmed to recognize human emotions and respond to them – for instance by dancing to cheer someone up or hugging them. 

It is important to remember that the robots themselves do not experience any emotions. They are only programmed to simulate emotionality, which can be problematic if this simulation deceives people, or if people develop “inappropriate” feelings for their robot counterparts. Weber-Guskar urges people to remain aware that these are machines without any capacity for feeling emotions. While finding them likable is acceptable, feelings of empathy or compassion can become problematic, as they are based on the false assumption that the robot in question has feelings.

However, it is not yet possible to draw clear lines in the debate on robots and emotions. Should soldiers, for example, empathize with a mine-clearing robot that gets its legs blown off? (They apparently do.) Not really, replies Weber-Guskar. We should be aware that it is a robot and therefore incapable of feeling pain. On the other hand, empathy is a positive human trait that should be cultivated rather than unlearned.

“If we wanted to, we could hold robots in slave-like conditions, rape them or taunt them, because this behavior would cause them no harm, neither physically nor emotionally,” says Weber-Guskar. However, she argues that this would ultimately be negative, as it would weaken our capacity for empathy and have negative consequences for our ability to live with other people. On the flip side, friendly and empathetic robots could help us train our capacity for friendliness.

Learning how to feel

Feeling emotions is something that we learn while growing up – through social interaction, according to Weber-Guskar. When it comes to robots, this means that although the machines themselves do not have emotions, we will develop emotions in the process of interacting with them. These emotions will then influence and change us, making it important to find an appropriate emotional framework for interacting with robots. This is one of Eva Weber-Guskar’s research topics, which she is exploring as part of UZH’s Digital Society Initiative (DSI) during her spring 2019 fellowship.

Will machines one day be capable of feeling just like people? Weber-Guskar shakes her head. The critical point is that artificial intelligence – at least at this point – does not have consciousness. This means that robots do not know who they are. “At the moment it is not possible to imagine that machines could ever develop something like consciousness.” This is also due to the fact that we ourselves do not understand how consciousness comes about. Weber-Guskar calls this the consciousness gap. “We can describe biological processes in the brain, the firing synapses,” she clarifies. “But we can’t explain how consciousness arises out of these biological processes.” As long as this is the case, she says, it will be impossible to build a machine with a consciousness of its own.

Machines are constantly improving, with some becoming more intelligent. This has unleashed fears that humans may one day be replaced by machines. In the working world, this process is already underway. It is not a recent development, however, but something that has been happening since the beginning of the industrial revolution. In principle, humanity has benefited from the use of machines, because they save us a lot of work, increase our productivity and in turn raise our standard of living. In earlier times, it was primarily manual labor that was replaced by machines. “This has changed since computers became capable of independently performing cognitive tasks,” says economist David Hémous, assistant professor at the UZH Department of Economics.

Middle-class jobs in jeopardy

Algorithms are already replacing people in fields outside of manual labor. Hémous gives travel agents as an example: They used to find hotels and book flights for customers, but with a search engine or an online booking system, people can now do this themselves. Or employees at law firms who used to comb through documents – machines can do it better and faster. As previously mentioned, artificial neural networks can be trained to identify images better than humans can. This makes them strong competition for highly trained specialists such as radiologists, as they can take on tasks like identifying tumors on radiographic images.

What are the consequences of this development? Intelligent machines are increasingly replacing qualified workers, putting traditionally middle-class jobs in jeopardy. At the same time, new, well-paid jobs are arising for the engineers and programmers who develop these machines and the artificial intelligence behind them. Demand in the service sector is also growing, although these are often lower-paying jobs such as cleaning, waiting tables or dog sitting. This may widen the income gap. “Automation increases inequality and can lead to lower wages,” says David Hémous. However, he emphasizes that automation is not the most important factor when it comes to rising unemployment.

It would be a misconception, he says, to believe that there is a fixed amount of work to be done within an economy. Economists refer to this as the “lump of labor” fallacy. There are other factors that have more influence on whether people can find a job: a lack of education, an inadequately diversified economy or an inflexible labor market. Examples include areas of Northern France that have lost their industries and the Rust Belt in the United States. Switzerland has also lost many jobs in traditional industries but has been able to make up for the loss in other sectors. According to Hémous, inequality has also not yet increased in Switzerland – a development similar to other European countries but in contrast to the USA.

He expects that the positives of automation will more than outweigh the negatives: “If people are replaced by robots, that is not bad news for society as a whole, because it increases productivity. That means that we will become wealthier.” The critical question is how this wealth will be distributed and what will happen to the people who lose out during this process. Here Hémous sees various options. An unconditional basic income would be one, though the economist cautions that more needs to be known about its effects. More progressive taxation that puts increased pressure on high earners would be another. A third could be credits for low earners, a system that already exists in France: the prime pour l'emploi gives an allowance to those who work but still earn too little. The most important factor, according to Hémous, is education. “People have to acquire the skills that will be in demand in the future,” he says. He adds that Switzerland has done well in this area thanks to its vocational training system and is well positioned for the future.

Fear of the vacuum cleaner

On the whole, David Hémous paints a positive picture of the future relationship between humans and machines – as long as we have the machines in our service and distribute productivity gains fairly. However, the growth of artificial intelligence has given rise to a new fear: The idea that machines could one day take control and dominate humanity. When we asked Yulia Sandamirskaya if this is pure science fiction or something that we should truly fear, she laughed and retorted, “How afraid should I be of my vacuum cleaner?”

According to Sandamirskaya, the correct answer would be not at all, and this also applies to artificial intelligence in general. “Algorithms and intelligent networks will act like conventional machines, reducing our workload by taking on tasks that we can't perform well, or ones that are boring, dirty or dangerous,” she says. “This is good for us.” And if these networks suddenly start to think and make decisions? Sandamirskaya brushed off the possibility, saying that while intelligent systems can carry out tasks and even learn independently, it is still humans in the end who set the goals and the framework: “The idea of computers thinking and deciding independently just like people is utopian!” 

As things currently stand, machines cannot become like humans because they lack consciousness. They are therefore also not capable of independently distinguishing between good and evil, says Weber-Guskar. “Machines are not moral creatures with free will like us humans,” explains the philosopher. In Weber-Guskar’s estimation, this is a good thing, as training machines to act in a moral capacity would mean giving them the freedom to decide if they want to be good or evil. In order to do so, she says, we would have to allow machines to independently change their ultimate purpose. “This is something we should avoid,” she warned. “Because then they could turn against us.”