Us and ChatGPT

“Like a Swiss army knife”

ChatGPT is overrated as artificial intelligence and underrated as a language model, linguist Noah Bubenhofer says. He, philosopher Hans-Johann Glock, and computational linguist Rico Sennrich discuss how chatbots could change science, universities, and everyday work in the interview below.
Roger Nickl, Stefan Stöcklin; Translation: Mark Rabinowitz
Will books soon have outlived their usefulness? Noah Bubenhofer, Hans-Johann Glock and Rico Sennrich in the library of the Department of German and Scandinavian Studies. (Photo: Stefan Walter)

The world at the moment is amazed at and abuzz about the possibilities and dangers that chatbots present. Do you use ChatGPT and the like in your daily routine?
Rico Sennrich: I research chatbots and test their capabilities. In controlled experiments, we study how different prompts affect output texts such as translations. We critically examine the limitations of the generative large language models that underlie chatbots. Surprisingly, though, I still rarely use chatbots in my everyday routine.

Noah Bubenhofer: I already use ChatGPT quite a lot, in part for research purposes to understand what’s possible, but also for practical reasons. I recently had ChatGPT draft the abstract for a paper I wrote, and it did so very well. In another instance, I asked the chatbot to compose a polite turndown of an invitation to a conference that I couldn’t attend. A very useful text came out that time as well.

Hans-Johann Glock: I use ChatGPT less to compose texts and instead mainly to gain enough personal experience with it to be able to take a position on philosophical and political issues. This means that I ask it specific questions – such as “What is information-theoretic security?” – and then I evaluate the answers.

And have you received satisfactory answers?
Glock: Yes, as far as I can judge. I must add, though, that I have learned more about each subject from articles by experts in the field than from the chatbot. ChatGPT has its limitations. At present, I don’t feel any urge to have chatbots generate arbitrary texts. Chatbots are an interesting research subject, but nothing more.

Will chatbots alter your academic and scientific work – and research itself – in the long run?
Bubenhofer: I think they will. Chatbots, for example, can assist me in writing texts. They can summarize a paper, for instance, or draft sections or paragraphs for which I provide the line of argument. ChatGPT is a combination of programming environment and typewriter. This means, for example, that I, as a linguist, can use it to perform a quantitative analysis of the frequency with which certain expressions appear in a body of text. And ChatGPT can generate the code to depict the analysis in a diagram. Student assistants used to do that.
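
To make this concrete, here is a minimal sketch of the kind of throwaway script a chatbot might generate for such a frequency analysis. The corpus file name and the list of expressions are hypothetical placeholders, not part of the interview.

```python
# Count how often selected expressions occur in a plain-text corpus
# and depict the result in a simple bar chart.
import re
from collections import Counter

import matplotlib.pyplot as plt

EXPRESSIONS = ["artificial intelligence", "language model", "chatbot"]  # illustrative terms

with open("corpus.txt", encoding="utf-8") as f:  # hypothetical corpus file
    text = f.read().lower()

# Count non-overlapping occurrences of each expression.
counts = Counter({expr: len(re.findall(re.escape(expr), text)) for expr in EXPRESSIONS})

# Visualize the frequencies.
plt.bar(list(counts.keys()), list(counts.values()))
plt.ylabel("Occurrences in corpus")
plt.title("Frequency of selected expressions")
plt.tight_layout()
plt.show()
```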

So, you’re getting rid of student assistants?
Glock: The faculty has already done away with them, entirely without ChatGPT’s help (laughs).

Bubenhofer: But AI and chatbots now give student assistants a chance to do more interesting things. Routine tasks such as compiling precise bibliographies or performing simple corpus research can confidently be delegated to AI systems. But this also means that we have to teach junior researchers the skills needed to proficiently use these systems.

Sennrich: I would say that generative large language models are a little like a Swiss army knife in that they have a multitude of potential applications. You can ask them for facts or use them to modify or translate texts – all kinds of transformations are possible here.

Evidently, AI can even be trained to read gene or amino acid sequences and to model the protein molecules derived from them. There appear to be no limits.
Sennrich: The models can basically learn to map an input sequence onto an output sequence. If the right training data are provided – amino acid sequences and their corresponding proteins, for example – similar models can also be used to study scientific questions in biology. The technology underlying chatbots can be employed very flexibly.
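
A conceptual sketch of this sequence-to-sequence framing follows. The toy “model” below merely memorizes its training pairs rather than learning to generalize, and all of the data are invented; what it illustrates is only the shared input-to-output interface that real sequence-to-sequence models are trained on.

```python
# A toy stand-in for the sequence-to-sequence framing: the same interface
# covers translation or protein prediction, depending only on the
# (input, output) pairs provided as training data.
from typing import Dict, List, Tuple

def train(pairs: List[Tuple[str, str]]) -> Dict[str, str]:
    """'Train' by memorizing input->output pairs (a stand-in for real learning)."""
    return dict(pairs)

# Hypothetical, invented examples: the framing, not the data, is the point.
translation_model = train([("Guten Tag", "Good day"), ("Danke", "Thank you")])
protein_model = train([("MKT", "helix"), ("GAV", "sheet")])

print(translation_model["Danke"])  # -> Thank you
print(protein_model["MKT"])        # -> helix
```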

Glock: The extent to which AI in general and chatbots in particular can take over scientific work is a very interesting question, for instance in biochemistry, where protein structures can be calculated. This has prompted some philosophers of science to raise the question of whether theory building in research could become superfluous in the future. In other words, instead of putting thought into how things interrelate with each other under the laws of nature, one would collect data and have it computed probabilistically and then would obtain predictions without knowing exactly why they are correct in their particulars. This is why people are already talking about post-theory science. Personally, I don’t think that’s so wonderful, but then again I’m very old-fashioned.

Generative large language models like ChatGPT do not possess any intelligence, but they can simulate intelligence.

Noah Bubenhofer
Linguist

What do you find not so wonderful?
Glock: Isaac Newton reclined beneath an apple tree and contemplated how a falling apple relates to Kepler’s laws that describe planetary motion around the sun. Today, Big Brother collects data on all apples that fall somewhere and then, based solely on that data, predicts the motion of the moons of Saturn, for example. Insight into the mechanism, into the underlying causal connections, becomes superfluous. If that becomes the case, then hyperbolically speaking, in the future universities will consist solely of Big Data, AI and an ethics committee.

How do the others see it? Will AI be the demise of theory?
Bubenhofer: The demise of theory was already postulated back in 2008 by Chris Anderson in an article titled “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete”, published in the magazine Wired. Google had gotten big at that time, and people became aware of just how much is available digitally. But theory isn’t dead at all. However, statistical models have enabled us to realize that other, completely different factors also play a role in predicting, for example, linguistic structures. AI gives us another perspective on language.

Glock: I agree that this quantitative statistical approach definitely has its merits, most notably also in the field of linguistics, but then the “why” question still remains unanswered.

ChatGPT is good at statistical analysis, but fails dismally at answering questions about reasons and causes. How intelligent is this AI system really?
Sennrich: Machines fundamentally function differently than we do and cannot be compared with human intelligence. Machines can do certain things very well, but are astoundingly bad at other things.

Then is “artificial intelligence” actually a misnomer?
Sennrich: It depends on how you define “intelligence”. The behavior of AI is indeed intelligent in a certain sense, but there’s a danger in comparing machines with humans. Large language models can solve certain tasks surprisingly well, but their ability to do so depends on the body of knowledge that was fed into them and that they can reproduce. When ChatGPT, for example, gets asked about a subject for which training data are missing, it usually won’t say that it doesn’t know anything about it, but instead merrily invents an answer.

Glock: Intelligence, in the most general sense, is the ability to solve even novel problems in a flexible way. It is thus closely connected with the ability to learn, and artificial neural networks are indeed very impressive in that regard. Inconsistencies in the ability to learn are a well-known phenomenon among humans as well. Nevertheless, I think that we shouldn’t necessarily judge AI on the basis of this conception of intelligence.

Bubenhofer: In my opinion, generative large language models like ChatGPT are overrated as artificial intelligence but underrated as language models. It’s completely clear to me that they don’t possess any intelligence, but a language model can simulate natural intelligence, and that in itself is already quite a feat.

And yet, human abilities constantly get compared to the capabilities of AI in the discussion about ChatGPT. Is that problematic?
Bubenhofer: Yes, it is. That’s what I meant when I said that these language models were overrated as AI. In the discussion about them, they get anthropomorphized, or humanized. AI companies exploit that. They make it look as though we’re truly interacting with artificial intelligence. That’s just stagecraft, in my opinion.

It’s all just marketing, then?
Bubenhofer: It is marketing, but that doesn’t make it less dangerous. I believe that the language models themselves do not pose any danger, but AI development companies and their conduct do. Ultimately, it comes down to the importance that people ascribe to this AI and what they do with it. That’s the real problem.

ChatGPT usually doesn’t say that it doesn’t know anything about a given subject, but instead merrily invents an answer.

Rico Sennrich
Computational linguist

Do you agree?
Sennrich: Undoubtedly, there is a lot of marketing and hype. Generative language models, though, have existed since the 1950s, and language models based on artificial neural networks have been around for roughly 20 years now. We understand quite well how they learn and what they can do. Many are amazed at their ability to learn to autocomplete texts when they are trained on massive quantities of data, but we also know that they cannot somehow become autonomous.
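
The autocomplete principle Sennrich refers to can be illustrated with a deliberately tiny sketch: a bigram model that predicts each next word from counts in its training text. Modern chatbots use neural networks trained on vastly more data, but the underlying objective – predict the next token – is the same. The toy corpus here is invented.

```python
# A bigram "language model": predict the next word from co-occurrence counts.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat and the cat slept"  # toy corpus

# Count which word follows which in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def autocomplete(prompt: str, max_words: int = 5) -> str:
    """Extend a prompt by greedily appending the most frequent next word."""
    out = prompt.split()
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(autocomplete("the"))  # e.g. "the cat sat on the cat"
```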

The signatories of the Future of Life Institute’s open letter see it differently. They are warning about the dangers that AI systems pose and are calling for a pause in AI development. Would you sign the letter?
Glock: Some philosophers think that there’s a downright metaphysical guarantee that artificial systems are incapable of forming their own intentions and putting them into action. I view it differently. I don’t believe that this scenario can be ruled out in principle. Then again, I don’t see any factual reasons why a scenario of this sort should occur in the foreseeable future. We nonetheless urgently need to develop risk analyses. It will become problematic if we leave that up to the tech giants alone.

Bubenhofer: I recently watched a slew of 1980s TV shows about the proliferation of PCs and the question of how much computers would revolutionize the world of work. The shows discussed fears very similar to the ones being talked about again today. Even back then it was said that computers would relieve us of boring tasks, but would also take away the interesting work and render us superfluous. The present situation is definitely comparable with the invention of the computer. But we will find ways to deal with the new AI systems to put them to use sensibly and responsibly. That’s why I advocate for promoting AI literacy.

In what direction will these elaborate AI systems change life in our society?
Bubenhofer: I believe that AI systems will take over many text-generating tasks, such as writing newswire dispatches in journalism and producing functional texts such as form letters and instruction leaflets. Text reception will also change. In the future, readers will choose for themselves the language in which they wish to read a text and whether they want to read all of it or just a condensed version.

Glock: Yes, I agree with that. However, the distinction between true and false doesn’t matter at all to AI systems. They’re Derridaesque machines incarnate: “il n’y a rien que le texte” – there is nothing but the text. To them it’s simply a matter of predicting the next word. And that’s why I would be very glad if a human would look over newswire dispatches before they’re released.

Universities in the future will then consist solely of Big Data, AI and an ethics committee.

Hans-Johann Glock
Philosopher

How will AI systems affect university life?
Bubenhofer: Our most important job at universities is to promote AI literacy, as I mentioned before. This means that we really have to challenge students and teaching staff alike to use these systems. Everyone, as far as possible, should be proficient at using them. We certainly also have to reform exam methods. And we must ask ourselves what knowledge we should impart in the future. A great deal will change in this sphere. We should do our homework and think ahead to how AI will transform university education.

Glock: I, too, consider AI literacy a crucial message to get across. ChatGPT, first and foremost, is useful as a source of inspiration and as a starting point for composing texts – maybe not exactly for generic turndowns of invitations right now, but in almost any other area. And especially if ChatGPT is used as an encyclopedia, you have to ask yourself over and over: “Is that plausible, can that be right?” That’s why AI literacy now is just as important as computer literacy.

We and ChatGPT – how will this story continue to unfold in the future?
Bubenhofer: Two things happened with the rollout of computers that could now happen again. First, it gave rise to mindless tasks involving inputting large amounts of data into databases, particularly in the early days. But secondly, it also gave birth to more interesting jobs having to do with configuring computer systems. I believe a similar thing will happen again now.

Sennrich: When machine translation programs first presented new possibilities, there was discussion about whether translators would become unemployed. Since then, it has become clear that dislocations did indeed occur. Translators today increasingly review, edit and improve machine-translated texts rather than translating foreign-language texts from scratch, resulting in gains in efficiency. On the other hand, though, many more texts get translated today than 20 years ago. So, I would suspect that demand for translators has held relatively steady in recent times. The new technology will surely render some jobs superfluous, but it will also create new ones in functions that cannot yet be foreseen.

Glock: Many economists espouse the theory that disruptive technological developments always create just as many jobs as they destroy. I wouldn’t necessarily count on that. I think AI at first will make a lot of routine work redundant. But plumbers will continue to exist for the foreseeable future, until the field of robotics makes quantum leaps, if it ever does. ChatGPT has nothing to offer here. Creative and intellectually challenging jobs will also still exist, but jobs in the middle could come under pressure.

This interview appeared in UZH Magazine 2/23.