UZH News

Chatbots and teaching

“Responsibility lies with the human being, not artificial intelligence”

Chatbots pose a challenge to university teaching: where can they be helpful, where not? Thomas Hidber, Head of Educational Development at UZH, advocates a selective approach to their usage – to relieve students of routine work, for example. However, it’s all the more important that the use of AI systems is made transparent.
Interview by Stefan Stöcklin; English translation by Karen Oettli-Geddes

Students should acquire a reflective and confident approach to artificial intelligence. (Image: shutterstock/ascannio)

Thomas Hidber, with the rise of ChatGPT, generative artificial intelligence (AI) has not only reached broad swathes of society but now universities too. Are you amazed at the capabilities of these systems?

Like many others I guess, when I first experimented with ChatGPT back in December 2022, I was also surprised at how the system could generate a coherent and stylistically appropriate text on questions of all kinds in just a matter of seconds. Even though a closer look later quickly revealed its limitations – in factuality, citations and judgment skills, for example – I was impressed. What’s also remarkable is the speed at which it has developed in the few months since then – whether we’re talking about GPT-4 or image, video, sound or code generators.

As Head of Educational Development at UZH, you’ve written a memo laying out key points on the subject of AI. In it you make a case for incorporating these tools into teaching and learning. Why?

AI-supported content generators will have an ever-growing influence on most aspects of our life and work, including academia of course, and ultimately lead to changes. It will be expected of our graduates, and rightly so, that they have the ability to critically reflect on the development and impact of AI, both in their field and in the wider context, use its tools confidently and responsibly, and then be able to review and critically question the corresponding results.

In the future, data and AI literacy will be key competences in all academic fields. We need to find and adopt an informed and reflective way of dealing with these systems, in compliance with the rules of academic integrity. This applies not only to research, where this has long been the case, but also to teaching and learning.


AI systems hold opportunities and risks. In terms of teaching and learning, what beneficial aspects or particular advantages do you see?

Teachers can use the technology as an aid in creating learning materials, presentations, exam questions and (incorrect) multiple-choice answers, code or quizzes. It will also help them scan and summarize existing resources and literature more quickly, or translate materials into their own language. With the time this frees up, teachers can develop educational concepts, implement innovations or set up collaborations.

The systems will enable students to gain further perspectives on a topic, generate summaries and then concentrate on a closer reading of the corresponding literature. Text generators can also help overcome writer's block, correct a student’s own texts and, if necessary, make stylistic improvements.

In general, the systems provide an opportunity for students to focus less on routine tasks and more on critical and disruptive thinking – on creativity and the ability to innovate in their respective field, and on debating skills, empathy and social competence.

Where do you see the risks?

Academic misconduct is probably as old as academia itself, even if it hasn’t always been understood in the same way. However, when it comes to misrepresenting an achievement or finding as one’s own, AI-powered content generators pose a new challenge to the traditional understanding of authorship. Research publishers such as Springer Nature have responded by prohibiting the listing of AI systems as co-authors, on the grounds that responsibility for verified factuality can lie only with human authors. They also require researchers to document their use of assistance systems transparently. Unlike “traditional” plagiarism, text produced with generative AI will become increasingly difficult to identify with detection software. This makes the principles of academic integrity all the more important.

Could academia lose credibility?

Indeed, the greatest danger for academia lies in the considerable reputational risks to which an ill-considered and careless – or even destructive – use of generative AI can lead. We can assume, for example, that the output of academic publications with little novelty value or knowledge gain will continue to increase. An even greater danger comes from the ease with which the systems could be used to run pseudo-scientific disinformation campaigns aimed at a wide audience.


You see opportunities regarding AI in the classroom. Could you describe two or three concrete examples of how these systems could be useful when teaching students?

It can be enlightening to have groups of students work on a question in parallel, first with, and then without, AI support. They can then critically compare the results. Or students can learn about the limitations of AI by assessing machine-generated research reports and bibliographies.

“Prompt engineering” and “prompt revision” are methods for finding the most effective formulations with which to prompt AI content generators. This is not only a valuable competence for students to acquire, but also offers them the opportunity to learn how to actively apply AI systems in their field. A task for students – alone or in groups – could therefore be to formulate suitable prompts for challenges typical of their field, or to use the iterative “prompt revision” method to arrive at useful outputs.

Instructors could discuss with students the different contexts in which the use of AI systems would help in tackling challenges in their field; and then also examine which skills they need to achieve results in their profession, within or outside academia, without using AI. This would promote their intrinsic motivation for independent learning.

Will AI lead to a preference for oral over written exams in the future?

Overall, instructors are advised to revise the formats of assessments and their corresponding questions. Where student numbers allow, it makes sense to adopt different forms of assessment, including interactive oral formats such as panel discussions, debates or short presentations. However, in study programs with high student numbers, written online exams will continue to play a central role. These are now usually conducted in a controlled environment on campus using a secure exam browser, or they’re explicitly designed as open-book exams where the use of aids is permitted and questions are formulated in such a way that AI content generators create no added value. It may also be worth trying out other written exam formats in the future, such as writing short essays under supervision on campus or producing audiovisual media such as a video, podcast or website content.


Will specialist expertise lose importance because of AI systems or, on the contrary, will it become more important?

Generative AI produces content based on probabilities, not factuality. The ability to independently verify information and hypotheses will become an even greater key academic competence. Responsibility for the result lies entirely with the human author; it cannot be delegated to any assistance systems.

In light of the advent of AI systems, you advocate teaching students right from the start of their studies the principles of good research practice, i.e., academic integrity. Why?

Besides introducing students to the basic principles and methods in their field, teaching and applying the principles of academic integrity have always been a core element of the introductory phase of any study program. In view of the growing importance of generative AI, there’s now the added need for students to reflect on the significance of authorship. This reflection should also consider to what extent – and under which transparency rules – these systems can be used in compliance with the principles of good academic practice.

So adherence to academic standards, i.e. academic integrity, is pivotal for dealing judiciously with AI systems?

Absolutely. This is also the view of the European University Association, the umbrella organization of European universities and university rectors’ conferences. After all, what’s ultimately at stake is nothing less than the preservation of the credibility and reputation of universities and academia as a whole – the academy’s greatest asset and the basis of its social acceptance.

Are there any binding guidelines or recommendations planned by the Executive Board of the University (or other bodies) for the use of AI systems in teaching?

University-wide binding guidelines for the use of generative AI in teaching are not planned, as their implications vary greatly according to the field of study. It’s therefore up to the individual subject communities, institutes and faculties to publish subject-specific guidelines where necessary. Students need to know for what purposes and under which conditions they’re permitted to use generative AI tools – when writing papers, for example. As already mentioned, what matters most is that students document their use of these assistance systems transparently and take full responsibility for authorship. That said, a small joint working group from the Offices of the Vice Presidents Research and Education has been set up to examine whether our regulations on academic integrity need to be supplemented or amended.


AI systems such as ChatGPT are also changing the professional world – in sectors like text creation (advertising, journalism) or law, for example. What does this mean for universities?

Those responsible for study programs that have a clear professional focus – such as law, psychology or medicine – will need to discuss closely with the various professional associations how graduates’ skills profiles may need to be amended and what this means for each subject’s curriculum. In addition, UZH is enabling students to gain knowledge and skills in future-oriented fields such as machine learning or the social impact of digital transformation by providing cross-faculty courses at the School for Transdisciplinary Studies. Finally, student advisory services and career counseling centers are also called upon to anticipate developments in the different professional fields and include them in their counseling.

With panel discussions, events and continuing education courses, UZH is engaging seriously with the topic of generative AI. What support is it giving its lecturers?

We organize various events, such as those in our series “Teaching Inspiration” in which participants share experiences of good practice. On 30 March, for example, over 60 lecturers took part in a forum discussing specific questions on the subject of AI. With the other Zurich universities also participating in the “LernLab – Higher Education for Digital Skills” project, we’re organizing a series of webinars on the topic of AI in university teaching; and on our Teaching Tools platform we have a list of links and a collection of materials that we keep updated, plus news about upcoming events. Soon to appear on the platform are concrete recommendations and suggestions specifically for UZH lecturers on the use of AI in teaching and on how to deal with student assessments. Finally, with the “open_innovation” funding line of the UZH Teaching Fund (ULF), lecturers and program coordinators can apply for funding for innovative, exemplary and curriculum-relevant development projects, such as the integration of AI in teaching.

AI is developing in leaps and bounds – what advances do you expect in the next two or three years?

The current pace of development is remarkably fast, yet many ethical and legal issues still need to be resolved – which has led tech figures around Elon Musk to call for a six-month moratorium and Italy to ban ChatGPT altogether. Nonetheless, we can continue to expect rapid progress in the development of comprehensive multimodal systems, and soon also of options for training these systems on personal idiosyncrasies such as writing style. Some people claim to have already detected “sparks” of general artificial intelligence in GPT-4. There’s still a huge amount to come.
