AI and Health

Doctor Digital

Artificial intelligence is already commonplace in healthcare, lightening doctors’ workloads and helping them make medical decisions. The legal issues have yet to be resolved, however.

Michael T. Ganz

Michael Krauthammer’s grand vision is a data warehouse in which medical data can be exchanged worldwide. (Image: Stefan Walter)

The doctor of the future sits in front of a screen, looking up the specific symptoms described to him by a patient during her last visit. Rather than combing through the countless health advice sites indexed by Google, he is searching through a global medical database that collects vast swathes of anonymized patient data and doctors’ reports. Artificial intelligence has standardized the data and sorted it into categories, allowing doctors from every corner of the globe to access it with just a simple keyword search.

Faster and more precise

This scenario is currently still science fiction, but AI is already part of everyday life for a select group of medical professionals, mostly in hospitals. AI-based computer programs are now analyzing the tissue structure of mammograms, for instance – a welcome relief for radiologists, who have to evaluate over 10,000 X-rays every year. And that’s not all: AI is faster – and usually more precise – than humans. “AI doesn’t get tired either. It doesn’t need sleep,” says Michael Krauthammer, professor of medical informatics at the University Hospital Zurich. Krauthammer researches the use of artificial intelligence in the healthcare sector.

Digital warehouse

Krauthammer’s grand vision is a data warehouse in which medical data can be exchanged worldwide. In order to make the right diagnoses and prescribe the right treatments, doctors have to make individualized decisions directly tailored to their patients. If it’s not a routine case, they have to turn to specialized knowledge – for instance, by reading clinical studies, a time-consuming undertaking. When it comes to illnesses that affect the elderly, there is relatively little data available. This presents a real challenge, with people living longer than ever and the age of the average patient climbing. “We should complement findings from clinical studies with data from day-to-day medical practice,” says Michael Krauthammer. This would extend doctors’ horizons beyond the few hundred patients they see in their office to hundreds of thousands of patients worldwide. Filling and maintaining this hypothetical digital warehouse with anonymized patient data is a complex task – and one that could only be achieved with the help of artificial intelligence.

Taking the first baby steps

How many patients took medication X over the course of a certain illness, and how many took medication Y? Which treatment was more successful? Does the more successful treatment have certain limitations? What are the contraindications? These kinds of questions could be quickly and easily answered with a targeted search in the data warehouse, helping doctors from Asia to Africa make medical decisions in their day-to-day duties. There is still a long way to go, however. “Up till now we’ve only taken baby steps,” says Krauthammer. For AI to be able to work with medical data, the data needs to be machine-readable – which often isn’t the case with today’s patient and hospital records. Then comes the next step of developing and training algorithms to compare and group patient data, which only works if the data is standardized using the same terminology and formats. Switzerland’s five university hospitals have taken the first steps and created the Swiss Personalized Health Network, a nationwide repository of harmonized medical data – and a tiny precursor to global harmonization efforts.
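In code, the kind of comparison described above might look like the following minimal sketch. It is purely illustrative: the field names, treatment labels and data are invented for this example and do not correspond to any real warehouse or standard.

```python
# Illustrative sketch only: comparing outcomes for two medications
# across anonymized records. All field names, labels and data are invented.

from collections import Counter

# A toy stand-in for harmonized, anonymized warehouse records.
records = [
    {"illness": "hypertension", "medication": "X", "outcome": "improved"},
    {"illness": "hypertension", "medication": "X", "outcome": "improved"},
    {"illness": "hypertension", "medication": "X", "outcome": "unchanged"},
    {"illness": "hypertension", "medication": "Y", "outcome": "improved"},
    {"illness": "hypertension", "medication": "Y", "outcome": "unchanged"},
    {"illness": "hypertension", "medication": "Y", "outcome": "unchanged"},
]

def success_rate(records, illness, medication):
    """Fraction of matching patients whose outcome was 'improved'."""
    relevant = [r for r in records
                if r["illness"] == illness and r["medication"] == medication]
    if not relevant:
        return None  # no data for this combination
    outcomes = Counter(r["outcome"] for r in relevant)
    return outcomes["improved"] / len(relevant)

rate_x = success_rate(records, "hypertension", "X")  # 2 of 3 improved
rate_y = success_rate(records, "hypertension", "Y")  # 1 of 3 improved
```

The hard part, as the article notes, is not the query itself but getting real-world records into a form where such a comparison is even possible.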

Better quality and risks

Michael Krauthammer is convinced that both doctors and patients stand to benefit from artificial intelligence. General practitioners would have easy access to expert knowledge, and patients would benefit from higher levels of consistency and quality. “But we can’t put all of our hopes in AI,” he warns. One possible risk is that algorithms get too used to certain clusters of symptoms and become incapable of identifying special cases. “There will undoubtedly be new mistakes that arise thanks to AI,” says Krauthammer.

Kerstin Vokinger
“Our legislation assumes that the work of a doctor is carried out by humans, not by machines. We need to redefine the legal framework and adapt it to new developments,” says Kerstin N. Vokinger, professor for health law. (Image: Stefan Walter)

Bandages 

The biggest hurdle in using AI in healthcare is not technical, but legal. Kerstin Noëlle Vokinger studied both law and medicine. She currently serves as chair of health law and digitalization at the UZH Faculty of Law, where she explores questions such as how to regulate AI-based systems in the healthcare sector. As is so often the case with advances in digital technology, the legal framework has yet to catch up with the rapid changes. Currently the law divides medical aids into two basic categories: medicines, which are regulated by extremely strict approval procedures, and medical products, a category that ranges from bandages and hospital beds to prosthetic knees and pacemakers. The approval process for the latter category is much less strict.

Smart software

AI-based software is also considered to be a medical product. “This is despite the fact that a software error could potentially have the same devastating consequences as the side effects of some drugs,” says Vokinger. But how should medical AI be regulated? “Current legislation is reaching its limits in some areas,” Vokinger explains. She names one example from the United States, where diagnostic software for liver and lung cancer was recently approved on the basis of its comparability with other diagnostic methods – something that is allowed by US law. However, she conducted a study and found that software approved in this way was sometimes based on outdated medical findings from as far back as the 1970s. “The approval criteria are inadequate,” she says. “We need to consider a different regulatory approach.”

Regulatory balancing act

Regulatory science is the field that deals with regulatory issues surrounding artificial intelligence, and it’s an area of research that is still in its infancy. According to Vokinger, this kind of regulation is a balancing act between ensuring the greatest possible level of patient safety and not slowing down technological progress. On top of that, modern AI-based software increasingly makes use of machine learning, which means that the underlying algorithms are constantly changing. An authorization procedure that gives one-time, indefinite approval to medical products might be insufficient in this case. “Our legislation assumes that the work of a doctor is carried out by humans, not by machines,” explains Vokinger. “So we need to redefine the legal framework and adapt it to new developments.”

AI as added value 

She is convinced that these regulatory efforts will pay off and believes that solutions should be found for AI that serves the interest of patients – whether it’s in the form of radiology software or Michael Krauthammer’s vision of an international medical database. But will doctors ever be rendered obsolete by artificial intelligence? “No,” says Vokinger. “As we head into the future, AI will probably be an increasingly important tool for doctors, but it won’t replace them. Doctors will continue to treat their patients face-to-face. After all, human social skills will hardly ever be replaced by a robot.”

Michael T. Ganz is a freelance journalist; English translation by Gena Olson
