Ask Siri a question and you’ll immediately get a more or less useful answer. Digital translation systems can render any sentence that you wish from one language to another, be it English, Italian or Mandarin. Such systems that can digitally recognize and process spoken language are already surprisingly good. And yet, if you ask Siri et al. to process sign language, you won’t get very far at all. This, of course, prevents deaf people from accessing the digital world and drawing on many of the services that can make our lives a bit easier.
This wouldn’t be the case if, for example, an avatar – a virtual representation of a person – could transform speech into sign language and vice versa. Sarah Ebling, a computational linguist at UZH, is developing precisely such digital tools to process sign language. It’s a very complex matter, since people who communicate through sign language not only use hand gestures, but also express themselves with their face and upper body. “If a digital system is to understand and interpret sign language, it has to be able to recognize and process all this information instantaneously,” says Ebling.
How to train your AI
The 36-year-old researcher is therefore currently “training” her system, so to speak. This involves feeding large amounts of data – including films and TV programs translated into sign language – into an AI-based program. The computational linguist can use this data to gradually improve the program’s ability to translate words and sentences, and also present signs in a more natural and authentic way. “The latter is particularly important when it comes to gaining acceptance among deaf users,” explains Sarah Ebling, whose research involves working closely with the deaf community as well as the Swiss Federation for the Deaf. “Science in this area has to be participatory, otherwise it won’t succeed,” believes the researcher.
Simultaneously translating avatar
Her research and development projects could one day help facilitate and improve communication between the hearing impaired and the hearing, for example with the help of a mobile app that can immediately translate sign language into spoken words and the other way round. Or a translation system that can simultaneously interpret TV programs and films into sign language. Many TV programs nowadays feature captions for the hard of hearing. However, since the surrounding spoken language is foreign to many people whose first language is sign language, this doesn’t really help them that much. An avatar that can provide simultaneous interpreting into sign language, in contrast, would be a great help.
Unfamiliar linguistic space
Sarah Ebling came to her research topic out of sheer curiosity. During her studies at UZH, her interest in languages drove her to look into sign language. “I wanted to learn a language that was completely different from any spoken language,” she says. “With its visual-gestural element in three-dimensional space, sign language was like a whole new world to me.” And so the computational linguist delved into a linguistic space about which most hearing people know very little; for example, there is not one single sign language for all deaf people. Rather, there are many different sign languages, each with their own distinct grammar and language culture that developed naturally. Sarah Ebling has learned two sign languages so far, Swiss-German Sign Language and American Sign Language.
And the ambitious researcher is currently on her way to a professorship. As a PhD candidate, she was fortunate enough to land a position at the Department of Computational Linguistics that wasn’t tied to an existing research project. “I was free to choose my research topic,” she says. The senior researcher thus chose to explore how language technology can help improve barrier-free access to the digital and online world – a little-researched topic with a strong user focus. This aspect is very important to Sarah Ebling. “My research field is very exciting from a scientific point of view, but it’s also highly relevant to society,” says the researcher.
The same can be said of what is known as plain language, another topic that Sarah Ebling explores in her research. The websites of Swiss authorities and institutions as well as news platforms such as InfoEasy are increasingly providing texts that are written in simple and easy-to-understand language. Such texts help people with difficulties reading or understanding to gain access to information, and benefit people with cognitive impairments as well as children and language learners, for example.
Plain and simple
When it comes to expanding online content in plain language, having an automated translation tool that simplifies written language would be a great asset. This, too, is something the committed computational linguist is working on. As part of a research project funded by the Austrian Research Promotion Agency (FFG), she has joined forces with other researchers to develop an AI-based digital system that can semi-automatically translate relatively complex German text into plain language. The process involves feeding a complex text into the system, which produces a simplified version of the text. This initial suggestion then goes to a human translator, who produces a polished final version.
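The semi-automatic workflow described above – machine suggestion first, human polish second – can be sketched roughly as follows. This is a minimal illustration of the two-step idea, not the project’s actual software: the `machine_simplify` lookup rules and the English example sentence are hypothetical stand-ins for the AI component and for real German text.

```python
def machine_simplify(sentence: str) -> str:
    """Hypothetical stand-in for the AI simplification step.

    A real system would use a trained language model; here a tiny
    lookup of complex-to-plain phrases illustrates the idea.
    """
    replacements = {
        "utilize": "use",
        "in the event that": "if",
        "prior to": "before",
    }
    for complex_phrase, plain_phrase in replacements.items():
        sentence = sentence.replace(complex_phrase, plain_phrase)
    return sentence


def human_review(draft: str) -> str:
    """Stand-in for the human translator who polishes the machine draft.

    In the real workflow this is an interactive editing step; here we
    only capitalize the sentence to mark that a final pass happened.
    """
    return draft.capitalize()


def semi_automatic_translation(text: str) -> str:
    # Step 1: the system produces an initial plain-language suggestion.
    draft = machine_simplify(text)
    # Step 2: a human translator turns the suggestion into the final version.
    return human_review(draft)


print(semi_automatic_translation("you may utilize this service prior to payment"))
# prints "You may use this service before payment"
```

The point of the split is that the machine does the bulk of the rewriting while the human keeps final editorial control – exactly the "initial suggestion, then polished final version" loop the project describes.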
In the long term, the goal is to develop a fully automated system that can turn complex text into plain language without human support and thus make as much online content as possible accessible to users with disabilities. “In the end, we all stand to benefit from such advances,” believes the computational linguist. And for her, this is ultimately what research is all about – making a difference in society.