The Human-Centered Computing (HCC) research group focuses on innovative, natural interaction concepts between users and digital information. These interactions are increasingly mobile and multimodal, complementing classical input methods with new modalities such as gestures, speech input and output, and mixed-reality interaction. Because today's users place high demands on the usability and user experience of mobile applications and services in particular, these must be developed in a user-centered, participatory manner from the outset and continuously evaluated with users.
When developing interaction concepts, it is equally important that the interaction is accessible to as many users as possible, including people with impairments and older people.
In Mobile Computing and Visual Computing, we explore the latest interaction modalities (speech, gestures, augmented reality, virtual reality) and develop innovative mobile applications and services with special requirements in a user-centered way. The sensors and interaction capabilities of today's and future mobile devices are exploited to achieve an optimal user experience.
Speech and sound (sonification) are important modalities of Natural User Interfaces, especially for applications with limited screen interaction (like smartwatches or hands-free AR). Deep-learning-based speech recognition has made tremendous progress in recent years, and promising applications are now possible, such as live-captioning of videos or conversational assistants.
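As a toy illustration of sonification (not a system from our projects), the sketch below maps a small data series onto pitches between two reference frequencies and writes the result as a WAV file using only the Python standard library; all names and parameter values are invented for the example.

```python
import math
import struct
import wave

SAMPLE_RATE = 22050  # samples per second
TONE_SECONDS = 0.2   # duration of each data tone

def sonify(values, low_hz=220.0, high_hz=880.0):
    """Map each value linearly onto a pitch between low_hz and high_hz
    and return the concatenated 16-bit PCM samples of the tones."""
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1.0
    samples = []
    for v in values:
        freq = low_hz + (v - vmin) / span * (high_hz - low_hz)
        for i in range(int(SAMPLE_RATE * TONE_SECONDS)):
            amp = math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            samples.append(int(amp * 32767 * 0.5))  # half amplitude
    return samples

def write_wav(path, samples):
    with wave.open(path, "wb") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack("<%dh" % len(samples), *samples))

samples = sonify([1, 3, 2, 5, 4])    # rising/falling trend becomes pitch
write_wav("trend.wav", samples)
```

A blind user hearing the resulting file perceives the trend of the series as rising and falling pitch, without any screen interaction.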
The "AR Cloud" concept addresses the web-based, persistent enrichment of indoor and outdoor spaces. We develop collaborative AR/VR applications that let users actively shape the content of the "Metaverse" and experience it intelligently through "Scene Understanding".
Current research projects and topics
In the Innosuisse flagship project "Data-driven transformation of surgical education for proficiency-based performance", a completely new concept for the practical training of surgeons is being developed, based on innovative virtual and augmented reality simulators.
ICT Accessibility focuses on the research and development of ICT-based solutions that reduce barriers for people with disabilities and older people, whether through accessible user interfaces, accessible digital information, or accessible mobility. Increasingly, we use AI-based approaches to do so.
Current research projects and topics
In the project "Accessible Scientific PDFs for all", jointly funded by the Swiss National Science Foundation and Innosuisse, we are researching how scientific PDF documents can be made accessible so that the research literature becomes usable for people with visual impairments. This involves innovative deep-learning-based approaches that recognize formulas in PDFs and convert them into a readable and navigable form.
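The final conversion step, turning a recognized formula into text a screen reader can speak, can be illustrated with a deliberately simplified rule-based sketch. A real system needs a full LaTeX parser; the rule set and function names below are invented for illustration.

```python
import re

# Minimal, illustrative rewrite rules for a tiny LaTeX subset.
RULES = [
    (re.compile(r"\\frac\{([^{}]*)\}\{([^{}]*)\}"), r"\1 over \2"),
    (re.compile(r"\\sqrt\{([^{}]*)\}"), r"the square root of \1"),
    (re.compile(r"\^\{([^{}]*)\}"), r" to the power of \1"),
    (re.compile(r"\^(\w)"), r" to the power of \1"),
    (re.compile(r"\\pi"), "pi"),
    (re.compile(r"\\cdot"), " times "),
]

def speak_formula(tex: str) -> str:
    """Rewrite a small subset of LaTeX into screen-reader-friendly text."""
    out = tex
    for pattern, repl in RULES:
        out = pattern.sub(repl, out)
    return re.sub(r"\s+", " ", out).strip()

print(speak_formula(r"\frac{a}{b}"))     # a over b
print(speak_formula(r"x^2 + \sqrt{y}"))  # x to the power of 2 + the square root of y
```

The same textual form can also be exposed as alternative text in a tagged PDF, making the formula navigable rather than an opaque image.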
In various projects in the area of "Accessible Mobility", we are also researching how people with mobility impairments can easily find accessible routes from a starting point to a destination and be guided safely along them.
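At its core, accessible route finding can be framed as shortest-path search over a pedestrian graph in which edges that are unusable for a given profile (for example, stairs for wheelchair users) are simply skipped. A minimal sketch with an invented toy graph, not data from our projects:

```python
import heapq

# Each edge: (neighbour, length_in_metres, step_free).
# A wheelchair routing query ignores edges that are not step-free.
GRAPH = {
    "station":     [("plaza", 120, True), ("stairs_exit", 40, False)],
    "stairs_exit": [("museum", 30, True)],
    "plaza":       [("museum", 80, True)],
    "museum":      [],
}

def accessible_route(graph, start, goal):
    """Dijkstra shortest path that only uses step-free edges."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, length, step_free in graph[node]:
            if step_free and nxt not in visited:
                heapq.heappush(queue, (dist + length, nxt, path + [nxt]))
    return None  # no accessible route exists

print(accessible_route(GRAPH, "station", "museum"))
# (200, ['station', 'plaza', 'museum'])  -- the shorter route via the
# stairs exit (70 m) is rejected because it is not step-free.
```

In practice the edge attributes come from mapping data (kerb heights, gradients, surface quality) rather than a single boolean flag.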
In two projects funded by swissuniversities, we are looking at how to make studying accessible for people with impairments through the use of digital technologies. In collaboration with universities, we conduct joint PhDs in the field of accessibility.
We also evaluate websites and mobile apps for accessibility and develop tools and plug-ins that make digital documents accessible.
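One simple automated check of this kind, finding images without alternative text (a common WCAG 1.1.1 failure), can be sketched with Python's standard-library HTML parser; the page snippet is invented for the example:

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects <img> tags that lack an alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attrs = dict(attrs)
            if "alt" not in attrs:
                self.missing_alt.append(attrs.get("src", "<no src>"))

def check_alt_text(html: str):
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt

page = '<img src="logo.png" alt="ZHAW logo"><img src="chart.png">'
print(check_alt_text(page))  # ['chart.png']
```

Real evaluations combine such automated checks with manual expert review and tests with assistive-technology users, since most WCAG criteria cannot be verified automatically.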
Mastering Accessibility of Open Educational Resources
This project is part of a wider swissuniversities initiative called “Swiss Digital Academy”, which deals with open educational resources and open educational platforms. The specific goal of the project “Mastering Accessibility of Open Educational Resources” is to provide knowledge of accessible open educational ...
ADDSA - Advanced Diagnostics Data and Service Architecture
In the ADDSA project, an advanced data and service architecture is being developed to meet industrial needs for insights into equipment.
This demonstration shows participants the exciting possibilities of machine learning applied to speech, showcasing its benefits in scenarios such as helping people with speech impairments regain their voices and improving dubbing technologies. Crucially, it also demonstrates the ever-growing threats ...