Human-Centered Computing

We Design Human Interaction with Information

The Human-Centered Computing (HCC) research group focuses on innovative, natural interaction concepts between users and digital information. These interactions are increasingly mobile and multimodal, adding new modalities such as gestures, speech input and output, and mixed-reality interactions to the classical ones. Because today's users place high demands on the usability and user experience of mobile applications and services in particular, these must be developed in a user-centered and participatory manner from the very beginning and continuously evaluated with users for usability.

When developing interaction concepts, it is equally important to ensure that the interaction is accessible to as many users as possible, including people with impairments and older people.

Research Areas

Mobile and Visual Computing

In Mobile and Visual Computing we explore the latest interaction modalities (speech, gestures, augmented reality, virtual reality) and pursue the user-centered development of innovative mobile applications and services with special requirements. We exploit the sensor technology and interaction capabilities of today's and tomorrow's mobile devices to achieve an optimal user experience.

Speech and sound (sonification) are important modalities of Natural User Interfaces, especially for applications with limited screen interaction (like smartwatches or hands-free AR). Deep-learning-based speech recognition has made tremendous progress in recent years, and promising applications are now possible, such as live-captioning of videos or conversational assistants.

The "AR Cloud" concept addresses the web-based, persistent enrichment of indoor and outdoor spaces. We develop collaborative AR/VR applications that let users actively shape the content of the "Metaverse" and experience it intelligently thanks to scene understanding.

Current research projects and topics

In the Innosuisse flagship project "Data-driven transformation of surgical education for proficiency-based performance", a completely new concept for the practical training of surgeons is being developed, based on innovative virtual and augmented reality simulators.


ICT-Accessibility

ICT-Accessibility focuses on the research and development of ICT-based solutions that reduce barriers for people with disabilities and older people, whether through barrier-free user interfaces, barrier-free access to digital information, or barrier-free mobility. We increasingly use AI-based approaches in this work.

Current research projects and topics

In the project "Accessible Scientific PDFs for all", jointly funded by the Swiss National Science Foundation and Innosuisse, we are researching how scientific PDF documents can be made accessible so that the research literature is also available to people with visual impairments. This involves innovative deep-learning-based approaches that recognize formulas in PDFs and convert them into a readable and navigable form.
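The project's actual deep-learning pipeline is not reproduced here, but the final conversion step can be illustrated with a small sketch: once a formula has been recognized as LaTeX, it still has to be rendered into a phrase a screen reader can speak. The following rule-based converter handles only a tiny subset of notation and is purely illustrative.

```python
import re

def latex_to_speech(formula: str) -> str:
    """Turn a small subset of LaTeX math into a screen-reader-friendly
    phrase. Illustrative only -- real systems cover far more notation
    and handle nesting properly."""
    text = formula
    # \frac{a}{b} -> "a over b" (non-nested fractions only)
    text = re.sub(r"\\frac\{([^{}]*)\}\{([^{}]*)\}", r"\1 over \2", text)
    # Exponents and subscripts
    text = re.sub(r"\^\{([^{}]*)\}", r" to the power of \1", text)
    text = re.sub(r"\^(\w)", r" to the power of \1", text)
    text = re.sub(r"_\{([^{}]*)\}", r" sub \1", text)
    text = re.sub(r"_(\w)", r" sub \1", text)
    # A few common commands and symbols
    for cmd, word in [(r"\\sqrt", "square root of"), (r"\\cdot", "times"),
                      (r"\\pi", "pi"), (r"\\sum", "sum of")]:
        text = re.sub(cmd, word, text)
    # Drop remaining braces and collapse whitespace
    text = text.replace("{", " ").replace("}", " ")
    return re.sub(r"\s+", " ", text).strip()
```

For example, `latex_to_speech(r"\frac{x^2}{2}")` yields "x to the power of 2 over 2". Making such output both unambiguous and navigable is precisely where the hard research questions lie.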

In addition, in several projects in the area of "Accessible Mobility" we are researching how people with mobility impairments can easily find accessible routes from a starting point to a destination and be guided safely along them.
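At its core, barrier-aware routing restricts path finding to segments a user can actually traverse. The sketch below shows the idea with Dijkstra's algorithm over an invented toy street graph whose edges carry a length and a step-free flag; it is a minimal illustration, not the projects' actual routing engine or data model.

```python
import heapq

# Hypothetical street graph: (neighbour, length in metres, step_free).
# Node names and distances are invented for illustration.
GRAPH = {
    "station": [("market", 120, True), ("stairs_path", 40, False)],
    "stairs_path": [("library", 30, False)],
    "market": [("library", 90, True)],
    "library": [],
}

def accessible_route(graph, start, goal):
    """Dijkstra's shortest path restricted to step-free edges --
    a minimal sketch of barrier-aware routing."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, step_free in graph.get(node, []):
            # Skip segments with steps: shorter but not passable.
            if step_free and nxt not in seen:
                heapq.heappush(queue, (dist + length, nxt, path + [nxt]))
    return None  # no accessible route exists
```

Here the direct 70 m path via "stairs_path" is rejected, and the router returns the longer but step-free 210 m route via "market". Real systems additionally weigh surface quality, kerb heights, and gradients.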

In two projects funded by swissuniversities, we are investigating how digital technologies can make studying accessible for people with impairments. In collaboration with universities, we conduct joint PhDs in the field of accessibility.

We also evaluate websites and mobile apps for accessibility and develop tools and plugins that make digital documents accessible.
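A flavour of what such evaluation tooling automates: one of the most common accessibility failures is an image without a text alternative (WCAG success criterion 1.1.1). The sketch below uses only Python's standard library to flag `<img>` tags lacking a non-empty `alt` attribute; production checkers such as those we build cover many more criteria.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collects the src of every <img> tag that lacks a non-empty
    alt attribute -- a basic automated WCAG 1.1.1 check."""
    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            alt = attr_map.get("alt")
            if not alt or not alt.strip():
                self.missing.append(attr_map.get("src", "<no src>"))

def find_missing_alt(html: str) -> list:
    """Return the src values of images without usable alt text."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing
```

For instance, `find_missing_alt('<img src="a.png"><img src="b.png" alt="Logo">')` flags only `a.png`. Automated checks like this catch only a fraction of barriers, which is why manual expert and user evaluation remains essential.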