Explainable Artificial Intelligence Group

"Explainable, trustworthy AI-powered products able to utilise the valuable information from complex, multimodal data and engage seamlessly with human experts and users, are the key to the successful application and adoption of AI in practice and harnessing its huge potential to revolutionise sectors and industries and benefit humanity."
Fields of Expertise
- Machine Learning
- Deep Learning
- Explainable/trustworthy AI
- Multimodal AI
- Healthcare
The research focus of the XAI group is on machine learning and deep learning methodology, particularly explainable, trustworthy and multimodal AI, applied to complex decision-making and knowledge-discovery tasks across domains. Our ultimate goal is to enable the successful application of these methods in real-world products that work hand in hand with domain experts and users. More specifically, we address the challenging research problem of developing AI-powered systems that can properly utilise multimodal data, provide human-intelligible information about their outputs, and engage seamlessly with users. Reinforcement learning and causal inference are two additional areas of interest, highly relevant for sequential decision making and knowledge discovery with humans in the loop.

Finally, while we are open to challenging problems in various domains, we are particularly interested in advancing, improving, and digitising healthcare. We envision this by building explainable AI-powered products for disease diagnosis and treatment, and by developing transparent AI-powered approaches that advance the understanding of health and disease, all done collaboratively with users (experts and patients) in the loop, safely and responsibly.
Services
- Insight: keynotes, trainings
- AI consultancy: workshops, expert support, advice, technology assessment
- Research and development: small- to large-scale research projects, third-party-funded research, student projects, commercially applicable prototypes
Team
Head of Research Group
Projects
- Reliable Conversational Domain-Specific Data Exploration and Analysis (ARMADA): The ARMADA doctoral network aims to train 15 versatile and interconnected Early-Stage Researchers (ESRs) to specialise in the overarching area of Conversational Artificial Intelligence (Conversational AI). Ongoing, 03/2025 - 02/2029.
- LINA: Shared Large-scale Infrastructure for the Development and Safe Testing of Autonomous Systems: The goal of the LINA consortium is to create the basis for the largest European (real and virtual) infrastructure for the research, development and safe testing of autonomous systems / UAS (Unmanned Aircraft Systems) for commercial products in the canton of Zurich. Ongoing, 08/2022 - 12/2027.
Publications
- Kimmich, Maximilian; Bartezzaghi, Andrea; Bogojeska, Jasmina; Malossi, Christiano; Vu, Ngoc Thang, 2024. Combining data generation and active learning for low-resource question answering [paper]. In: 33rd International Conference on Artificial Neural Networks (ICANN), Lugano, Switzerland, 17-20 September 2024. Switzerland: Springer. pp. 131-147. Lecture Notes in Computer Science (LNCS); 15022. Available from: https://doi.org/10.1007/978-3-031-72350-6_9
- Wertz, Lukas; Bogojeska, Jasmina; Mirylenka, Katsiaryna; Kuhn, Jonas, 2023. Reinforced active learning for low-resource, domain-specific, multi-label text classification [paper]. In: Findings of the Association for Computational Linguistics: ACL 2023. 61st Annual Meeting of the Association for Computational Linguistics (ACL), Toronto, Canada, 9-14 July 2023. Stroudsburg, PA: Association for Computational Linguistics (ACL). Available from: https://doi.org/10.18653/v1/2023.findings-acl.697