Explainable Artificial Intelligence Group

"Explainable, trustworthy AI-powered products able to utilise the valuable information from complex, multimodal data and engage seamlessly with human experts and users, are the key to the successful application and adoption of AI in practice and harnessing its huge potential to revolutionise sectors and industries and benefit humanity."

Dr. Jasmina Bogojeska

Fields of Expertise

  • Machine Learning
  • Deep Learning
  • Explainable/trustworthy AI
  • Multimodal AI
  • Healthcare

The research focus of the XAI group is on machine learning and deep learning methodology, in particular explainable, trustworthy and multimodal AI, to address complex decision-making and knowledge-discovery tasks across domains. Our ultimate goal is to enable the successful application of these methods in real-world products that work hand-in-hand with domain experts and users. More specifically, we address the challenging research problem of developing AI-powered systems that can properly utilise multimodal data, provide human-intelligible information about their outputs, and engage seamlessly with users. Reinforcement learning and causal inference are two additional areas of interest, highly relevant for sequential decision-making and knowledge discovery with a human in the loop.

Finally, while we are open to taking on challenging problems in various domains, we are particularly interested in advancing, improving, and digitising healthcare. We envision this through building explainable AI-powered products for disease diagnosis and treatment, as well as developing transparent AI-powered approaches that advance the understanding of health and disease, all done collaboratively with users (experts and patients) in the loop, safely and responsibly.
