Responsible AI Innovation Group

"Responsible AI Innovation requires addressing ethical and societal implications of technology in ways that are robust, human-centred and compatible with the realities of society, the industrial sector and policy making. We combine state-of-the-art research with proven expertise on technological translation, governance of emerging technology and multi-stakeholders engagement to promote AI technology for the common good."

Dr. Ricardo Chavarriaga

Fields of Expertise

  • Responsible Research and Innovation
  • Ethically aligned design of intelligent and autonomous systems
  • Governance of AI and Neurotechnology

The "Responsible AI Innovation" (RAI) group focuses on technical, governance and ethical aspects of AI-supported innovation. The research focus of the group is to identify technical and non-technical approaches that enable organizations to successfully translate AI-technologies into solutions that promote economical and societal good. We have a special interest in applications that have high impact on humans, and society  such as health, neuroscience, human-machine interaction and machine-based decision making. Our work address end-to-end governance factors that influence innovation ranging from  organisational governance, regulatory and certification requirements, socio-technical standards, Trustworthy AI and human-centred technology.

As responsible innovation requires effective coordination across multiple stakeholders and disciplines, RAI maintains an extensive network of collaborations with national and international organizations, including CLAIRE, ADRA, SATW, the IEEE Standards Organization, IEEE Brain, the Geneva Centre for Security Policy, GESDA, the OECD, and the Institute for Neuroethics.

The Responsible AI Innovation group was created in September 2023; before that date, this line of research was carried out within the MPC group. Some of the projects started prior to that date are: