Responsible AI Innovation Group
"Responsible AI Innovation requires addressing the ethical and societal implications of technology in ways that are robust, human-centred and compatible with the realities of society, the industrial sector and policy making. We combine state-of-the-art research with proven expertise in technological translation, governance of emerging technology and multi-stakeholder engagement to promote AI technology for the common good."
- Responsible Research and Innovation
- Ethically aligned design of intelligent and autonomous systems
- Governance of AI and Neurotechnology
The "Responsible AI Innovation" (RAI) group focuses on the technical, governance and ethical aspects of AI-supported innovation. The group's research aims to identify technical and non-technical approaches that enable organizations to successfully translate AI technologies into solutions that promote economic and societal good. We have a special interest in applications with high impact on humans and society, such as health, neuroscience, human-machine interaction and machine-based decision making. Our work addresses end-to-end governance factors that influence innovation, ranging from organisational governance, regulatory and certification requirements, and socio-technical standards to Trustworthy AI and human-centred technology.
As responsible innovation requires effective coordination across multiple stakeholders and disciplines, RAI maintains an extensive network of collaborations with national and international organizations including CLAIRE, ADRA, SATW, the IEEE Standards Organization, IEEE Brain, the Geneva Centre for Security Policy, GESDA, the OECD, and the Institute for Neuroethics.
The Responsible AI Innovation group was created in September 2023; before that date, this line of research was carried out within the MPC group. Projects started prior to that date include:
- Insight: keynotes, trainings, public outreach, scientific diplomacy
- AI consultancy: workshops, expert support, advice, technology and ethical assessment
- Research and development: small to large-scale collaborative projects, third party-funded research, student projects, commercially applicable prototypes
AI4REALNET: AI for REAL-world NETwork operation
AI4REALNET addresses AI-based solutions for critical systems (electricity, railway, and air traffic management) that are modelled by networks that can be simulated, are traditionally operated by humans, and in which AI systems complement and augment human abilities. It has two main ...
Enabling Scientific Diplomacy: Preparation of the GESDA Neurotechnology Compass
The Geneva Science and Diplomacy Anticipator (GESDA) is developing the "Neurotechnology Compass" (NTC), a tool aimed at equipping decision-makers with the navigation tools needed to best support research in neuroscience and neurotechnology and their applications in society. By ...
Denzel, Philipp; Brunner, Stefan; Luley, Paul-Philipp; Frischknecht-Gruber, Carmen; Reif, Monika Ulrike; Schilling, Frank-Peter; Amini, Amin; Repetto, Marco; Iranfar, Arman; Weng, Joanna; Chavarriaga, Ricardo.
Explainable AI in Medicine Workshop, Lugano, Switzerland, 2-3 November 2023.
Chavarriaga, Ricardo; Rickli, Jean-Marc; Mantellassi, Federico.
Strategic Security Analyses
Geneva Centre for Security Policy.
Available from: https://doi.org/10.21256/zhaw-28985