Socially acceptable AI and fairness trade-offs in predictive analytics
At a glance
- Project leader: Prof. Dr. Christoph Heitz
- Co-project leaders: Dr. Ulrich Leicht-Deobald, Dr. Michele Loi
- Project team: Corinna Hertweck, Dr. Markus Christen
- Project budget: CHF 619'808
- Project status: ongoing
- Funding partner: SNSF (NRP 77 "Digital Transformation" / project no. 187473)
- Project partners: Universität Zürich / Digital Society Initiative, Universität St. Gallen
- Contact person: Christoph Heitz
Description
Fairness and non-discrimination are basic requirements for
socially acceptable implementations of AI, as they are basic
values of our society. However, the relationship between statistical
fairness concepts, the fairness perceptions of human stakeholders,
and the principles discussed in philosophical ethics is not well
understood.
The objective of our project is to develop a methodology that
facilitates fairness-by-design approaches for AI-based
decision-making systems. The core of this methodology is
the "Fairness Lab", an IT environment for understanding, explaining
and visualizing the fairness implications of an ML-based decision
system. It will help companies build socially accepted and
ethically justifiable AI applications, teach students and
developers about fairness, and support informed political decisions
on regulating AI-based decision making.
Conceptually, we integrate statistical approaches from computer
science with philosophical theories of justice and discrimination
into interdisciplinary theories of predictive fairness. Through
empirical research, we study the fairness perceptions of different
stakeholders in order to align the theoretical approach with them.
The utility of the Fairness Lab as a tool for helping to create
"fairer" applications will be assessed in a participatory-design
setting. In terms of application areas, we focus on employment
and education.
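To give a flavor of the statistical fairness concepts mentioned above, the sketch below computes two widely used group-fairness criteria, demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates across groups), for a binary decision system. This is an illustrative example, not the project's actual methodology; the data, group labels "A"/"B", and function names are hypothetical.

```python
def selection_rate(decisions, group, g):
    """Fraction of individuals in group g receiving a positive decision."""
    members = [d for d, grp in zip(decisions, group) if grp == g]
    return sum(members) / len(members)

def demographic_parity_gap(decisions, group):
    """Absolute difference in selection rates between groups A and B."""
    return abs(selection_rate(decisions, group, "A")
               - selection_rate(decisions, group, "B"))

def true_positive_rate(decisions, labels, group, g):
    """Rate of positive decisions among truly qualified members of group g."""
    pos = [d for d, y, grp in zip(decisions, labels, group)
           if grp == g and y == 1]
    return sum(pos) / len(pos)

# Hypothetical hiring example: decision 1 = hired, label 1 = qualified.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels    = [1, 0, 1, 0, 1, 1, 0, 0]
group     = ["A", "A", "A", "A", "B", "B", "B", "B"]

print("Demographic parity gap:", demographic_parity_gap(decisions, group))
print("Equal opportunity gap:",
      abs(true_positive_rate(decisions, labels, group, "A")
          - true_positive_rate(decisions, labels, group, "B")))
```

A core finding of the fairness literature is that such criteria can conflict: a system satisfying demographic parity may violate equal opportunity and vice versa, which is one reason the perception and ethical justification of each criterion matter.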
Our project makes a significant contribution to the understanding
of fairness in the digital transformation and to promoting improved
conditions for the deployment of fair and socially accepted AI.
Publications
-
Loi, Michele; Heitz, Christoph,
2022.
In:
2022 ACM Conference on Fairness, Accountability, and Transparency.
5th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Seoul, Republic of Korea, 21-24 June 2022.
Association for Computing Machinery.
pp. 2026-2034.
Available from: https://doi.org/10.1145/3531146.3533245
-
Hertweck, Corinna; Heitz, Christoph,
2021.
A systematic approach to group fairness in automated decision making [paper].
In:
Proceedings of the 8th SDS.
8th Swiss Conference on Data Science, Lucerne, Switzerland, 9 June 2021.
IEEE.
pp. 1-6.
Available from: https://doi.org/10.1109/SDS51136.2021.00008
-
2021.
Digitale Transformation: wie fair sind Algorithmen?
In:
NGW-Vorträge Freitagabend, Winterthur (online), 12 March 2021.
Available from: https://www.ngw.ch/vortraege-archiv/digitale-transformation-wie-fair-sind-algorithmen