Socially acceptable AI and fairness trade-offs in predictive analytics
At a glance
- Project leader: Prof. Dr. Christoph Heitz
- Co-project leaders: Dr. Ulrich Leicht-Deobald, Dr. Michele Loi
- Project team: Corinna Hertweck, Dr. Markus Christen
- Project budget: CHF 619'808
- Project status: ongoing
- Funding partner: SNSF (NRP 77 «Digital Transformation», project no. 187473)
- Project partners: Universität Zürich / Digital Society Initiative, Universität St. Gallen
- Contact person: Christoph Heitz
Description
Fairness and non-discrimination are basic values of our society and
therefore basic requirements for socially acceptable implementations
of AI. However, the relation between statistical fairness concepts,
the fairness perceptions of human stakeholders, and the principles
discussed in philosophical ethics is not well understood.
The objective of our project is to develop a methodology to
facilitate the development of fairness-by-design approaches for
AI-based decision-making systems. The core of this methodology is
the “Fairness Lab”, an IT environment for understanding, explaining
and visualizing the fairness implications of a ML-based decision
system. It will help companies build socially accepted and
ethically justifiable AI applications, teach fairness concepts to
students and developers, and support informed political decisions on
regulating AI-based decision making.
Conceptually, we integrate statistical approaches from computer
science with philosophical theories of justice and discrimination
into interdisciplinary theories of predictive fairness. Through
empirical research, we study the fairness perceptions of different
stakeholders in order to align the theoretical approach with them.
The utility of the Fairness Lab as a tool for helping to create
“fairer” applications will be assessed in a participatory-design
setting. With respect to application areas, we focus on employment
and education.
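To give a concrete flavour of the statistical fairness concepts the project builds on, the sketch below computes two widely used group-fairness criteria, demographic parity and equal opportunity, on a toy set of binary decisions. The data, function names, and group labels are invented for illustration only and are not taken from the Fairness Lab.

```python
# Minimal sketch of two statistical group-fairness criteria.
# All data below are made up for demonstration purposes.

def selection_rate(decisions, groups, g):
    """P(decision = 1 | group = g) -- the quantity compared by demographic parity."""
    in_g = [d for d, grp in zip(decisions, groups) if grp == g]
    return sum(in_g) / len(in_g)

def true_positive_rate(decisions, labels, groups, g):
    """P(decision = 1 | label = 1, group = g) -- the quantity compared by equal opportunity."""
    pos = [d for d, y, grp in zip(decisions, labels, groups) if grp == g and y == 1]
    return sum(pos) / len(pos)

# Toy hiring decisions: 1 = hired; label 1 = qualified; two groups "a" and "b".
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels    = [1, 1, 1, 0, 1, 1, 0, 1]
groups    = ["a", "a", "a", "a", "b", "b", "b", "b"]

# A criterion is satisfied when the corresponding gap between groups is (near) zero.
dp_gap = abs(selection_rate(decisions, groups, "a")
             - selection_rate(decisions, groups, "b"))
eo_gap = abs(true_positive_rate(decisions, labels, groups, "a")
             - true_positive_rate(decisions, labels, groups, "b"))
print(f"demographic parity gap: {dp_gap:.2f}")  # 0.75 vs 0.25 -> 0.50
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 2/3 vs 1/3  -> 0.33
```

As the project's publications on the incompatibility of fairness criteria discuss, such gaps generally cannot all be driven to zero at once, which is why the choice among criteria is an ethical rather than a purely technical question.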
Our project makes a significant contribution to the understanding
of fairness in the digital transformation and to promoting improved
conditions for the deployment of fair and socially accepted AI.
Publications
- Scantamburlo, Teresa; Baumann, Joachim; Heitz, Christoph, 2024. On prediction-modelers and decision-makers: why fairness requires more than a fair prediction model. AI & Society. Available from: https://doi.org/10.1007/s00146-024-01886-3
- Baumann, Joachim; Castelnovo, Alessandro; Cosentini, Andrea; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele, 2023. Bias on demand: investigating bias with a synthetic data generator [paper]. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI), Macao S.A.R., 19–25 August 2023. International Joint Conferences on Artificial Intelligence Organization, pp. 7110–7114. Available from: https://doi.org/10.24963/ijcai.2023/828
- Baumann, Joachim; Loi, Michele, 2023. Fairness and risk: an ethical argument for a group fairness definition insurers can use. Philosophy & Technology. 36(45). Available from: https://doi.org/10.1007/s13347-023-00624-9
- Baumann, Joachim; Castelnovo, Alessandro; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele, 2023. Bias on demand: a modelling framework that generates synthetic data with bias [paper]. In: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT), Chicago, USA, 12–15 June 2023. Association for Computing Machinery, pp. 1002–1013. Available from: https://doi.org/10.1145/3593013.3594058
- Hertweck, Corinna; Räz, Tim, 2022. Gradual (in)compatibility of fairness criteria [paper]. In: Proceedings of the 36th AAAI Conference on Artificial Intelligence, Vol. 36, No. 11, online, 22 February–1 March 2022. Palo Alto, CA: Association for the Advancement of Artificial Intelligence, pp. 11926–11934. Available from: https://doi.org/10.1609/aaai.v36i11.21450
- Loi, Michele; Heitz, Christoph, 2022. In: Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT), Seoul, Republic of Korea, 21–24 June 2022. Association for Computing Machinery, pp. 2026–2034. Available from: https://doi.org/10.1145/3531146.3533245
- Hertweck, Corinna; Heitz, Christoph, 2021. A systematic approach to group fairness in automated decision making [paper]. In: Proceedings of the 8th Swiss Conference on Data Science (SDS), Lucerne, Switzerland, 9 June 2021. IEEE, pp. 1–6. Available from: https://doi.org/10.1109/SDS51136.2021.00008
- 2021. Digitale Transformation: wie fair sind Algorithmen? [Digital transformation: how fair are algorithms?]. In: NGW-Vorträge Freitagabend, Winterthur (online), 12 March 2021. Available from: https://www.ngw.ch/vortraege-archiv/digitale-transformation-wie-fair-sind-algorithmen