Socially acceptable AI and fairness trade-offs in predictive analytics
At a glance
- Project lead : Prof. Dr. Christoph Heitz
- Co-project leads : Dr. Ulrich Leicht-Deobald, Dr. Michele Loi
- Project team : Corinna Hertweck, Dr. Markus Christen
- Project budget : CHF 619'808
- Project status : ongoing
- Third-party funding : SNF (NFP 77 «Digitale Transformation» / project no. 187473)
- Project partners : Universität Zürich / Digital Society Initiative, Universität St. Gallen
- Contact person : Christoph Heitz
Description
Fairness and non-discrimination are basic requirements for
socially acceptable implementations of AI, as these are basic
values of our society. However, the relationship between statistical
fairness concepts, the fairness perceptions of human stakeholders,
and the principles discussed in philosophical ethics is not well
understood.
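To illustrate what a statistical fairness concept looks like in practice, the sketch below computes the demographic-parity difference of a binary decision rule, i.e. the gap in positive-decision rates between two groups. This is a generic textbook metric, not the project's own method; all data and names in the example are invented:

```python
# Minimal illustration of one statistical fairness concept:
# demographic parity compares positive-decision rates across groups.
# All data below is invented for illustration.

def selection_rate(decisions, groups, group):
    """Share of positive decisions among members of `group`."""
    members = [d for d, g in zip(decisions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_diff(decisions, groups):
    """Absolute gap in selection rates between the two groups."""
    rates = {g: selection_rate(decisions, groups, g) for g in set(groups)}
    a, b = rates.values()
    return abs(a - b)

# Hypothetical hiring decisions (1 = offer) for applicants from groups A and B.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(decisions, groups))  # |0.75 - 0.25| = 0.5
```

Demographic parity is only one of several competing criteria (alongside, e.g., equalized odds and calibration), and these criteria are in general mutually incompatible, which is one reason the relation to human fairness perceptions matters.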
The objective of our project is to develop a methodology to
facilitate the development of fairness-by-design approaches for
AI-based decision-making systems. The core of this methodology is
the “Fairness Lab”, an IT environment for understanding, explaining
and visualizing the fairness implications of a ML-based decision
system. It will help companies build socially accepted and
ethically justifiable AI applications, teach fairness to students
and developers, and support informed political decisions on
regulating AI-based decision making.
Conceptually, we integrate statistical approaches from computer
science and philosophical theories of justice and discrimination
into interdisciplinary theories of predictive fairness. Through
empirical research, we study the fairness perceptions of different
stakeholders in order to align the theoretical approach with them.
The utility of
the Fairness Lab as a tool for helping to create “fairer”
applications will be assessed in the context of participatory
design. With respect to application areas, we focus on employment
and education.
Our project makes a significant contribution to the understanding
of fairness in the digital transformation and to promoting improved
conditions for the deployment of fair and socially accepted AI.
Publications
-
Thouvenin, Florent; Volz, Stephanie; Weiner, Soraya; Heitz, Christoph,
2024.
Jusletter IT.
Available at: https://doi.org/10.38023/9642ed9a-5c05-4884-b5b9-ebc66f2f3324
-
Baumann, Joachim; Sapiezynski, Piotr; Heitz, Christoph; Hannák, Anikó,
2024.
Fairness in online ad delivery [Paper].
In:
Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency.
7th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Rio de Janeiro, Brazil, 3-6 June 2024.
Association for Computing Machinery.
pp. 1418-1432.
Available at: https://doi.org/10.1145/3630106.3658980
-
Scantamburlo, Teresa; Baumann, Joachim; Heitz, Christoph,
2024.
On prediction-modelers and decision-makers : why fairness requires more than a fair prediction model.
AI & Society.
Available at: https://doi.org/10.1007/s00146-024-01886-3
-
Baumann, Joachim; Castelnovo, Alessandro; Cosentini, Andrea; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele,
2023.
Bias on demand : investigating bias with a synthetic data generator [Paper].
In:
Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence.
32nd International Joint Conference on Artificial Intelligence (IJCAI), Macao S.A.R., 19-25 August 2023.
International Joint Conferences on Artificial Intelligence Organization.
pp. 7110-7114.
Available at: https://doi.org/10.24963/ijcai.2023/828
-
Baumann, Joachim; Loi, Michele,
2023.
Fairness and risk : an ethical argument for a group fairness definition insurers can use.
Philosophy & Technology.
36(45).
Available at: https://doi.org/10.1007/s13347-023-00624-9
-
Baumann, Joachim; Castelnovo, Alessandro; Crupi, Riccardo; Inverardi, Nicole; Regoli, Daniele,
2023.
Bias on demand : a modelling framework that generates synthetic data with bias [Paper].
In:
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency.
6th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Chicago, USA, 12-15 June 2023.
Association for Computing Machinery.
pp. 1002-1013.
Available at: https://doi.org/10.1145/3593013.3594058
-
Hertweck, Corinna; Räz, Tim,
2022.
Gradual (in)compatibility of fairness criteria [Paper].
In:
Proceedings of the 36th AAAI Conference on Artificial Intelligence : Vol. 36 No. 11: IAAI-22, EAAI-22, AAAI-22 Special Programs and Special Track, Student Papers and Demonstrations.
36th AAAI Conference on Artificial Intelligence, online, 22 February–1 March 2022.
Palo Alto, CA:
Association for the Advancement of Artificial Intelligence.
pp. 11926-11934.
Available at: https://doi.org/10.1609/aaai.v36i11.21450
-
Loi, Michele; Heitz, Christoph,
2022.
In:
Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency.
5th ACM Conference on Fairness, Accountability, and Transparency (FAccT), Seoul, Republic of Korea, 21-24 June 2022.
Association for Computing Machinery.
pp. 2026-2034.
Available at: https://doi.org/10.1145/3531146.3533245
-
Hertweck, Corinna; Heitz, Christoph,
2021.
A systematic approach to group fairness in automated decision making [Paper].
In:
Proceedings of the 8th SDS.
8th Swiss Conference on Data Science, Lucerne, Switzerland, 9 June 2021.
IEEE.
pp. 1-6.
Available at: https://doi.org/10.1109/SDS51136.2021.00008
-
2021.
Digitale Transformation : wie fair sind Algorithmen?
In:
NGW-Vorträge Freitagabend, Winterthur (online), 12 March 2021.
Available at: https://www.ngw.ch/vortraege-archiv/digitale-transformation-wie-fair-sind-algorithmen