Bias, awareness, and ignorance in deep-learning-based face recognition

Humanity has developed successful approaches to overcome human biases. Our recent study shows why we need different approaches to reduce bias in machines.

In a recent paper in the journal AI and Ethics, an interdisciplinary team of ZHAW researchers explored an intuitive technique for reducing bias in modern face recognition methods: “blinding”. Blinding removes a neural network model's awareness of sensitive features such as gender or race, and has previously been treated in the literature as a valid debiasing approach. However, we show why it cannot be used to reduce bias.

Even when not designed for this task, face recognition models can deduce sensitive features such as gender or race from pictures of faces, simply because they are trained to determine the “similarity” of pictures: people with similar skin tones, similar hair length, etc. will be seen as similar by the model. When humans make biased decisions, one remedy used in job application screening is to “blind” the decision-makers to sensitive attributes such as gender and race, for example by not showing them pictures of the applicants. Based on a similar idea, one might expect that if face recognition models were less aware of these sensitive features, the accuracy gap between demographic groups would shrink.

We evaluate this assumption, which has already entered the scientific literature as a valid debiasing method, by measuring how “aware” models are of sensitive features and correlating this awareness with the accuracy differences between groups. In particular, we blind pre-trained models to make them less aware of sensitive attributes. We find that awareness does not positively correlate with these accuracy differences, i.e., bias ≠ awareness. In fact, blinding barely affects accuracy at all in our experiments. The seemingly simple strategy of reducing bias in face recognition by reducing a model's awareness of sensitive features thus does not work in practice: trying to ignore sensitive attributes is not a viable path to less biased face recognition.
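To make the two key concepts concrete, here is a minimal sketch (not the paper's actual implementation) of one common way to operationalize them: “awareness” is measured as the accuracy of a linear probe that predicts a sensitive attribute from a model's face embeddings, and “blinding” is performed by projecting the embeddings onto the subspace orthogonal to the probe's weight vector. The embeddings, attribute labels, and dimensions below are synthetic placeholders; in the study, the embeddings would come from a pre-trained face recognition model.

```python
# Hypothetical sketch: probing a model's "awareness" of a sensitive
# attribute and "blinding" its embeddings via linear projection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic 128-d "face embeddings" in which one direction weakly
# encodes a binary sensitive attribute (e.g., gender).
n, d = 2000, 128
attribute = rng.integers(0, 2, size=n)       # sensitive-attribute labels
embeddings = rng.normal(size=(n, d))
embeddings[:, 0] += 1.5 * attribute          # leak the attribute into dim 0

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, attribute, test_size=0.3, random_state=0)

# "Awareness": how well a linear probe recovers the attribute
# from the embeddings (chance level would be ~0.5).
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"awareness before blinding: {probe.score(X_test, y_test):.2f}")

# "Blinding": remove the component of each embedding that lies along
# the probe's (unit-normalized) weight vector.
w = probe.coef_.ravel()
w /= np.linalg.norm(w)

def blind(X):
    """Project embeddings onto the hyperplane orthogonal to w."""
    return X - np.outer(X @ w, w)

probe2 = LogisticRegression(max_iter=1000).fit(blind(X_train), y_train)
print(f"awareness after blinding:  {probe2.score(blind(X_test), y_test):.2f}")
```

On this synthetic data, blinding drives the probe back toward chance-level accuracy, i.e., the awareness measure drops. The study's point is what happens next: applying such blinding to real pre-trained models barely changes their group-wise recognition accuracy, which is why reduced awareness does not translate into reduced bias.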

The full study is available open access as the paper “Bias, awareness, and ignorance in deep-learning-based face recognition” by Samuel Wehrli (ZHAW School of Social Work), PhD students Corinna Hertweck and Mohammadreza Amirian (both ZHAW School of Engineering), Stefan Glüge (ZHAW School of Life Sciences and Facility Management), and Thilo Stadelmann (ZHAW Centre for Artificial Intelligence). It is a late result of the Innosuisse-funded research project Libra, conducted together with the Winterthur-based company Deep Impact AG.