Suppressing racial bias in image recognition via domain adaptation
Published in Workshop on Circuits and Systems, 2019
Recommended citation: Tejero-de-Pablos, A., & Harada, T. (2019, August). Suppressing Racial Bias in Image Recognition via Domain Adaptation. In Workshop on Circuits and Systems (Vol. 32, pp. 99-104).
Since the emergence of deep learning, the number of autonomous systems integrated into our society has increased enormously. Deep learning methods leverage huge amounts of training data to learn how to extract the features that yield the highest accuracy. Computer vision applications such as face recognition have greatly benefited from deep learning. However, this dependency on the training data causes failures when the dataset is biased, that is, when some members of the population are less likely to be included than others. For example, cases of face misclassification for minority groups (e.g., Black people in the USA) have already been reported. While such misjudgments may not have a great impact on the overall performance of the method, they can lead to unethical situations such as racism and sexism. This paper tackles the recently emerged problem of racial bias in computer vision systems. We propose a domain adaptation methodology to adapt the features extracted for one majority racial group to other, underrepresented groups. Our experimental results show the validity of our approach, opening a path for future research towards racial-bias-free computer vision.
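The abstract does not specify the exact adaptation mechanism, so the following is only an illustrative sketch of one common way to adapt features across groups: adversarial (DANN-style) domain adaptation with a gradient reversal layer, written in PyTorch. All class names, dimensions, and group labels below are hypothetical and not taken from the paper.

import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    # Identity on the forward pass; flips the gradient sign on backward.
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

class BiasAwareClassifier(nn.Module):
    # A shared feature extractor feeds a task head and an adversarial
    # "domain" head that tries to predict the (hypothetical) group label.
    def __init__(self, n_classes, n_groups, feat_dim=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU()
        )
        self.task_head = nn.Linear(feat_dim, n_classes)
        self.domain_head = nn.Linear(feat_dim, n_groups)

    def forward(self, x, lambda_=1.0):
        f = self.features(x)
        y_logits = self.task_head(f)
        # Reversed gradients push the features to become indistinguishable
        # across groups, while the domain head tries to tell them apart.
        d_logits = self.domain_head(GradientReversal.apply(f, lambda_))
        return y_logits, d_logits

# Usage: combine the recognition loss with the adversarial group loss.
model = BiasAwareClassifier(n_classes=10, n_groups=2)
x = torch.randn(8, 3, 64, 64)
y, g = torch.randint(0, 10, (8,)), torch.randint(0, 2, (8,))
y_logits, d_logits = model(x)
loss = nn.functional.cross_entropy(y_logits, y) + \
       nn.functional.cross_entropy(d_logits, g)
loss.backward()

In this kind of setup, the balance between the two losses (here controlled by lambda_) determines how strongly the features are forced to be group-invariant versus discriminative for the recognition task.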
Bibtex:
@inproceedings{tejero2019suppressing,
title={Suppressing Racial Bias in Image Recognition via Domain Adaptation},
author={TEJERO-DE-PABLOS, Antonio and HARADA, Tatsuya},
booktitle={Proceedings of the Workshop on Circuits and Systems},
volume={32},
pages={99--104},
year={2019},
organization={The Institute of Electronics, Information and Communication Engineers (IEICE)}
}