Fairness in Machine Learning

The idea behind this project is to use human names as a proxy for protected attributes (gender, age, ethnicity, race, etc.). The classification loss is supplemented with an additional term that minimizes the correspondence between name clusters and the model's predictions (specifically, occupation and income).
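As a rough illustration of how such a supplementary term can be combined with the classification loss, below is a minimal sketch, not the repository's actual implementation. It assumes PyTorch; the function names, the lambda_fair weight, and the use of a covariance penalty between name embeddings and predicted probabilities are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def covariance_fairness_loss(name_embeddings, probabilities):
        # Center the name representations and the predicted probabilities,
        # then penalize the norm of their cross-covariance so that the
        # predictions carry as little information about the name as possible.
        names_centered = name_embeddings - name_embeddings.mean(dim=0, keepdim=True)
        probs_centered = probabilities - probabilities.mean(dim=0, keepdim=True)
        cov = names_centered.t() @ probs_centered / name_embeddings.size(0)
        return cov.norm(p=2)

    def total_loss(logits, targets, name_embeddings, lambda_fair=1.0):
        # Standard classification loss on occupation (or income) labels ...
        ce = F.cross_entropy(logits, targets)
        # ... supplemented with the fairness term on the softmax probabilities.
        fair = covariance_fairness_loss(name_embeddings, F.softmax(logits, dim=-1))
        return ce + lambda_fair * fair

In this sketch lambda_fair trades off accuracy against fairness: larger values push the predictions to be less correlated with the name representation.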

Publications

A. Romanov, M. De-Arteaga, H. Wallach, J. Chayes, C. Borgs, A. Chouldechova, S. Geyik, K. Kenthapadi, A. Rumshisky, A. Kalai. What's in a Name? Reducing Bias in Bios without Access to Protected Attributes. Proceedings of NAACL-HLT 2019. Minneapolis, MN.

@inproceedings{romanov2019s,
  title={What's in a Name? Reducing Bias in Bios without Access to Protected Attributes},
  author={Romanov, Alexey and De-Arteaga, Maria and Wallach, Hanna and Chayes, Jennifer and Borgs, Christian and Chouldechova, Alexandra and Geyik, Sahin and Kenthapadi, Krishnaram and Rumshisky, Anna and Kalai, Adam},
  booktitle={Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)},
  pages={4187--4195},
  year={2019}
}