How We've Taught Algorithms to See Identity: Constructing Race and Gender in Image Databases for Facial Analysis
Abstract
Race and gender have long sociopolitical histories of classification in technical infrastructures, from the passport to social media. Facial analysis technologies are particularly pertinent to understanding how identity is operationalized in new technical systems. What facial analysis technologies can do is determined by the data available to train and evaluate them. In this study, we focus specifically on this data by examining how race and gender are defined and annotated in image databases used for facial analysis. We found that the majority of image databases rarely contain underlying source material for how those identities are defined. Further, when they are annotated with race and gender information, database authors rarely describe the process of annotation. Instead, classifications of race and gender are portrayed as insignificant, indisputable, and apolitical. We discuss the limitations of these approaches given the sociohistorical nature of race and gender. We posit that the lack of critical engagement with this nature renders databases opaque and less trustworthy. We conclude by encouraging database authors to address both the histories of classification inherently embedded into race and gender, and their own positionality in embedding such classifications.