A survey on datasets for fairness‐aware machine learning
Top 10% of 2022 papers
Abstract
As decision-making increasingly relies on machine learning (ML) and (big) data, fairness in data-driven artificial intelligence systems is receiving growing attention from both research and industry. A large variety of fairness-aware ML solutions have been proposed, involving fairness-related interventions in the data, the learning algorithms, and/or the model outputs. A vital part of proposing new approaches, however, is evaluating them empirically on benchmark datasets that represent realistic and diverse settings. In this paper, we therefore survey real-world datasets used for fairness-aware ML, focusing on tabular data as the most common data representation in the field. We begin our analysis by identifying relationships between the different attributes, particularly with respect to the protected attributes and the class attribute, using a Bayesian network. For a deeper understanding of bias in the datasets, we then investigate interesting relationships through exploratory analysis.

This article is categorized under:

- Commercial, Legal, and Ethical Issues > Fairness in Data Mining
- Fundamental Concepts of Data and Knowledge > Data Concepts
- Technologies > Data Preprocessing
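As a toy illustration of the kind of exploratory bias analysis the abstract describes, one can tabulate the positive-class rate per protected-attribute group and compute a demographic (statistical) parity gap. This is a minimal sketch on hypothetical data, not the paper's actual methodology or datasets:

```python
from collections import defaultdict

# Hypothetical toy records: (protected attribute value, class label)
records = [
    ("female", 1), ("female", 0), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 0),
]

def positive_rate_by_group(rows):
    """Fraction of positive class labels per protected-attribute group."""
    pos = defaultdict(int)
    tot = defaultdict(int)
    for group, label in rows:
        tot[group] += 1
        pos[group] += label
    return {g: pos[g] / tot[g] for g in tot}

rates = positive_rate_by_group(records)
# Demographic parity gap: difference in positive rates between groups
gap = abs(rates["male"] - rates["female"])
print(rates, gap)  # female 0.25, male 0.5, gap 0.25
```

A large gap on such a tabulation is one signal, among others discussed in the survey, that a dataset encodes bias with respect to a protected attribute.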
Related Papers
- Fairness-Aware Ranking in Search & Recommendation Systems with Application to LinkedIn Talent Search (2019), 324 citations
- Context-Aware Recommendations Based on Deep Learning Frameworks (2020), 83 citations
- Personalized DeepInf: Enhanced Social Influence Prediction with Deep Learning and Transfer Learning (2019), 75 citations
- DQRE-SCnet: A novel hybrid approach for selecting users in Federated Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering (2021), 65 citations
- Fair Clustering for Diverse and Experienced Groups (2020)