Fairness in Automated Predictions and Decisions

Machine learning classifiers are increasingly used to inform, or even directly make, decisions that can significantly affect human lives. Fairness concerns have spawned a number of contributions aimed at both identifying and avoiding unfairness in algorithmic decisions, in particular with regard to different groups. Many of these contributions focus on how the accuracy of a predictive system is affected by input data that do not reflect the statistical composition of the population and the corresponding base rates. Other scholars have instead focused on statistical differences in the classifications of different groups, independently of how such differences are generated. The presentation addresses the second kind of concern and critically discusses those “fairness metrics” that require the equalisation of statistics between groups (e.g., demographic parity, equality of opportunity, treatment equality).
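As a minimal sketch of what these metrics compare (not part of the presentation itself), the following Python fragment computes, for each of two groups, the per-group statistic that each metric asks to be equalised: the positive-prediction rate (demographic parity), the true-positive rate (equality of opportunity), and the ratio of false negatives to false positives (treatment equality). The function name, the gap statistics, and the randomly generated toy data are illustrative assumptions.

import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group statistics compared by common group-fairness metrics.

    group is a boolean mask selecting the members of one group."""
    yt, yp = y_true[group], y_pred[group]
    tp = np.sum((yt == 1) & (yp == 1))
    fp = np.sum((yt == 0) & (yp == 1))
    fn = np.sum((yt == 1) & (yp == 0))
    return {
        "positive_rate": yp.mean(),       # P(Y_hat = 1 | group): demographic parity
        "tpr": tp / max(tp + fn, 1),      # P(Y_hat = 1 | Y = 1, group): equality of opportunity
        "fn_fp_ratio": fn / max(fp, 1),   # FN/FP ratio: treatment equality
    }

# Toy data: true labels, predicted labels, and a binary group attribute.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group_a = rng.integers(0, 2, 1000).astype(bool)

ra = group_rates(y_true, y_pred, group_a)
rb = group_rates(y_true, y_pred, ~group_a)

# Each metric demands (approximate) equality of one statistic across groups.
print("demographic parity gap:     ", abs(ra["positive_rate"] - rb["positive_rate"]))
print("equality of opportunity gap:", abs(ra["tpr"] - rb["tpr"]))
print("treatment equality gap:     ", abs(ra["fn_fp_ratio"] - rb["fn_fp_ratio"]))

A nonzero gap indicates that the corresponding statistic differs between the two groups; each metric, in effect, treats a small gap as fair, regardless of how the underlying difference was generated.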