Fair Governance with Humans and Machines

How fair are algorithm-assisted government decisions? Using a set of vignettes in the contexts of predictive policing, school admissions, and refugee relocation, we explore how different degrees of human control affect fairness perceptions and procedural preferences. We implement four treatments that vary the extent of responsibility delegated to the machine and the degree of human control over the decision, ranging from full human discretion, through machine-based predictions with high or low human control, to fully machine-based decisions. We find that machine-based predictions with high human control receive the highest fairness scores and fully machine-based decisions the lowest. These differences can partly be explained by differences in accuracy assessments. Fairness scores follow a similar pattern across contexts, with a negative level effect in the predictive policing context. Our results shed light on the behavioral foundations of several legal human-in-the-loop rules.