This paper considers simple issues rendered complex by the automation of decisions traditionally made by human government officers. The paper explores three fundamental questions about an automated decision: what is the “decision”; who makes it; and when is it made? The answers to these questions have consequences for whether such decisions are amenable to review under Australian administrative law. The paper considers recent case law on what constitutes a “decision”, and the role of the concepts of an “officer of the Commonwealth” and a “decision” in grounding the jurisdiction of courts to engage in judicial review of government action.
This paper examines how the various traditional ‘grounds’ of review—the categories of legal error that decision-makers commonly make—are affected by the automation of administrative decision-making. It argues that automation has the potential to significantly decrease, and even eliminate, several common kinds of legal error made by human decision-makers. However, recent Australian examples show that, without careful human oversight in the design process, the rate of certain kinds of error—particularly those associated with the fairness of decisions—may dramatically increase, and other legal errors may become impossible for applicants to prove in judicial review proceedings.
This paper explores the implications of automation for procedural fairness and the ways in which procedural fairness doctrine can be developed to deal with technological change. Automation raises concerns for this area of administrative law because it is not clear whether decision-makers in charge of automated systems will be able to sufficiently understand the highly technical information that has been used in an automated decision and to communicate it to the affected person (the ‘explainability’ problem). Another debated area is how to draw the boundaries of automation: will there be some decisions that require a ‘human in the loop’? Further, given that the provision of an oral hearing is a central component of the procedural fairness principles in Australian law, how do those principles interact with automation? Can a machine give an applicant a ‘hearing’?