Surveillance, data-driven inferencing and the rule of law

To what extent should a government respond to citizen needs and deliver services, not by asking citizens their preferences, but by predicting those preferences through surveillance and data-driven inferencing? To what extent ought a government to take advantage of risk assessment tools, social media analytics and public surveillance (with face-matching) to identify threats and respond accordingly? Is statistical accuracy the primary metric, or are there deeper concerns with a state classifying people before deciding what services to deliver, what decisions to make, or what powers to exercise?
This talk will explore our ability to answer these questions by reference to rule of law thinking and values. It will also consider the Australian government’s apparent move away from the rule of law as a “red line” regarding electronic surveillance. How will the rule of law need to “adapt” to socio-technical change, as governments exploit AI tools to better understand, monitor and control their citizens?

The Judicial Function and the Rule of Law as Limiting Factors in the Use of AI for Legal Interpretation

The response necessitated by the COVID-19 pandemic has effected a cultural shift towards the use of digital technology in the courtroom. This increased use of digital justice technologies invites consideration of just how far such usage can go before we come up against hard boundaries inherent in the judicial function and the rule of law.
It is at the point where AI and algorithmic decision-making seek to play a role in authoritative legal interpretation that such concerns become acute. While these technologies promise consistency and efficiency, their use in this role is antithetical to our existing conceptions of the separation of powers in a modern liberal democracy. Recognition of this inconsistency is obscured by persistent myths of legal formalism and legalism, which continue to promise the possibility of legal objectivity. Confronting the limits of the proper use of technology demands that we be honest about the true nature of judicial decision-making and the discursive nature of law itself.

Fears of Artificial Intelligence: The Exercise of Public Power by Algorithms and the Rule of Law

Many of the conceptions of the Rule of Law that we use to determine the meaning and content of the concept were authored at times of great political and societal unrest. Locke, Dicey, and Hayek wrote in response to substantial fears they held about the exercise of power. These fears, whether of arbitrary royal or papal power, of the expansion of the administrative state, or of totalitarian central control, shaped the content of their conceptions.

Given that their revolutionary formulations of Rule of Law ideas remain influential, it is no exaggeration to suggest that contemporary ideas about what the Rule of Law is are shaped by fear, real or perceived. In this paper, I consider how fears associated with the operation and implementation of artificial intelligence in the exercise of public power may shape future revolutionary conceptions of the Rule of Law.