What Law, and When: Legally Contextualizing Algorithmic Discrimination and De-biasing Design

Algorithmic discrimination occurs when computer applications that are supposed to treat everyone equally turn out to be biased. De-biasing software is designed to neutralize the statistical patterns that reproduce structural inequalities and thereby contribute to algorithmic discrimination. However, it is unclear which legal principles should determine the content of de-biasing software if it is to alleviate legal responsibility for algorithmic discrimination. An algorithm may or may not be legally discriminatory depending on the legal and factual context. We argue that the frequent calls for purely ethical approaches to AI are misguided, because there can be no catch-all solution to the problem of what, in a specific context, constitutes equality. This paper asks: what constitutes algorithmic discrimination, and what duties do public and private entities have under EU and human rights law when they deploy de-biasing techniques for those algorithms?
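The claim that there is no catch-all definition of equality can be made concrete with a minimal sketch (not drawn from the paper; the toy data and function names below are invented for illustration): two standard fairness statistics that de-biasing tools commonly target, demographic parity and equal opportunity, can diverge on the same set of decisions, so the choice between them is exactly the kind of contextual, legal judgment the abstract points to.

```python
# Toy illustration: two fairness criteria evaluated on the same decisions.
# All numbers are invented; the point is only that the criteria can diverge.

groups = ["A", "A", "A", "A", "B", "B", "B", "B"]   # protected attribute
y_true = [1,   1,   0,   0,   1,   0,   0,   0  ]   # actually qualified?
y_pred = [1,   1,   0,   0,   1,   0,   0,   0  ]   # model's decisions

def selection_rate(group):
    # Share of the group that receives a positive decision.
    idx = [i for i, g in enumerate(groups) if g == group]
    return sum(y_pred[i] for i in idx) / len(idx)

def true_positive_rate(group):
    # Share of the *qualified* members of the group that receives a positive decision.
    idx = [i for i, g in enumerate(groups) if g == group and y_true[i] == 1]
    return sum(y_pred[i] for i in idx) / len(idx)

# Demographic parity compares raw selection rates across groups.
dp_gap = selection_rate("A") - selection_rate("B")

# Equal opportunity compares selection rates among the qualified only.
eo_gap = true_positive_rate("A") - true_positive_rate("B")

print(f"demographic parity gap: {dp_gap:+.2f}")   # +0.25 (A selected at 0.50, B at 0.25)
print(f"equal opportunity gap:  {eo_gap:+.2f}")   # +0.00 (qualified members treated alike)
```

In this toy case every qualified applicant in both groups is approved, so equal opportunity is satisfied, yet group A receives positive decisions twice as often because the underlying base rates differ. A de-biasing tool tuned to one criterion need not satisfy the other, which is why the legally relevant notion of equality cannot be fixed in advance.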

International Courts Help Diffuse Interests: Examples from Internet Regulation

Mancur Olson famously argued that in conflicts between the diffuse interests of most of society and small interest groups, the small interest groups will triumph. But a new book by Gunnar Trumbull, "Strength in Numbers", challenges Olson's theory. Trumbull argues that diffuse interests can succeed in affluent democracies because no major policy debate can be won in these democracies without recruiting at least two of three policy actors: state policymakers, industry, and social activists. To cement a coalition between two such actors, a legitimating narrative is needed. When they adjudicate information technology disputes, international courts, primarily the CJEU, promote a narrative of protecting people's privacy and their right to accurate information. This judicial intervention can break the coalition between industry and the state and instead forge a new coalition between social activists and the state, giving diffuse interests a chance to win.

Judicial Review of Algorithmic Decision Making

In recent years, government bodies in technologically advanced countries have come to rely increasingly on artificial intelligence and machine-learning algorithms to form and implement public policy. These new technologies pose a serious threat to core principles of public administration, such as transparency and reason-giving. This paper explores the potential role of the judiciary in mitigating the accountability deficit created by governmental use of algorithmic decision making. It discusses the challenges that courts face when they review automated or semi-automated governmental decisions, and examines the methods and strategies that courts can employ to address these challenges. Finally, the paper discusses the implications of these judicial strategies for both administrative and judicial legitimacy.