‘Human oversight’ in the draft EU AI regulation. What, when and by whom?

How should we utilize AI technologies to the fullest without risking or causing harm to society and its citizens? With the draft AI Act (AIA), the EU displays a hesitant position towards letting the development and performance of AI systems run ‘loose’, imposing a fairly high number of restrictions on AI deployment and safeguarding requirements linked to its use. It recognizes ‘human-centric’ design and use of AI systems as one of the key principles safeguarding the fundamental rights of affected citizens. This is in part visible in Article 14, which requires all so-called ‘high-risk’ AI systems to be designed and developed so that they can be effectively overseen by natural persons during their use. Notably, large parts of the public sector’s use of AI systems may qualify as ‘high-risk’. This paper revisits the academic discussion on the need for human oversight in automated decision-making and analyses what type of oversight requirements the proposed Article 14 entails.

Proportionality Principle for the Ethics of Artificial Intelligence

This presentation explores the principle of proportionality as a possible solution to unresolved problems arising from the tensions among principles in various ethical frameworks for AI. Conceptual and procedural divergences among these sets of principles leave it uncertain which ethical principles should be prioritized and how conflicts between them should be resolved. Moreover, the currently dominant AI methods carry externalities, in particular for the environment. The principle of proportionality, together with a framework of tests of necessity, desirability, and suitability, can address some of the underlying issues and ensure that other societal priorities are properly taken into account. It is argued that, at least in certain scenarios, the perceived tensions are false dichotomies. Proportionality presents a set of conditions that must be satisfied to justify the use of certain AI methods, and it can be further extended to justifying the use of AI systems as such for a particular purpose.

Against Transparency

Society and many legal scholars regard transparency as a universal solution to the opacity of AI systems. Sunlight is viewed as “the best disinfectant” for systems enveloped in secrecy and thus as a trust-enhancing mechanism that needs to be promoted. Drawing on multidisciplinary literature on the legal principle of transparency, the management of visibilities, and explainable and accountable AI, this paper argues against these traditional perceptions of transparency in automated systems. Instead, it posits that transparency obligations should be rethought in order to protect the rule of law meaningfully. This new perspective is needed for several reasons. For example, while some tech-savvy stakeholders can easily “game” the system, others can overwhelm legal actors with immense amounts of information. In both cases there is sufficient transparency in the sense of information disclosure, but the exercise of procedural rights may be rendered meaningless.

The Regulation of Cybersecurity of Autonomous Vehicles Using a Law and Economics Approach

Autonomous vehicles promise considerable social benefits, but they are still fraught with risks, including their vulnerability to cyber-attacks. This AI application lacks a suitable legal and cybersecurity framework. This paper inquires into the optimal regulatory response to the cybersecurity risks associated with automated vehicles. It adopts a Law and Economics approach, more specifically the theory of optimal enforcement. Rather than focusing only on the regulation of technical deficiencies, this paper argues that many security systems implemented in automated vehicles fail because of the design of the individual decision-making incentives given to different actors. It employs the theory of optimal enforcement to examine which enforcement mechanisms (private enforcement through liability, or public enforcement through administrative and criminal enforcement) will provide the most suitable incentive structures for achieving a high level of security in automated vehicles.