‘Human oversight’ in the draft EU AI regulation. What, when and by whom?

How should we make full use of AI technologies without risking or causing harm to society and its citizens? With the AIA, the EU takes a cautious stance towards letting the development and operation of AI systems run 'loose', imposing a fairly extensive set of restrictions on AI deployment and safeguarding requirements tied to their use. It recognizes 'human centric' design and use of AI systems as one of the key principles safeguarding the fundamental rights of affected citizens. This is visible in part in Article 14, which requires that all so-called 'high-risk' AI systems be designed and developed so that they can be effectively overseen by natural persons during their use. Notably, large parts of the public sector's use of AI systems may qualify as 'high-risk'. This paper revisits the academic discussions on the need for human oversight in automated decision-making and analyses what type of oversight requirements the proposed Article 14 entails.