EU law establishes “the obligation of the administration to give reasons for its decisions”. This echoes a rule-of-law principle common to many constitutional traditions and is a precondition for fundamental rights connected to powers interfering with personal freedom (as concretised, for example, in the right of access, judicial review, non-discrimination, and self-determination). The increasing trend of delegating decisions affecting personal rights to algorithms (in both the private and public spheres) is deeply changing legal reasoning on the limitation of powers. This paper explores the existing global constitutional law principles on algorithmic decision-making. It inquires whether these principles are effective in tackling new challenges coming from deep-learning algorithms, where “causation” is replaced by “correlation” and decisions therefore rest on no humanly comprehensible reasons.
Robot/AI technologies raise several questions regarding the tenets of modern legal theory. One of these challenges concerns the legal conceptualisation of overlapping and competing forms of regulation, with different sources of legitimacy, procedures, and institutions: 1) ‘hard’ regulation by politically legitimated (i.e. public) actors; 2) ‘soft’ regulation by private and hybrid actors. Unlike in other fields (e.g. corporate codes of conduct), the interface between these forms of regulation in the AI/robotics field has hardly been studied through the lens of legal pluralism. Drawing on institutionalist and systems-theory approaches, this paper aims to fill this gap. It argues that standardisation and ‘soft’ regulation processes in the AI/robotics field are increasingly building legal systems in their own right, and that the resulting interrelations and clashes with politically legitimated law should be managed through conflict-of-laws approaches.
The principle of good administration requires public authorities to carefully prepare their decisions, be transparent and accountable, offer access to information, and be able to explain their decisions. The growing use of algorithms as a supporting or decisive tool for administrative decision-making is nonetheless changing the relationship between citizens and the state. This is often explained by two elements: (i) algorithms are “black boxes”; (ii) the underlying technology is provided by private tech companies that protect the disclosure of algorithms with trade secrets, determine to a certain extent the content of public services, and are likely to influence how public values are protected. Drawing on existing case law from different jurisdictions, this paper explores the meaning of the principle of good administration in the algorithmic state. It inquires into the need for new principles of good administration that enhance the transparency and ethics of algorithmic decision-making.
In what way is the term “cyber” distinct from anything digital? And how should we think about the role of the state in “cyber”? This paper puts forward a provisional definition of “cyber” and then analyses the three distinct roles the state plays in the cybernetic domain: user, superuser, and regulator. It proceeds to focus on one role, the regulator, and outlines the different axes through which cyber may be regulated. Each axis raises its own challenges (within a given jurisdiction as well as transnationally), but also offers opportunities. The paper concludes by highlighting the importance of maintaining alignment between these axes in order to avoid unintended interference.