This paper examines a project initiated by the Estonian government which, in addition to the concrete elaboration of AI applications in the Estonian administration, sought to evaluate their legality. Within this framework, the project examined from a legal perspective which legal regulations (including data protection rules, but also the EU's proposal for an AI Act) must be observed in the use of AI today, and which ones may have to be taken into account in the future. In addition, the project scrutinized whether Estonian regulations should be supplemented and amended. This paper provides an overview of the current situation regarding the use of AI applications in the Estonian public administration and the proposals made in this context.
Governments face considerable budgetary constraints and a growing complexity of daily life. It should therefore not come as a surprise that they are embracing modern technology to cope with these challenges. Government administrations are increasingly using algorithmic systems to inform, assist, or make decisions. This trend is unlikely to stop, as such systems promise to make decision-making both more effective and more efficient. Nevertheless, the use of algorithmic systems is not a panacea. If a system is designed to inform, advise, or make individual decisions, it will embed a series of rules to process the data and reach an outcome. In the case of decisions, since those rules are supposed to apply to the generality of individuals for whom individual decisions will be made, the question arises whether those rules may have a regulatory character. This question and the problems that come with it will be explored.
This paper compares the legal treatment of administrative mistakes in digital tax procedures in France, Italy, and the Netherlands. It defines administrative mistakes as non-intentional errors or oversights made by citizens when applying for public services and fulfilling their duties before government (e.g., paying taxes). Administrative mistakes often result from a combination of socioeconomic and cultural factors rather than from an intent to commit tax fraud. In an attempt to improve citizens' trust in digital government, some European legislators have designed new rights and tools that take citizens' good faith into account in this context. An example is the French "right to make a mistake", which allows citizens to commit "one administrative mistake in good faith in their lives" without being sanctioned. This paper compares this perspective to other, less systematic approaches to mistakes in Italy and the Netherlands.
This contribution presents the project DigiLaw, funded by the Nordic Council of Ministers. The project aims to clarify the legal framework for the digitization of public administrations within the Nordic-Baltic region. It focuses on the constitutional aspects and human rights to be taken into account when digitizing and automating public services; the proposed EU AI Regulation and its implications for these services will also be considered. The project's contribution is twofold. First, it inquires whether existing national administrative law systems sufficiently ensure compliance with constitutional and human rights obligations. Second, it discusses whether a Nordic-Baltic convention might be a relevant tool to ensure sufficient uniformity of the legal framework, thereby securing opportunities for increased cooperation on public digitalization within the Nordic-Baltic region.