Human rights impact assessment for algorithmic systems? An idea to be implemented for future AI governance in Europe

The proposed AI Act introduces a new 'risk-based approach' to classify different AI systems. However, even though the proposal aims to be "human-centric," it lacks a real focus on fundamental rights and freedoms, particularly when dealing with 'high-risk' technologies such as biometrics. The question, then, is how to implement a 'human rights impact assessment' for algorithmic systems as a path for future AI governance.

Regulating AI: Models between law and technology

The regulation of AI is still a widely debated topic. Different regulatory models are shaped by technological nuances and by the actors involved. It is therefore critical to understand the intersection between law and technology in the regulation of AI, and to underline the constitutional challenges raised by the increasing spread of these systems across sectors and by their social impact.

The European strategy to fight disinformation: Procedural safeguards and co-regulation

Disinformation continues to raise constitutional questions. The primary point is not just the protection of free speech but also the responsibilities of all the actors involved in disseminating false content. In Europe, considerable attention is devoted to addressing this issue. Both the new Code of Practice on Disinformation and the adoption of the Digital Services Act are clear examples of a new strategy that relies on procedure and co-regulation as the ways to address the challenges raised by disinformation in the algorithmic society.

Content moderation between public and private actors

Content moderation is not just a matter of social media governance. Public actors are increasingly involved in this process, for instance, by ordering the removal of content. This collaboration raises challenges for transparency and the rule of law, blurring the boundaries between public and private actors. The primary question is how to address the challenges raised by the increasing involvement of public actors in content moderation.