In the digital age, advances in technology have profoundly altered how governments communicate with the public. As a result, the regulation of online speech is increasingly important to how the public accesses information, including information from state actors that shapes public opinion regarding democratic institutions. The digital age provides new tools for state actors to magnify their voices in unprecedented ways.
This paper addresses the dangers to democracy flowing from disinformation from state actors in the digital age. The insurrection at the US Capitol in 2021 provides a stark example of this problem. The objective of this paper is to tackle the question of whether reforms may be made in the US to limit the government’s capacity to disseminate disinformation without contravening the First Amendment. It is informed by the principle espoused by the US Supreme Court that a primary function of free speech is to strengthen democracy by facilitating an informed populace.
The European Union has launched an important regulatory enterprise aiming to improve the conditions for data sharing and re-use in the private and public sectors. The Data Governance Act and the Data Act are critical regulatory building blocks intended mainly to improve the conditions for the data economy. They should also contribute to improving the efficiency of public services. This paper contributes to the discussion on new models of data governance. Focusing on the public sector, we analyse the preparatory documents in the legislative process of both proposals and build a conceptual framework to clarify the different regulatory solutions that were debated for improving data sharing, from both Government-to-Business and Business-to-Government perspectives. This analysis is then set against the original Open Government Data (OGD) model. It also assesses whether the new instruments strengthen European democratic values and fundamental rights, which the OGD model already promoted.
Literature on platform governance has identified that, through their content moderation practices, social media platforms develop normative orders by performing quasi-legislative and quasi-judicial functions. Among platforms, Meta has developed the most complex system of rules, called the Community Standards. Noting that the literature lacks an in-depth analysis of the content of these rules, our article examines Meta's Community Standards through the lens of conventional legislative drafting. Using qualitative coding methods, we develop a taxonomy based on constitutional and legislative concepts. Our hypothesis is that, unlike laws adopted by states, the Community Standards are written primarily to be implemented by algorithms rather than to guarantee legal certainty for users. With this analysis, we aim to contribute to the discussion on the effects of private norms on respect for human rights globally and on their relationship with democratic accountability.
The role of artificial intelligence in the field of migration is gaining increasing relevance worldwide, especially in the development of "smart borders". That concept covers the use of different technologies with the aim of implementing a more effective and planned management of incoming migration flows. Adopting a constitutional perspective, this paper focuses on the challenges that automated decisions at borders pose for the rights of asylum seekers in the context of the European Union. Particular attention is paid to the control system called iBorderCtrl, recently trialled at some European borders under strong irregular migration pressure. This system aims to detect potential deception by migrants on the basis of facial recognition technology and the measurement of micro-expressions. Its potential and its risks are then analyzed in light of the lack of human oversight during a life-changing phase for the individuals involved.