Responses to the threat of disinformation in the EU: Implications of the closure of RT and Sputnik

The EU Council has recently taken the unprecedented step of banning the broadcasting activities of Russia Today (RT) and Sputnik to halt their attempts to spread “disinformation” about the Russian invasion of Ukraine. In this paper, we analyse this development both in its own right and in the light of other recent disinformation campaigns and responses thereto, and we set out some key questions that it raises about communication rights. First, we review a few relevant past instances of closure or blocking of media or social networks. Second, we discuss whether these kinds of restrictions are compatible with the freedoms provided to media and journalists in the free world by constitutions and international declarations and conventions. Third, we critically examine the nature of RT and Sputnik, characterizing them as pseudomedia. We put this finding in the context of a study we conducted within the SN-Disorders project, in which we applied ethical and journalistic criteria to 50 websites and digital media outlets, some of which were involved in the disinformation campaigns surrounding Catalonia’s illegal referendum of 2017 and the Brexit referendum of 2016.

Digital Constitutionalism and Online Content Moderation: Moving Beyond Discourse

This paper critically engages with the legal literature on ‘digital constitutionalism’, understood as the academic and political project of articulating constitutional values (fundamental rights, rule of law, checks and balances, democracy) in the operations and practices of the Internet. In the tradition of legal pluralism and systems theory, it argues that the project has explanatory power in the digital age. Building on the Teubnerian account, the paper presents two propositions: 1) it identifies private regimes in cyberspace that produce norms performing constitutional functions (such as self-restraint) and/or involving constitutional arenas, processes, or structures; and 2) it offers a case study of content moderation on social media platforms to exemplify the emergence of these transnational constitutional norms. The paper concludes with some final thoughts on the need to bring more specificity and substance to the conceptual apparatus of ‘digital constitutionalism’.

Due process of law and fundamental rights protection in the era of “Big Techs’ justice”: the case of Facebook’s Oversight Board

In recent times, the establishment of the so-called Facebook Court has constituted a pivotal turning point in global constitutionalism. Since its creation, it has been widely debated whether this para-jurisdictional institution can ensure equally effective protection of rights. In fact, several rules of the Oversight Board Charter do not seem to adhere to essential requirements of due process. Starting from these assumptions, the paper aspires to critically examine the Board’s internal physiognomy, by comparing its basic law and its concrete application to those paramount constitutional values resulting from centuries of juridical civilization. Particular attention will be paid to the principles of legal certainty and effectiveness. Indeed, the absence of a clear and predetermined set of rules (Art. 2 par. 2 OBC) and the Board’s discretion in selecting requests (Art. 2 par. 1 OBC) appear to contribute to consolidating a model of fundamental rights protection “à la carte”.

Internet service providers as law enforcers and adjudicators. A public role of private actors?

Private actors have become increasingly involved in the law enforcement process in recent years, taking up more proactive roles and being increasingly engaged in choices between conflicting rights and freedoms. The development and spread of information and communication technology (ICT) have created a set of conditions in which the participation of these private actors appears to be a necessity. While executing these roles, they may be compelled – de jure or de facto – to make value judgments which traditionally belong to the public authorities. However, the legal framework is either lacking or does not fully address the consequences of this fundamental paradigm shift, to the detriment of the authorities, private actors and persons concerned.

The objective of this paper is to examine the key legal problems resulting from these developments. The author argues that the current legislative initiatives are insufficient and that a deeper rethinking of the role of private actors is necessary.

The short history of online content moderation in Europe – the issue of hate speech

The aim of this paper is to discuss how the boundaries of online public discourse are being delineated over time, especially with reference to online hate speech. Social media companies govern the speech published via their services pursuant to sui generis internal policies. Such policies seem to align perfectly with neither US nor European standards for what constitutes legitimate expression. Rather, internet platforms have become governors of a new online speech order existing at the nexus of social norms, legal obligations and companies’ financial interests, and their policies have undergone significant changes since the inception of social media as such. For example, Facebook recently changed its guidelines to allow calls for violence against Russian soldiers. Thus, the question arises how different forces might shape online public discourse, and how this sui generis system for determining what content forms a legitimate part of online discourse compares to that in place in the EU.