In May, France passed legislation compelling social media companies to remove ‘manifestly illicit’ hate speech within 24 hours; companies failing to comply would face fines of up to 1.25 million Euros. Shortly afterwards, however, France’s Constitutional Council ruled that the law limited freedom of expression in a manner that was not necessary, appropriate and proportionate. The French law followed the legislative precedent set by Germany’s 2017 Network Enforcement Act. The result has been increased pressure on private companies to remove hate speech swiftly, even at the cost of freedom of expression. In light of the above, this paper analyses the status of speech online, examining the horizontalization of responsibility for moderating online hate speech: private, profit-making companies now adjudicate issues of freedom of expression, increasingly through the use of Artificial Intelligence.
Europe is rapidly moving from primary reliance on criminal law for dealing with online ‘extreme speech’ towards the use of notice-and-takedown procedures, as under the EU’s ‘voluntary’ Hate Speech Code and the German NetzDG law. The scale of these operations dwarfs the small number of prosecutions in European states, while their lightning-quick operation (often within 24 hours) makes the criminal justice system look impossibly ponderous. The new Regulation on tackling terrorist content moves the EU to a compulsory system, backed by heavy fines. This paper argues that the EU approach indiscriminately catches wholly different types of ‘terrorism-related material’ that in speech terms have wildly different value, and gives only rhetorical protection to free speech. But it challenges the notion that the Regulation is simply a retrograde step in civil liberties terms, contending that transparency, due process, access to legal speech rights and possibilities for legal challenge may all be enhanced.
The paper traces the influence of the standard account of liberalism in human rights law. The standard account sits comfortably with a wide range of invasive and punitive private sanctions against private actors, notably including no-platforming and indefinite exclusion from social media platforms. Influenced by this account, free expression guarantees focus squarely on protecting a private sphere of social and legal autonomy from state interference: what happens within that sphere is only of peripheral concern. This approach is deeply unsatisfactory, given the significant threats emanating from private platforms that shape the conditions under which individuals exercise free expression. Human rights law should take these platforms seriously as a source of threats, without abandoning the valuable differentiation of obligations between private actors and the state. The paper argues that private platforms have some direct obligations under freedom of expression towards private actors.