The world is experiencing intense dilemmas in regulating hate speech and online harassment. The paper engages with the normative dimensions of the balance between the need to control and limit incitement to violence and the fundamental right to freedom of expression. Three distinct aspects of hate speech are covered. The first relates to the role of freedom of expression as a tool of inclusiveness: legal systems are torn between criminalising the speaker’s motive alone and criminalising it in conjunction with the effects of the speech. The second aspect looks at the legal challenges of regulating freedom of expression, with emphasis on its online dimensions. The final aspect of the paper proposes an actor-based analysis of hate speech as it emerges from current regulatory frameworks. It deals with the role of the State, but also with that of equality bodies, political parties and private businesses, in providing more efficient networks of protection of minorities from violent expressions of hatred.
Europe is rapidly moving away from primary reliance on the criminal law for dealing with online ‘extreme speech’ towards the use of notice-and-takedown procedures, as under the EU’s ‘voluntary’ Hate Speech Code and the German NetzDG law. The scale of these operations dwarfs the small number of prosecutions in European states, while their lightning-quick operation (often within 24 hours) makes the traditional criminal justice system look impossibly ponderous. The new Regulation on tackling terrorist content online will move the EU to a compulsory system, backed by heavy fines. This paper argues that the EU approach indiscriminately lumps together wholly different types of ‘terrorism-related material’ that, in free speech terms, have wildly different value. But it challenges the notion that the Regulation is simply a retrograde step in civil liberties terms, contending that transparency, due process, legal speech rights and possibilities for legal challenge will all be enhanced.
This paper considers whether it is permissible for liberal democracies to ban the public expression of extremist viewpoints that encourage terrorism in order to protect the security of persons. My point of departure is three security arguments for prohibitions on the advocacy of terrorism on the Internet and in other fora of public discourse. The core idea of security arguments is that it is permissible for legislators or majorities to enact certain viewpoint-based restrictions on extremist speech, the aim of which is to protect the security of persons or prevent violations of what Alexander Brown calls people’s ‘right to a sense of personal security’. In contrast to Brown and other defenders of viewpoint-based restrictions, I will argue that free public discourse on the Internet and in other fora requires viewpoint neutrality. This means that all persons – including extremists – should have a basic right to express, hear and consider any viewpoint within public discourse.
Given that ‘democracy is, above all, a process of forming opinion’ (Hayek), tech companies’ decisions about regulating expression arguably affect citizens’ relationship to a democratic legal system. Typically, online platforms are aware of their importance to public debate. For example, Facebook’s former VP for public policy Richard Allan has stated that ‘People might disagree about the wisdom of a country’s foreign policy or the morality of certain religious teachings, and we want them to be able to debate those issues on Facebook.’ Nonetheless, these companies limit certain types of expression; Facebook, for example, bans ‘hate speech’ from its platform. Controversy persists over how tech companies draw the line between allowed and prohibited content. This presentation focuses on the challenges of contextualization and even-handedness in online content moderation, and considers these challenges in terms of democratic theory.