Facebook attempted to legitimize its content moderation decisions by creating a quasi-judicial institution, the Facebook Oversight Board (FOB). In this essay, we argue that the FOB has so far failed to establish itself as a legitimate judicial institution because of an incomplete understanding of what grounds judicial legitimacy. Its decisions are rarely known or discussed in the communities they are meant to serve.
Facebook and the FOB intended to rely purely on the “professional autonomy” of law, depending on legal paraphernalia (courts, but also rule-based decision-making and reason-giving) to give coercive decisions a veneer of legitimacy. However, the FOB lacks another crucial element of judicial legitimacy: popular ownership. The decisions of the FOB are not perceived as the “better self” of a community, an emanation of its deeper values, perhaps because Facebook users, despite insistent propaganda, are not a community in any meaningful sense.
Speech regulation is a core interest of the state. People demand control over certain forms of speech, and rulers want to maximize control of unwanted speech at the lowest possible cost. Yet sometimes states refrain from controlling speech in the ways that are most cost-effective from a control standpoint. Because revenue production is far more intertwined with digital speech infrastructure than with traditional media, rulers under certain conditions regulate digital speech in ways that do not upset private investment, shifting the risk of liability for digital speech harms from private enterprises to users.
This paper argues that Singapore is such a case and shows how courts serve as one of the main venues for controlling digital speech there, through judicial proceedings that target and punish speakers with civil and criminal sanctions. Rulers facing few constraints may have all the power to control speech, yet still have a reason to commit not to exercise it in ways that upset infrastructure owners.
This paper seeks to identify approaches that could mitigate the harms arising from ‘non-traditional’ platforms and make such platforms more legally accountable, using WhatsApp as an illustration. In India, 77% of these cases were caused by rumor-mongering on social media; WhatsApp messages contributed to 28% of them.
Specifically, the paper makes a two-fold argument to push for greater responsibility from Encrypted Messaging Applications (EMAs) as ‘non-traditional’ platforms. First, EMAs like WhatsApp and Telegram are not purely private channels of messaging since they also facilitate group/public conversations and social networking. Second, the discourse surrounding ‘information fiduciaries’ should be analyzed in the context of EMAs in order to understand its applicability to ‘non-traditional’ platforms. A fiduciary character may be extended to such platforms to enable user-friendly, responsible, and accountable platform design with respect to speech harms.
The establishment of Facebook’s Oversight Board (the “Supreme Court”) was an admission that the social media giant bears responsibility for balancing freedom of speech against other legally protected values. But why stop there? Why should we trust a single company to enact and enforce the speech rules for all of the diverse, geographically dispersed communities using its platform? With a turn toward more competition in online communications, achieved partly through interoperability, we face a tremendous opportunity to democratize online moderation by allowing specific communities to work out the rules that work for them. The paper discusses the technical and legal challenges and opportunities, and offers recommendations.