Mammoth internet platforms, like Facebook and Google, regulate the behavior of billions of individuals. These companies set and enforce norms through user agreements, design, and algorithmic regulation. But users do not actively and consciously participate in these regulatory processes, which nonetheless penetrate deeply into their lives. Public law is well equipped to deal with state sovereign power. Yet this body of law is largely helpless when facing platforms’ emerging political authority. This article analyzes the quasi-sovereign power of platforms, the threats it poses to users’ political freedom and to the public sphere, and the theoretical trap that makes these risks difficult to address. To escape this trap, the article proposes the creation of legal structures that would allow users to self-manage through representative unions. Integrating users’ collective voice into the corporate governance of platforms could help to respect, protect, and fulfill users’ political rights.
Facebook’s creation of an “oversight board” is the culmination of a protracted debate about Facebook’s role as a global speech regulator. The “Supreme Court of Facebook” will adjudicate disputes under Facebook’s “community guidelines” in light of human rights norms protecting free expression, thereby entangling a self-created platform standard with established (if contested) standards of legality under international human rights law. But the oversight board’s bylaws limit its jurisdiction from the outset to cases that are not legally determined by “the law” – a determination to be made by Facebook’s internal legal department. This significantly curtails the board’s ambit and power. It is also a missed opportunity for international human rights law. Facebook’s institutional design choices reveal a “flawed dualism” that treats the multiple legalities of online content as neatly separated instead of taking seriously their entanglement and potential for deliberative contestation.
The complexity of market data is the new opacity of financial markets. Modern algorithmic financial markets process and produce vast amounts of data. The transition from human, trust-based markets to algorithmic, data-driven ones has led regulators to put in place new market data sharing initiatives to collect, compile, and reconstruct market data from across the financial system. Sharing market data is caring about the level playing field between market participants. But sharing data is also scaring market participants, because different datasets, statistical models, and parameter choices can lead to diverging results and cause regulatory uncertainty. Regulating algorithmic financial markets therefore requires manufacturing consent on the interpretation of market data. This consent is manufactured through consultations, communication channels between market participants and regulators, data sharing among regulators, and private regulatory initiatives.