Platform regulation and the conceptual challenges of automation

Social media platforms bring together a set of issues related to governance through algorithms. Platforms match users with news, services and applications. Automation is also an integral aspect of content moderation practices: it is used to proactively remove violent content and to flag potentially harmful material to moderators. Platforms’ policies and their enforcement mechanisms raise questions about how to regulate such platforms: how much law is needed (e.g. self-regulation, co-regulation or hard law) and what consequences different regulatory approaches would have in terms of rights, our understanding of law and the shape of technology. This paper discusses the latest developments within the EU framework. It focuses on the construction of platforms as “passive and neutral”, the way these terms are employed in the case law of the CJEU and their impact on our traditional legal framework based on the rule of law and procedural guarantees.

Thinking inside the Box: The Promise and Boundaries of Transparency in Automated Decision-Making

Growing evidence suggests that human bias cannot be erased from automated decision making, at least for now. Nor is it clear who is accountable. This is often referred to as ‘the black box problem’: we cannot be sure how inputs are transformed into outputs. Transparency is often proposed as a solution. The call for transparency features in various AI ethics codes as well as in the EU’s GDPR. Although transparency can be approached in many ways, its basic idea is simple. It promises legitimacy by making an object or behavior visible and, as such, controllable. In my presentation, I argue that transparency cannot solve the black box problem in ADM: transparency is a more complex ideal than is portrayed in mainstream narratives. Transparency is inherently performative and cannot but be so. This performativity runs counter to the promise of unmediated visibility vested in transparency. As I will show, in ADM, transparency’s peculiarities become visible in a new way.

Rule of Law 'By Design'?

Can technology promote the rule of law? The idea of achieving legal objectives through technology ‘by design’ is not new but has been vividly revived in debates on systems such as blockchain or technologies to ‘modernise’ elections, with claims of improved transparency and reduced human error and fraud. Panoptic governance mechanisms such as China’s Social Credit System promise a perfectly predictable, consistent, and equal enforcement of the law. Technology is increasingly presented as fostering rule of law values – a rule of law ‘by design’. This paper asks whether technological solutions that embed rule of law values do in fact promote the rule of law. Using case studies of public administration and blockchain; the running of elections and voting technologies; and law enforcement in the Chinese Social Credit System, I explore the extent to which technology’s promise to promote the rule of law holds up in practice and what it means for the idea of a society ruled by law.

The (false?) promise of human control over machines

Human oversight is advocated in AI policy as a solution to the problems of algorithmic decision making (ADM). Human oversight promises control over algorithms that can be translated into procedural safeguards in ADM design. Yet I argue that human oversight builds on a false assumption of the separability of human and machine action, which leads to misconceptions about its potential and limitations in producing control. A more suitable framing can be found in the hybridisation of decision making in complex socio-technical systems, where human work is mediated through ADM systems. A focus on hybridisation opens up questions about meaningful human-computer partnerships and interface design. Human oversight also reflects broader societal fears about automation. Ultimately, human control reveals a connection between human agency, the legitimacy of decision making and social expectations of fairness that enables a new critical analysis of law’s anthropocentricity.

Collective redress and digital power

How do we deal with structural injustice that operates in digital environments through practices of profiling, surveillance and control over our online identities?
Data-intensive platforms construct a space where our data twins and avatars operate. Interaction between the digital and the analogue world has created a space for a new kind of structural oppression, not yet recognized by the law. Consequently, too many injustices are left without a legal forum or the means to be contested publicly.

I claim that individualistic legal procedure has come to its end. Instead, collective redress would provide a counter-power to address structural abuses in digital environments. The latest documents in which the EU sets out its future agenda to strengthen collective redress only underline this claim. I argue that we need a new way of interpreting collective harm and a renewed way of categorizing collectives.