The Limits of the GDPR in the Personalisation Context and Beyond

In the EU, regulatory analysis of artificial intelligence in general, and of personalisation in particular, often starts with data protection law, above all the General Data Protection Regulation (GDPR). This is unsurprising, since training data often contains personal data and the output of these systems can likewise take the form of personal data. There are, however, limits to data protection’s ability to function as a general AI law. My intervention stresses the importance of being realistic about the GDPR’s opportunities and limitations in this respect. It examines the application of certain elements of the GDPR to data-driven personalisation and shows that, while the Regulation does apply to the processing of personal data, it would be erroneous to frame it as a general ‘AI law’ capable of addressing all normative concerns around personalisation.

Contingencies of the “Brussels Effect” in the Digital Domain

The EU has been hailed as a global data regulator. European policymakers have embraced this “Brussels Effect” as the EU embarks on an ambitious new agenda to regulate the digital economy within Europe and beyond. But the extent to which EU law has shaped the digital domain globally has been overstated and cannot be taken for granted. After fighting vigorously against its adoption, companies now often claim to embrace the EU’s General Data Protection Regulation (GDPR) and to adhere to it globally. In practice, however, the GDPR’s enforcement record is mixed at best, and companies’ assurances do not always hold up to scrutiny. The EU’s new regulatory proposals for a Data Governance Act (DGA), Digital Services Act (DSA), Digital Markets Act (DMA), Artificial Intelligence Act (AIA), and Data Act (DA) are unlikely to generate a wholesale Brussels Effect. Instead, companies will pick and choose if, when, and how to implement European data law globally.

Critical Reflections on the AI Act

At first glance, the EU's AI Act proposal looks like a dream come true. It contains obligations to keep a human in the loop, ensure data quality, and design risk-management plans; it even establishes a regulatory agency for algorithms! However, the AIA’s impact will be far from positive. A neoliberal and technocratic law at its core, the AIA masks political decisions as technical ones and effectively delegates them to private consultancies and law firms. It will generate reams of documentation that no one will ever read, without meaningfully changing how AI systems are designed and used. And given its many exceptions and loopholes, the AIA will not only fail to reach the AI systems used by big tech companies such as Facebook, Google, or Amazon, but will also pre-empt the Member States from reaching them through national legislation. The AIA is not only a bad law; it is a dangerous one. An alternative approach will be suggested.

Discussant

Elettra Bietti's reflections will draw on her genealogy of digital platform regulation to offer lessons for future European data regulation.