The GDPR is considered a milestone in data protection, and the link between new technologies and GDPR compliance is a recurrent topic. This paper tackles two main obstacles to the GDPR's effectiveness: its failure to keep pace with technological development and its restrictive interpretation in some countries. The analysis focuses on public administrations (PAs) that wish to use AI to categorize firms and plan inspections, but collide with the heavy burden of compliance with an obsolete Regulation. As a result, PAs put in place defensive mechanisms that are detrimental to technological development and not truly justified by data protection. In addition, the lack of dialogue among collective actors means they have no solid, cohesive role in their interactions with the Data Protection Authority (DPA). The aim is to highlight these criticalities and to propose a revision of a GDPR that is no longer fit for purpose, with a view to softening the obstacles to AI-supported regulatory enforcement while always safeguarding the principle of data protection.
Where individuals interact with online platforms, it is often assumed that transparency is the answer to the complexity of technology and business models. Constant confrontation with black boxes produces a sense of disquiet and disempowerment that is thought to be best tackled by opening them up for individuals. Concerns over giving access to your data? More information about the scope of processing. Worried about algorithms? Once again, information about their internal logic is the solution. In this paper I explore the dark side of the idea of individual-level transparency. Digital services are so embedded in daily life that constant attempts to control data flows paradoxically lead to so-called ‘digital resignation’. A focus on transparency only distracts from the fact that many problems created by power imbalances, such as disinformation, manipulation, and discrimination, would be best addressed at the collective level by public authorities equipped with a powerful accountability apparatus.
As a result of high-profile police killings of unarmed Black men (such as in Ferguson, MO in 2014 and in Minneapolis, MN in 2020, the latter sparking worldwide protests), the U.S. has urgently attempted to address public distrust of the police. One technological strategy has been to call for police to wear body-worn cameras (BWCs). The earliest U.S. study of BWCs, conducted by co-author Dr. Barak Ariel of the University of Cambridge in the Rialto, CA police department, found a significant reduction in police use of force and in complaints against the police when officers wore BWCs. Amid demands for greater transparency, more than 70 studies have since attempted, with varying results, to determine the efficacy of BWCs. We propose to present our recent multi-method study of the Miami Beach, FL police department, examining the impact of BWCs on police behavior and the use of BWC footage by prosecutors. This study has implications for democracies around the world.
Protest movements are gaining momentum across the world, with Extinction Rebellion, Black Lives Matter, and strong pro-democracy protests in Chile and Hong Kong taking centre stage. At the same time, many governments are increasing their surveillance capacities in the name of ‘protecting the public’ and ‘addressing emergencies’. In this paper, I focus on the ‘chilling effect’ that the use of facial recognition technology (FRT) in public spaces has on the right to peaceful assembly and political protest. Pointing to the absence of oversight and accountability mechanisms over government use of FRT, I demonstrate how FRT has significantly strengthened state power. I draw attention to the crucial role of tech companies in assisting governments in public space surveillance and in curtailing protests. The paper argues for hard human rights obligations to bind these companies and governments, to ensure that political movements and protests can flourish in the post-COVID-19 world.