Facial recognition technology (FRT) is fast becoming the tool of choice for law enforcement agencies looking to police public space. Over the last year, in England and Wales, FRT has been used at a number of crowded events to identify suspects and prevent crime. This technology is more intrusive than ordinary CCTV surveillance because it can identify individuals in physical space and link them to other information stored on police databases. This paper argues that the legal framework in England and Wales is ill-equipped to recognise, and afford adequate protection to, the privacy rights of those subject to police FRT surveillance in public space. Drawing on philosophical literature and relevant European human rights jurisprudence, a new approach to privacy in public space is proposed, one which moves beyond the “reasonable expectation of privacy” standard to a more holistic inquiry into how privacy interests are engaged in this context.
After the Snowden revelations, Britain's Investigatory Powers Tribunal (IPT), an administrative panel dealing with complaints of unlawful interception of communications, heard a series of complaints from NGOs. The Tribunal sat in public, treating the complaints as hypothetical scenarios: so-called 'assumed facts'. The assumed facts enabled legal argument to proceed while protecting government secrecy. In two key cases the Tribunal determined that the hypothetical practices complained of were lawful, but only because the government disclosed policies and codes of conduct previously held 'below the waterline' of secrecy. The disclosures served to make previously unforeseeable practices 'foreseeable', and thus lawful. The pattern reveals an implicitly assumed capacity of administrative law to serve as a publicity device, presumed to communicate something to the public. This, paradoxically, suggests that the extremely detailed Investigatory Powers Act 2016 does not enhance transparency; it protects secrecy.
Liberal democracies are increasingly exposed to external and internal “threats”. The typical reaction is to limit freedom in pursuit of security. Liberal democracies thus risk sacrificing the very pillars that define them – democracy, individual liberties, social tolerance – in order to purportedly safeguard themselves. Why? The paper argues that common biases, faulty probability judgments, and cognitive dissonances on the part of the public as well as governmental decision-makers are a key explanation for this paradox. To advance this argument, the paper pursues three aims: first, it analyzes the biases prevalent in the discourse (such as stereotyping, selective perception, salience and availability bias, endowment effect and loss aversion, and framing); second, it explores how political leaders take advantage of common biases and heuristics; and third, it explores possible strategies of de-biasing and of improving the “risk literacy” (Gerd Gigerenzer) of decision-makers and the public.
This presentation explores the challenges of digital constitutionalism in practice through a case study examining how concepts of privacy and security have been framed and contested in Australian cyber security and telecommunications policy-making over the last decade. We seek to understand whether, and how, principles of digital constitutionalism have been incorporated at the national level. Our analysis suggests a fundamental challenge for the project of digital constitutionalism: developing and implementing principles that have practical or legally binding impact on domestic telecommunications and cyber security policy. We show that despite Australia's high-level commitments to privacy through membership of the Freedom Online Coalition, individual rights are routinely discounted against collective rights to security. We conclude by arguing that, at least in Australia, domestic conditions limit the practical application and enforcement of digital constitutionalism’s norms.