Over the past decades, security bureaucracies – ranging from police and intelligence services to the military and technology companies – have harnessed artificial intelligence to frame “solutions” to security problems. Conducting counter-terrorism, cyber-security or military operations today increasingly involves AI-driven modes of surveillance and control, e.g. machine learning, crowd behaviour analysis, pattern detection and data collection. This shift towards identifying patterns, detecting “anomalies” online, and profiling “suspicious” individuals has intensified the rationale that has long underpinned security action: anticipating and predicting the future. Who designs and commands algorithms? Software companies, programmers and developers – and their end users, such as data analysts and intelligence units. Although AI-driven tools are often portrayed as self-automated functions, the formulas, code and resulting algorithms behind AI are equally shaped by the practices of their producers and users.
This paper focuses on AI-driven automated decision-making in the surveillance operations of law enforcement and intelligence agencies, and on the scope of the national security exception in EU law within which those operations may fall. Sophisticated AI-based automated processes may be deployed in security practice to detect ‘suspicious’ behaviour. The paper therefore examines how AI fits into automated decision-making, the limitations placed on such decision-making by the GDPR and Convention 108+, and the CJEU's approach (most importantly in Opinion 1/15), in order to assess the legal constraints on automated surveillance operations under EU law.
This paper provides a human rights law analysis of the new EU Regulation on Terrorist Content Online, as a prime example of how AI specifically, and developments within the digital realm generally, trigger legislative responses that challenge the concepts and doctrines of international human rights law and result in new and evolving threats to human rights. For example, the Regulation provides for cross-border detection and removal of terrorist content; such content is itself intended to have extraterritorial reach and reflects the phenomenon of individuals’ online identities being genuinely non-territorial or multi-locational in nature. Through its multiple one-hour deadlines, and because it seeks to respond to dynamic content such as the live broadcasting of an ongoing terrorist event, the Regulation will by default lead both governments and private companies (service providers) to rely on AI in detecting and removing terrorist content, thereby redefining freedom of expression.
This paper focuses on facial recognition systems (FRSs) in the security field. FRSs capture biometric data and process it for comparison with existing data stored in databases. In security practice, FRSs are used to scan people, e.g. in crowds or during large events, and to check whether their biometric features match data of the same kind belonging to persons suspected of terrorism or other serious crimes. A comparative overview shows that many countries are using, or at least trialling, these systems.
Against this background, some concerns arising from a public law perspective are examined, namely: 1) whether FRSs respect the principle of non-discrimination; 2) to what extent the principle of transparent decision-making is fulfilled; 3) whether and how privacy rights are guaranteed.
The research argues that the abovementioned safeguards are often violated and points to the need for a thoughtful and comprehensive supranational framework regulating FRSs, which is currently lacking.