In recent decades, security bureaucracies and related actors – ranging from police and intelligence services to the military and technology companies – have harnessed artificial intelligence to frame "solutions" to security problems. Conducting counter-terrorism, cyber-security or military operations today increasingly involves AI-driven modes of surveillance and control, such as machine learning, crowd behaviour analysis, pattern detection and large-scale data collection. This shift towards identifying patterns, detecting "anomalies" online, and profiling "suspicious" individuals has intensified the rationale that has long underpinned security action: anticipating and predicting the future. Who designs and commands these algorithms? Software companies, programmers and developers – along with their end users, such as data analysts and intelligence units. Even if AI-driven tools are portrayed as self-automated functions, the formulas, codes and resulting algorithms behind AI are nonetheless shaped by the practices of their producers and users.