Is algorithmic citizenship a depersonalized citizenship?

Scholarly debate maintains that advances in data-processing technology allow us to personalize law in order to make its application more efficient. This takes place by tailoring legal rules to individual behaviors and features and by adopting micro-directives aligned with each individual. Micro-directives ensure a high degree of compliance with the law. But when personalization occurs in the context of citizenship, the implications are different. I argue that citizenship should not be personalized, for two reasons. First, because of the risk of altering the meaning of citizenship to such an extent that it no longer implies any state commitment. Second, because of the risk of depersonalizing citizenship by detaching the real individual from his/her persona, i.e. the character played in a social context. Indeed, what we reconstruct through the processing of individual data and behaviors is the individual's virtual mask.

Rated by the algorithm: suggestions from the Chinese Social Credit System

Algorithmic analysis is the technical tool that enables citizens' rating systems to function. The biggest such system, in terms of people involved and data gathered, is the Chinese Social Credit System. Though still at a preliminary stage, the SCS has been tested in several municipalities with varying success. The SCS starts from an awareness of state deficiencies (low rates of law enforcement, financial scams, corruption, tax evasion) and the need to overcome them in the context of a market economy. To do so, rating and ranking citizens into different classes allows the government and the private sector to assess their trustworthiness and compliance with socially acceptable behaviour. Connected to one's score is a specific shade of citizenship: a different extent of enjoyment of rights and freedoms and of access to services and benefits. Although the SCS enables control and fosters political conformity, this is not its primary target. Rather, it fits the traditional Chinese favour for social conformity.

Astroturfing, computational propaganda and the case for “digital disarmament”

During the Conference for the Reduction and Limitation of Armaments of 1932, a motion for 'moral disarmament' emerged, calling for States to cease 'bellicose or aggressive propaganda'. Ever since, the legal notion of propaganda has remained confined to war and hatred; the means of propaganda, however, have changed, and so have the risks connected to it. Current communicative practices based on the combined use of algorithms, automation and human curation are widely understood to destabilise democracies and foster hostile narratives at the global level. The paper seeks to reframe two notions: propaganda, on the one hand, in order to reconsider its restrictions in a way attuned to the times, akin to the development of the principle of human security and its focus on the security of citizens in their daily activities through the 1990s; and national information sovereignty, on the other, as a rationale for regulating digital means to counter the spread of malicious propaganda.

Algorithms in the news industry: obligations of states to ensure media freedom and information rights of citizens

Online media increasingly use algorithms to produce and distribute news. Alongside this, new players have entered the news industry, e.g. social media platforms. These new technologies and participants challenge the democratic role of the media as watchdog and forum for public debate. Algorithms also change the relationship between media and audiences: news media try to give people what they want, at the risk of leaving citizens less informed about public affairs. In Europe, Art 10 ECHR protects freedom of expression and information, including media freedom. Art 10 has been developed by the European Court of Human Rights, and the Council of Europe has further translated these principles into media policy guidance and obligations for States to ensure a favorable environment for free speech. This paper analyses what (positive) obligations European states have to ensure a diverse media market that delivers the news people need to fulfil their role as informed citizens in the face of algorithms.

Digital Profiling, Law and the Stakes of Personalized Governance

Digital profiling is a data-mining technology that finds patterns of behavior in large amounts of data, with the aim of forecasting future events by correlating data traces about past behavior. Profiling is used by corporations and regulatory bodies across multiple domains such as security, tax, finance, and health. What is a profile, and what does it say about the subject it purports to capture? The paper analyses the commonalities and differences of various profiling practices, exposing their specific rationales: prediction, targeting, personalization. Examining the type of governmentality that algorithmic profiling produces, the paper investigates the background rationality of profiles: subjects reduced to their traceable behavior, to the correlation and repetition of itemized data. The result is a subject at once hyper-contextualized and radically decontextualized, a hypervigilant citizen expected to constantly conform to an ever-changing norm.