Digital personality and social credit: The new social contract or the end of constitutionalism?

The COVID-19 pandemic has forced governments to enforce social distancing and trace chains of infection and, at the same time, has spurred the application of cutting-edge digital technologies in public administration. The use of Big Data and AI-based algorithms in governments’ regulatory and supervisory activities opens the door to state-run information systems that collect and assess various types of reputation data on subjects. The creation of digital profiles increases the effectiveness of state agencies’ administrative activities and allows governments to make social interactions more predictable. The introduction of the digital personality as an institution, tightly connected to social ranking and presuming the separation of “the sheep from the goats”, however, threatens citizens’ rights and the rule of law. This paper examines the recent experience of Russia and China in the transition to governance through digital profiles and reputation scores.

A Crack in the Algorithm’s Façade

Social aid and health resources are scarce, and their fair distribution is a complex task – especially in times of a pandemic. Probabilistic algorithms seem to offer an easy way out, as they allocate resources based on data and mathematics. However, academic research reveals that probabilistic algorithms inherit “data biases”, resulting in disadvantageous effects for marginalized social groups. Built on the promise of efficiency, a façade of statistical neutrality arises behind which the State hands its responsibility over to the algorithm and behind which inequalities can quietly cement. I argue that fundamental and human rights in the EU crack this façade. Societal data biases translate into – what I coin as – the harm of generalization, which touches upon the fundamental rights to autonomy and to equal treatment, as well as into the harm of cementing societal biases, which touches upon the right to non-discrimination. To demolish the façade, the enforcement of these rights must be supported by rights-based regulation.

Fair Governance with Humans and Machines

How fair are algorithm-assisted government decisions? Using a set of vignettes in the contexts of predictive policing, school admissions and refugee relocation, we explore how different degrees of human control affect fairness perceptions and procedural preferences. We implement four treatments varying the extent of responsibility delegated to the machine and the degree of human control over the decision, ranging from full human discretion, through machine-based predictions with high or low human control, to fully machine-based decisions. We find that machine-based predictions with high human control yield the highest fairness scores and fully machine-based decisions the lowest. These differences can partly be explained by differing accuracy assessments. Fairness scores follow a similar pattern across contexts, with a negative level effect in the predictive policing context. Our results shed light on the behavioral foundations of several legal human-in-the-loop rules.

Democracy and Human Rights in an Algorithmic Society

With the rise of digital technology, algorithms actually change the man who uses this tool, and with him they change society. Gradually, the right to have rights is becoming contingent on consenting to give up one’s informational identity in favour of its usually anonymous controllers. Algorithmization may provide a bedrock for a totalitarian democracy, one that rules out freedom and may help promote inequality.
Human civilisation is about generating knowledge and morality. Algorithms do not generate knowledge and moral judgements.
The right of people to be in the power of people, not algorithms, can only be exercised if humankind retains the capacity to think free of interferences that may identify thoughts, disable the free-thinking capacity itself, or impact the contents of thoughts. The dignity of the human being includes the capacity to make decisions; a decision has its roots in Man, not in an algorithm.

Artificial Intelligence & Refugee Credibility Assessments: Exposing Flaws, Revealing Opportunities

The growth of AI in migration and border control makes its application to refugee status determination (RSD) a real possibility. However, flawed systems can pose traps for new technologies. AI may improve the consistency, efficiency and accuracy of RSD, but as long as the ‘well-founded fear’ standard remains bipartite, it is unlikely to address the issues that vex credibility assessments. AI will struggle to support the determination of subjective fear, which is already challenged by the limited human capacity to judge the credibility of other humans. If data carries the unconscious biases of the developer, the machine will learn to replicate them. AI’s limited ability to read emotions presents challenges in a context defined by vulnerability. The prospective nature of fear is counterintuitive for algorithms that learn from historical data. If a ‘well-founded fear of being persecuted’ were based on objective risk alone, AI’s place within RSD could be justified.