The battle to evade democratic law can be seen in Google Spain. Google argued, first, that automation shields an intermediary using technology from legal responsibility, which would make true Barlow’s idea that technology would win out over the rule of law because traditional government has no sovereignty in cyberspace; and second, that if regulatory responsibility should exist, there can be only one global regime, presumably that of the US, making it hard for non-US residents to seek justice. The solution is for democracies to come together to devise a common framework. Critics object that imprecise law should not be adopted. Yet law acquired through a democratic process requires compromise. Compromise texts in a democracy fulfil their function once compromise is reached, progressing towards consensus on the rules we wish to live by. Laws are to be applied and interpreted by humans who can reason, and later interpretation by judges provides the flexibility to adopt new requirements without rewriting. Requiring precision and constant rewriting is antidemocratic; it ignores democratic deliberation, compromise, and the time needed for due process.
This paper investigates the role played by the European Union in regulating cyberspace, seeking to protect its fundamental principles in the virtual world. To do this, it compares the European Union’s regulatory framework for cyberspace with the Pegasus case, with regard to the indiscriminate use of the spyware produced by the NSO Group. It starts by discussing the regulatory advances made by the organization, aimed at guaranteeing its security and promoting values such as democracy and human rights in cyberspace. This exposition is carried out alongside an analysis of the context in which these measures were taken. The study then proceeds to a specific examination of the Pegasus scandal, explaining its circumstances and comparing it with the regulatory framework previously set out, particularly the European Union’s cyber sanctions regime. The article concludes by noting that the Pegasus case should be seen as an opportunity to improve the European Union’s framework for cyberspace.
Granting rights to non-humans has become a growing issue worldwide in recent years. The Constitution of Ecuador codifies the Rights of Nature, based on the idea of good living. Rivers in India and New Zealand became right-holders in 2017, with their legal personhood recognized. Discussions concerning the bestowal of rights on non-living entities, such as AI and robots, are also becoming a vibrant agenda, with great scholarly engagement. This contemporary trend of expanding legal subjectivity to non-humans, although tied to new ethical and technological challenges, is permeated by familiar but little-assessed promises and pitfalls already present in the granting of constitutional rights to corporations. By discussing the implications of corporations, AI, and other non-humans being considered bearers of human rights, we highlight the risk of dehumanizing human rights, especially when it threatens the fundamental rights of politically vulnerable groups.
The place of autonomous robots within the legal system is a topic that has increasingly drawn the attention of governing bodies (in both states and international organizations such as the EU or the UN); unfortunately, its treatment too often remains fantastical, being based on works of fiction. While the issue is fundamentally not new (consider, e.g., debates over animals or foreigners throughout history), autonomous robots stand out in their capacity to produce a will of their own, seemingly based on pure reason.
Here, I argue that two questions must be distinguished when it comes to legal personhood: 1) the formulation of a will, and 2) its attribution to a subject. From this perspective, whether robots are attributed a will of their own depends on a value-based choice that goes beyond the legal field and concerns society as a whole. However, robots may also play a role in formulating the will to be attributed to other subjects, e.g. incapacitated persons, animals, or things.
Days after the invasion of Ukraine by Russian forces, Facebook and Twitter announced a total ban on paid ads from Russian state-funded media channels. After a mutual escalation, Russia’s telecom regulatory authority, Roskomnadzor (Роскомнадзор), decided to block access to both platforms within the national territory. Moreover, the Russian parliament passed a new “fake news law” criminalizing the distribution of “false information” about the conflict in Ukraine, such as calling it an “invasion”. The clash between Russian authorities and social media companies can teach us a few lessons about content moderation and platform governance both during and after wartime. In this paper, I explore these lessons from a digital constitutionalist perspective and try to answer the following questions: what does the isolation of Russia from the West mean for content moderation and platform governance in a postbellum period? Should platforms compromise their policies to dodge a full-scale blockage?