Distributed Ledger Technology (“DLT”) can facilitate international cooperation between parties reluctant to rely on data generated outside their own jurisdiction. By establishing decentralized and immutable registries, DLT can address many challenges, including monitoring cross-border supply chains, implementing ‘cap and trade’ systems for gas emissions, and promoting investment in developing countries. To fulfill its potential, international use of DLT must be standardized and coordinated. However, national regulators have mostly opted to enact geographically bounded rules, thereby neglecting the international dimension and creating legal uncertainty. We address two key aspects: (1) how to overcome existing barriers to the development of global regulation, by examining other instances where global regulations did evolve (e.g. financial reporting, technological standardization, and anti-money laundering); and (2) how to achieve regulation that is efficient in terms of content.
This contribution discusses the potential for international regulatory convergence of AI regulation, against the backdrop of the challenges raised by technological disruption and extraterritoriality.
An informal, international consensus has emerged around a few principles of AI regulation, like explainability. Building on such principles, the Draft EU Regulation on AI has the potential to set the regulatory standard for the world.
However, such influence will be stronger in some jurisdictions, like the US and the UK, than in others. Important jurisdictions such as China and Singapore, while retaining the same overarching principles, will adopt markedly different regulatory approaches, rooted in their economic, sociological, and philosophical structures.
In the years to come, the international regulatory framework of AI will be patchy. Different AI regulatory “poles” will emerge, and stakeholders will have to be aware of regulatory diversity.
Days after the invasion of Ukraine by Russian forces, Facebook and Twitter announced a total ban on paid ads from Russian state-funded media channels. After a mutual escalation, Russia’s telecom regulatory authority, Roskomnadzor (Роскомнадзор), decided to block access to both platforms within the national territory. Moreover, the Russian parliament passed a new “fake news law” criminalizing the distribution of “false information” about the conflict in Ukraine, such as calling it an “invasion”. The clash between Russian authorities and social media companies can teach us a few lessons about content moderation and platform governance both during and after wartime. In this paper, I explore these lessons from a digital constitutionalist perspective and try to answer the following questions: what does the isolation of Russia from the West mean for content moderation and platform governance in a postbellum period? Should platforms compromise their policies to dodge a full-scale blockade?
In recent years, the growing use of artificial intelligence technology has made significant contributions to society. However, AI can also be a double-edged sword of modern science and technology, given its destructive potential and inherent risks.
The European Commission recently published its proposal for an EU Artificial Intelligence Act, and UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence.
The paper examines the AI development strategies adopted by Russia and China. Using Anchoring Vignettes as a case analysis, it highlights the ethical issues raised by AI, such as human alienation, privacy violations, injustice, and the attribution of responsibility, and asks what legal instruments should be available to tackle these concerns and related challenges. It argues that a more holistic approach is necessary to unlock the full potential of AI technologies in an ethical, transparent, and safe way, both nationally and globally.