The article evaluates the possibility of automating legal argumentative practice using deep learning technology. Its specific objectives are: to present an overview of legal argumentative practice and its importance for the application of the Law, through approaches linked to different legal paradigms; to discuss deep learning, its concept and characteristics, and its incursion into the application of the Law; and to problematize, from the philosophical standpoint of the Linguistic Turn and the ideas of Martin Heidegger and Hans-Georg Gadamer, the effective performance of legal argumentation by robots, weighing its requirements against the limitations inherent in deep learning. The final considerations present the insertion, into legal discourse, of data emerging from deep learning as a way of reconciling the two.
Technology has been (and still is) one of the most difficult challenges for public law, in particular for constitutional law.
In the course of history, the evolution of technology has forced humankind to forge new law to face new challenges and new problems: every great invention has required the intervention of law.
Law inevitably reacts to changes in society and technology: an evolving society must be matched by a law able to adapt to different contexts.
The influence of the technological factor on the evolution of the modern state is not a new phenomenon.
Today, however, it has reached a dimension that, not many years ago, was not even imaginable.
The stability of constitutional principles has become even more difficult since the advent of the Internet, defined as “the new horizon of contemporary constitutionalism”.
Public law is therefore called upon to make an important, perhaps epochal, intervention in response to the issues raised by the so-called digital revolution.
In 2021, the European Commission published the proposal for an EU Artificial Intelligence Act (AIA), which specifically aims to prohibit “an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour.” The paper argues that this prohibition is one of the key clauses in the AIA, given the growing capacity of AI and related technologies, such as brain–computer interfaces (BCIs), functional magnetic resonance imaging (fMRI), robotics, and big data, to interfere with a person’s thoughts and behavior and to cause wider societal harms, ranging from inciting hatred and violence to interfering with the outcomes of elections. Given the so-called “Brussels effect” and the growing global interconnectedness of technologies, the paper argues for the need to address the issue of subliminal AI systems globally, a concern that also resonates in the 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence.
The emergence of new technologies presents itself as a challenge for regulatory initiatives that aim to deal with the effects (economic or social) of the implementation of those innovations.
In this paper, a specific approach to these phenomena is proposed (Artificial Intelligence as an infinite game, regulation as a finite game), in order to identify their ontological differences, mainly regarding the way they deal with time and change, and, based on the framework adopted, to develop a typology of regulatory strategies for the regulation of Artificial Intelligence.
Starting from two explanatory variables, the epistemic dimension (risk or uncertainty) and the dimension of regulatory policy prioritization (fundamental rights or innovation), four types of hypothetical regulatory strategies are proposed (precaution, flexibility, anticipation, and rigidity) that would shape the profile of the regulation applied in each context.