Legal structures and risk controls for AI are increasingly developed and continue to evolve in many countries, including Japan. However, there has been little debate on when and how experts should be involved in important decisions related to the use of AI technology, whether at the administrative stage or in parliament through legislation. Moreover, policy issues concerning AI are often settled without much discussion in the legislative process. This paper discusses the governance of AI risk control in the context of utilizing expertise. Decisions about AI technologies should be made with experts before guidelines or legislation are drafted, in order to balance the risks of deploying AI against the benefits of an AI-utilizing society (Society 5.0 in Japan). To allow regulation to be verified, transparency in the decision-making process should also be maintained. This paper introduces illustrative cases from Japan.
This paper discusses the impact of digitalisation on public law – focussing on structural changes. It asks whether such changes are indeed observable and to what extent they are caused by digitalisation. This assessment aims at determining whether digitalisation creates, or contributes to creating, a new “type” of public law. The paper focuses on two indicators: (1) actors and (2) legal and institutional structures.
It addresses questions such as the following:
– on actors: which actors participate in law-making and in adjudication; which actors have the broadest influence on shaping the law; is there a difference in how certain actors use existing rules?
– on legal and institutional structures: do we see the creation of novel institutions (including courts); does the effectiveness of existing institutions change; to what extent is normative regulation formalized or informal; are there shifts in the (perception of) coherence of certain sets of legal norms?
Although the first law of robotics formulated by Isaac Asimov in 1942 is not yet a legally binding principle, there is already an in-depth debate on the legal and ethical aspects of using Autonomous Weapon Systems on the battlefield. The considerations in this area should be supplemented with a thorough analysis of the use of artificially intelligent robots in law enforcement operations, where the use of force may lead to injury or even the death of human beings. In such operations, the use of force is subject to a number of restrictions stemming from the need to protect human life: it must be absolutely necessary and proportionate to the intended purpose, and it must be preceded by all possible precautionary measures. The proposed paper explains these requirements and addresses the question of whether artificially intelligent robots are able to maintain the high legal standards of protection of individuals during law enforcement operations.
The general objective of this paper is to outline a concept of Sustainability that can respond to the risks of the new technologies typical of the Fourth Industrial Revolution, known as Emerging Technologies, without jeopardizing the opportunities they present. Its specific objectives are: first, to present two approaches to Sustainability, top-down and bottom-up, and their relationship with technology in general; next, to describe Emerging Technologies and the precautionary and promotional principles, linking them to the two approaches to Sustainability; and finally, to apply legal pragmatism to a reading of the precautionary principle in a way that can support a third approach to Sustainability. In the final considerations, Pragmatic Sustainability is presented as a contribution to the debate on the Sustainability of Emerging Technologies.