The rapid development of information technology makes it imperative that a constitutional and international dimension be given to the right which stipulates that no legislative, executive or judicial prerogatives may be ceded to software or AI systems, and that key state-level decisions must be made by humans, in accordance with the new maxim "habeas potestatem humanam". The right to personal inviolability also needs to be reinterpreted. Given the potential threat of interference in a person's mind without that person's consent, it is imperative that constitutions and international law proclaim the principle of "habeas mentem". If artificial intelligence is nevertheless to make decisions, it should be properly equipped with value-based criteria. Those who establish the criteria for autonomous choices made by neural networks and algorithms will wield a new kind of power: the power to shape the awareness of good and evil. Lest such power breed a new totalitarianism, it requires legitimisation, compliance with human rights, transparency, control and countervailing mechanisms.
In certain cases it is easier to agree on the outcome of a specific dilemma than on the rationale behind that outcome. This idea of 'incompletely theorized agreements' explains how societies manage to govern themselves despite deep divisions on certain points: only the most crucial parts of the social contract need to be accepted, while on other points it is enough to agree to disagree. Sometimes the outcome of deliberation is intuitive and it suffices to claim 'I know it when I see it', as US Supreme Court Justice Potter Stewart famously said of pornography. The paper discusses this idea in the context of the nature of decisions made by AI-based systems. The legitimacy of human decision-making is confronted with the coded nature of AI, where 'intelligent agents' interpret data and learn from it to achieve specific goals without human intervention. The non-intuitive nature of non-explainable machines poses a dilemma for legal theories of legitimacy.
A widely discussed topic in recent times is the development of new technologies collectively referred to as artificial intelligence (AI). The term covers various technological constructs: blockchain, algorithms, Big Data, etc. The rules of their operation are not yet fully understood, which impedes the design of a new legal architecture, including rules of legal liability for human rights violations caused by or associated with AI. One can observe a tendency to restrict human accountability, which poses serious threats to the preventive purpose of legal liability itself. In my presentation, I will discuss two elementary issues: first, the recommended regulatory framework, and in particular the adequacy of so-called soft law and self-regulation mechanisms; second, model principles of responsibility at the stages of designing, deploying, implementing and applying AI technologies.
Artificial intelligence (AI) has the potential to transform both the practice of law and academic reflection on law. However, the technology is still at a very early stage and there are significant limitations on what can be achieved. In this paper, I will briefly sketch the state of the art of applications of machine learning to predicting the outcomes of court cases, chiefly in Canada and in the United States. I will then focus on the challenges of this kind of research in UK law, such as access to the texts of court judgments. Finally, I will discuss a "proof of concept" project I developed on predicting case outcomes in a small area of UK public law.
We live in a world where governments, in making their policy decisions, increasingly rely on big data and its interpretation by artificial intelligence (AI). AI makes data combing and analysis more effective than ever before: it can draw inferences from existing data, and those inferences can in turn serve as grounds for further conclusions about individuals. For individual citizens, on the other hand, it is increasingly difficult to protect their personal data and to defend themselves against unwanted biases in the interpretation of those data. While the data protection regime of the European Union, as interpreted by the European Court of Justice, provides some defence of individual rights, it does not sufficiently protect individuals from the harmful inferences that can be drawn from existing data. The current data protection regime seems unprepared for the fourth industrial revolution, of which automated data analysis is an essential part.
The idea of the nation-state, and with it the shape of public law, is changing in various ways under the influence of new technologies such as artificial intelligence (AI). The use of AI is rapidly spreading to media used by ordinary consumers and to home environments equipped with IoT devices. At the same time, public administration and the judiciary are also beginning to utilize AI technology, and in some cases decision-making processes are expected to be replaced by AI. Such progress in AI technology may transform the traditional public law framework into a regulatory framework based on AI-driven architecture. Against the background of this progress and its utilization, this paper analyzes this period of change together with the risks that arise (or are expected) when AI fails to solve a problem, misuses data, and so on, and confirms the importance of public law and its function in forming a framework to avert such risks over the long term.