A widely discussed topic in recent years is the development of new technologies collectively referred to as artificial intelligence (AI). The term covers a variety of technological constructs: blockchain, algorithms, Big Data, and others. The rules governing their operation are not yet fully understood, which impedes the design of a new legal architecture, including rules of legal liability for human rights violations caused by or associated with AI. One can also observe a tendency to restrict human accountability, which poses serious threats to the preventive purpose of legal liability itself. In my presentation, I will discuss two elementary issues: first, the recommended regulatory framework, and in particular the adequacy of so-called soft law and self-regulation mechanisms; second, model principles of responsibility at the stages of designing, deploying, implementing, and applying AI technologies.