Contemporary legal systems are insufficiently equipped to cope with the issues arising from developments in AI. AI systems are becoming more autonomous in terms of the complexity of the tasks they perform, their potential influence on the world, and the diminishing ability of humans to understand, predict, or control their functioning. Given these features, the behavior of autonomous agents is not always predictable, yet predictability is critical to modern legal approaches. A system that learns from information it receives from the external world can act in ways its creators could not have foreseen. An optimal regulatory framework is therefore needed: one that, on the one hand, does not inhibit the development and deployment of innovations and, on the other, is capable of mitigating the attendant risks. This paper examines how current legal constructs can be adapted to these changing circumstances and proposes new approaches.