Giving reasons: incompletely theorized agreements and incompletely explainable machines

In certain cases it is easier to agree on the outcome of a specific dilemma than on the rationale behind that outcome. This idea of 'incompletely theorized agreements' helps explain how societies manage to govern themselves despite deep divisions: only the most fundamental parts of the social contract need to be accepted, while on other points it is enough to agree to disagree. Sometimes the outcome of deliberation is intuitive and it suffices to claim 'I know it when I see it', as US Supreme Court Justice Potter Stewart famously said of pornography. This paper discusses the idea in the context of decisions made by AI-based systems. The legitimacy of human decision-making is confronted with the coded nature of AI, in which 'intelligent agents' interpret data and learn from it to achieve specific goals without human intervention. The non-intuitive nature of non-explainable machines poses a dilemma for legal theories of legitimacy.