Algorithmic discrimination occurs when computer applications that are supposed to treat everyone equally turn out to be biased. De-biasing software is designed to neutralize the statistical patterns that reproduce structural inequalities and thereby give rise to algorithmic discrimination. However, it is unclear which legal principles should determine the content of de-biasing software so that it alleviates legal responsibility for algorithmic discrimination. Whether an algorithm is legally discriminatory depends on the legal and factual context. We argue that the frequent calls for purely ethical approaches to AI are misguided, because there can be no catch-all solution to the question of what, in a specific context, constitutes equality. This paper therefore asks: what constitutes algorithmic discrimination, and what duties do public and private entities have under EU and human rights law regarding the use of de-biasing techniques for such algorithms?