Algorithmic discrimination occurs when computer applications that are supposed to treat everyone equally turn out to be biased. De-biasing software is designed to neutralize the statistical patterns that reproduce structural inequalities and thereby give rise to algorithmic discrimination. However, it is unclear which legal principles should determine the content of de-biasing software so that it alleviates legal responsibility for algorithmic discrimination. Whether an algorithm is legally discriminatory depends on the legal and factual context. We argue that the frequent calls for purely ethical approaches to AI are misguided, because there can be no catch-all solution to the problem of what, in a specific context, constitutes equality. This paper therefore asks: what constitutes algorithmic discrimination, and what duties do public and private entities have under EU and human rights law when using de-biasing techniques for such algorithms?
Our next Annual Conference will take place from July 6-9, 2021. It will be held in a completely novel way as a fully online Conference: ICON•S Mundo.
The Call for Papers for ICON•S Mundo is now closed. Successful applicants have been notified. You can access the preliminary program via the ICON•S HUB.
All panelists had to register by June 10, 2021.