Traditional media outlets have always selected what information to broadcast. Today, much content moderation is performed by social media platforms, raising serious concerns about digital exclusion. In a world where Facebook and YouTube alone moderate billions of expressions, it is time to consider how to mitigate this form of digital exclusion. Social media platforms usually perform content moderation through automated systems, which can quickly delete vast amounts of content and may do so on the basis of racial or gender stereotypes. In this process, social media platforms, as private actors, are not obliged to respect fundamental rights or democratic values, or to be transparent about their decision-making processes. Apart from recent proposals in the EU framework (e.g. the Copyright Directive), users cannot access the reasons why specific content has been removed. This paper proposes a new regulatory framework based on new transparency and accountability obligations for online platforms.
In recent decades, both the public and the private sectors have become increasingly digitized. Digital literacy appears to be the key skill required to keep pace with the digitization of public services and to ensure that millions of citizens remain employable at a time of growing demand for technological knowledge. However, few countries offer training in digital literacy. Thus far, the right to education has focused on access to education and non-discrimination, whereas substantive requirements have been limited to the promotion of minimum educational standards and of training that enables all persons to participate effectively in a free society. This paper inquires whether digital literacy should be considered part of the right to education. It argues that, in the digital age, investing in the digital literacy of children and young adults will be necessary for adequate participation, as recently highlighted by the Council of Europe.
The rise of algorithms, Artificial Intelligence, Big Data analysis and the Internet of Things has the potential to affect fundamental rights in new and as yet unfamiliar ways. This paper reviews the consequences of algorithmic decision-making for fundamental rights. It does so from the perspective of European fundamental rights law, devoting its attention to four clusters of fundamental rights: (1) privacy (the right to a private life, personal autonomy, human dignity and data protection); (2) equality; (3) freedom (freedom of expression, information and religion); and (4) procedural rights. This approach enables the identification of cross-cutting fundamental rights issues and problems, such as the growing relevance of horizontal fundamental rights relations and the exclusionary effects of algorithmic ubiquity. These issues require primary attention when assessing the potentially damaging effects of the widespread use of algorithms and algorithmic technologies.
This paper analyses the Brazilian normative framework for public policies targeting digital inequality. It demonstrates that, beyond the equality principle enshrined in the Federal Constitution, the Brazilian Internet Bill of Rights (Marco Civil) also establishes a set of inclusion and internet openness commands (e.g., Open Internet) that apply to both the private and public sectors. This Bill of Rights binds present and future digital policies to favor actions against digital exclusion, which matters because the inequality that marks Brazil's social, economic, and political landscape is also reproduced in the country's digital divide. While access to the Internet has improved, digital inequality is still reflected in the widespread practice of zero-rating and in the calls for enhanced digital literacy and technological social inclusion initiatives. This paper argues that digital inclusion is a command enshrined in the Constitution and legally regulated by the Brazilian Internet Bill of Rights.