The Human Rights-Based Approach for Digital Media


Today, we live in a digital age that allows for new forms of activism, cultural exchange, and human rights protection on a global scale. These activities are not “virtual” in the sense of being unreal; they play an important role in the daily lives of citizens. Restrictions on the Internet and digital media, and online censorship, interfere with fundamental rights and freedoms, especially freedom of information, freedom of expression, and the right to privacy.

It is not simple to determine what type of regulatory response should be taken to digital media platforms like Facebook and Twitter, due to their dual function as public and private spheres. During the last few years, platforms have become an important aspect of contemporary public life: They provide support for our social infrastructure and have even been compared with public utilities. The problem is that, because they are private services, corporate interests and commercial priorities often take precedence over public interests. The ability of these companies to turn all kinds of human activity into highly valuable data and their control over widely used public resources has made them some of today’s richest companies. By approaching this challenge from a human rights perspective, we ensure that policies are consistent across borders and aligned with international norms.

What Is Digital Media?

Digital media is any type of media that distributes information using electronic devices. Electronic devices can be used to view, modify, and distribute this form of media. Among these are websites, blogs, vlogs, social media, video, augmented reality, games, virtual reality, and podcasts. Today’s world is defined by a multitude of digital media products, enabling and delivering experiences in a wide range of industries.

What Threat Does Digital Media Pose to Human Rights?

Until recently, digital media platforms were considered the best tools for advancing democracy, largely because activists used social media to organize and rally fellow citizens; the Arab Spring uprisings were even called the “Facebook Revolution.” The belief was that online platforms enable citizens to share their ideas and broadcast their everyday realities without being restricted by gatekeepers, talk freely to one another, and advocate for reform.

However, doubts have recently arisen about some of the effects these digital media platforms have on human rights. A growing number of tech-skeptics are alerting the public to the ways that these platforms violate human rights and disrupt democracy, and it does not matter where you live: digital media has been weaponized by terrorists, authoritarian governments, and foreign adversaries everywhere, from New Zealand to Myanmar to the United States. The 2016 United States presidential election showed that bad actors can effectively leverage digital media platforms to pursue their own interests through online influence campaigns. Facebook’s failure to properly monitor what third parties collect through its platform, and to prevent misuse of that information, was exposed in the wake of the Cambridge Analytica revelations.

This concern extends beyond isolated incidents to the business model that underpins many of the world’s largest technology companies. Ad revenues fueling the attention economy drive companies to find ways to keep users scrolling, viewing, clicking, posting, and leaving comments for as long as possible. Consequently, how digital media platforms are currently designed has come under fire for exploiting users’ polarization, radicalizing them, and rewarding the sharing of disinformation and extremist content.

Furthermore, Amnesty International warns that Facebook and Google’s continuous surveillance of billions of people threatens human rights. The organization’s report describes how their surveillance-based business model undermines the right to privacy and threatens a range of other rights, including equality and non-discrimination, freedom of expression, and freedom of thought.

The tech giants dominate our modern lives, wielding unimaginable power over the digital world through the acquisition and monetization of billions of people’s personal data. Kumi Naidoo, Amnesty International secretary general, says “their insidious control over our digital lives undermines our right to privacy and is a major human rights issue of our time.” At the same time, the regulatory actions being taken by governments around the world reflect a shared set of principles. According to a growing international consensus, the current design of today’s dominant digital media platforms poses an inherent threat to human rights and democracy. Legislators in a number of countries agree that the attention economy’s structural design has facilitated the spread of misinformation online. They argue that today’s powerful technologies have coarsened public discourse by feeding the appetite for political tribalism, serving up information – true or false – that corresponds to each user’s ideological preferences. They believe that the ways in which dominant digital media platforms filter and spread information online pose a serious political threat to both newer, more fragile democracies and long-established liberal democracies.

A shared view of the market dynamics that lead to concentration in the digital economy has also begun to develop. Competition enforcement agencies across a range of countries view data as an important source of market power that has given rise to a few dominant “Data-Opolies” that have amassed troves of users’ personal information. Lawmakers concerned about declining competition in the technology sector have argued that the digital economy does not require a whole new set of principles to guide competition enforcement, but that enforcement should home in on the ways in which large technology companies are using data to weaken competition and leverage their dominant position to strengthen their hold on the market. It is urgent that Big Tech undergo a radical overhaul in order to safeguard our core human values in the digital age.

Is There A Human Rights-Based Approach To Digital Media?

Digital media platforms have a significant impact on how people express themselves and how they find and encounter information. Platforms may discriminate against individuals or restrict their privacy and personal data. Yet unless human rights standards are translated into national regulations, private companies are not bound by human rights law. The roles and responsibilities of the tech giants remain largely unregulated, even though in many cases they have more impact on individual speech, public debate, discrimination, and privacy than the state does.

For example, when it comes to social media content regulation, we are dealing with both freedom of expression (ensuring legal content remains online) and enforcing the limits of freedom of expression (removing illegal content). So far, most attention has focused on the companies’ role in removing illegal content. Germany’s Network Enforcement Act (NetzDG), enacted in 2017, supplements the limited liability regime that has guided internet services in Europe for the past 20 years. If companies fail to remove unlawful content in a timely manner, the NetzDG imposes substantial penalties, and other countries around the world have proposed similar legislation. Despite Germany’s legitimate motive for such regulation (removing illegal content swiftly), it raises concerns about freedom of expression, since private companies are entrusted with a large number of speech decisions. Many freedom of expression court cases take weeks or months to resolve because the context of the case is critical to the decision; by contrast, the tech giants must decide thousands of cases within hours. In such situations there is a significant risk of over-removal (i.e., removal of legal content).

In addition, the companies do not follow the safeguards for freedom of expression that a state would be obligated to follow, such as independent judicial review, oversight, and complaint mechanisms. The state has a responsibility to protect freedom of expression when prescribing private judgment over illegal content by means of laws such as NetzDG.

Companies are under no legal obligation, however, to ensure that legal content remains online. Since they are private companies, they are free to establish and enforce their terms of service and community guidelines, including rules on speech that is protected under human rights law. The UN Special Rapporteur on Freedom of Expression has therefore recommended that companies adhere to international standards for free speech in their content moderation practices. Accordingly, their decisions about content should meet the same tests of legality, necessity, and legitimacy that bind states when they restrict freedom of expression: company rules should be clear enough that users can predict with reasonable certainty which content will be prohibited (principle of legality); any restriction must serve a legitimate purpose under human rights law (principle of legitimacy); and the restriction must be applied narrowly, without resorting to invasive measures (principle of necessity).

Why Is It Important to Take a Human Rights-Based Approach to Content Moderation?

Firstly, it offers countries whose national laws undermine human rights a framework grounded in international law. Instead of debating whether companies should be held accountable for content, we should start from the protection of individual rights and freedoms and hold both states and companies accountable. In addition to giving users a predictable and consistent basis to rely on, human rights law offers social media companies a way to accommodate their users across a variety of situations.

Secondly, human rights law provides a normative baseline against illegitimate state restrictions. In response to government demands for heavy content removals or other violations of human rights, companies need guidance from soft law, such as the UN Guiding Principles on Business and Human Rights. In addition to setting standards for due diligence, transparency, and remediation, the guiding principles specify how policies, practices, and products should be implemented. Standards of this nature have been long overdue, and holding companies accountable for their human rights impacts is crucial.

Thirdly, human rights law is grounded in a societal vision that supports a range of different and potentially conflicting viewpoints by ensuring inclusive, equitable, and diverse public participation. Moreover, it restricts content that incites violence, hate, or harassment aimed at silencing individuals, minorities, or specific groups. Content moderation is envisioned as a framework that incorporates both free speech and protections against abuse, violence, and discrimination while paying particular attention to vulnerable groups and communities at risk.

The Rule of Law in the Digital Environment

In a rule of law system, all individuals, institutions, and entities, both public and private, including the state itself, are accountable to laws that are publicly promulgated, equally enforced, independently adjudicated, and consistent with international human rights standards. In practice, this means adhering to the principles of the supremacy of law, equality before the law, accountability to the law, fairness in the application of the law, separation of powers, participation in decision-making, legal certainty, avoidance of arbitrariness, and procedural and legal transparency.

Human rights bodies around the world have also adapted the elaborate “rule of law” tests developed by the European Court of Human Rights. Under these criteria, restrictions on fundamental rights are lawful only if they are based on clear, precise, accessible, and foreseeable legal rules; serve clearly legitimate objectives; are “necessary” and “proportionate” to the relevant legitimate objective (within a certain “margin of appreciation”); and are subject to an “effective [preferably judicial] remedy” for any violations.

“Everyone,” without discrimination.

Human rights must be accorded to “everyone”, every human being. This has been fundamental to international human rights law since 1945 – that is, human rights are people’s rights, not citizens’ rights. The laws of all states relating to human rights must, with very limited exceptions, apply equally to all those who are affected or interfered with by them, with no discrimination “of any kind”, including discrimination based on nationality or residence.

A number of international human rights treaties, including the International Covenant on Civil and Political Rights (ICCPR) and the European Convention on Human Rights (ECHR), mandate that states secure the human rights contained in those treaties to “everyone subject to their jurisdiction”. Recent decisions by the European Court of Human Rights and the Human Rights Committee have emphasized the functional rather than territorial nature of this requirement. To put it another way, every state must protect or ensure these rights to everyone under its physical control or whose rights will be affected by its actions (or those of its agencies). Therefore, states must comply with their international obligations to protect human rights whenever they take actions that may affect the human rights of individuals, even when they act extraterritorially or take measures that have extraterritorial effects.

The GDPR reflects this obligation in the field of data protection: it protects every person whose personal data is processed by European controllers, irrespective of their nationality or place of residence. To safeguard the rule of law on the Internet, the competing – and conflicting – national laws that apply to digital media materials and Internet activity must be addressed urgently.

Digital Media and the Protection of Human Rights

Article 19 of the UN’s Universal Declaration of Human Rights states:

“Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media [emphasis added] and regardless of frontiers.” States still retain the ability to decide which government information should be made public or protected. It is well understood, however, that freedom of expression and freedom of speech are intertwined with freedom of the media and the press.

Access to the media is a fundamental human right. But what about the media’s responsibility for protecting human rights? Media freedom is essential to human rights because information is the key to staying informed about local, national, and international issues. Ignorance limits the public’s ability to respond to laws and policies, as well as human rights violations. The responsibility of the free media is to communicate information and make it accessible to the public in a clear and understandable manner. The media also has a responsibility to hold those in power accountable.


Digital media freedom refers to the right of various platforms to operate freely in society without interference by the government or restriction by law. Nevertheless, one of the most pressing challenges of our time is the question of how these digital media platforms can protect others against harm while respecting the freedom of expression rights of their users.

This paper has addressed the pressing issue of establishing the rule of law on the Internet and elsewhere in the digital world. It also outlined the digital environment and the threats it faces, examined the need for human rights-based approaches to digital media and for international rule of law standards, and pointed out some difficulties in applying the law in this new environment.


“Inside Egypt’s ‘Facebook Revolution,’” MIT Technology Review, April 29, 2011.

Kevin Roose, “A Mass Murder of, and for, the Internet,” New York Times, March 15, 2019.

Cecilia Kang and Sheera Frenkel, “Facebook Says Cambridge Analytica Harvested Data of Up to 87 Million Users,” New York Times, April 4, 2018.

Zeynep Tufekci, “Russian Meddling Is a Symptom, Not the Disease,” New York Times, October 3, 2018.

The Digital, Culture, Media and Sport Committee in the United Kingdom’s House of Commons recently released a report on “Disinformation and ‘fake news’” which observes that “people are able to accept and give credence to information that reinforces their views, no matter how distorted or inaccurate, while dismissing content with which they do not agree as ‘fake news.’” (“Disinformation and ‘fake news’: Final Report,” United Kingdom House of Commons’ Digital, Culture, Media and Sport Committee, February 14, 2019.)

A French report on “Information Manipulation,” for example, notes that the “largest Western democracies are not immune.” (“Information Manipulation: A Challenge for Our Democracies,” Policy Planning Staff in the Ministry for Europe and Foreign Affairs and the Institute for Strategic Research in the Ministry for the Armed Forces, August 2018.)

Big Technology Companies and Freedom of Expression.

A Human Rights-Based Approach to Social Media Platforms.

For instance, “MedRed BT Health Cloud will provide public access to aggregated population health data” extracted from the UK National Health Service’s databases (November 2013).

Guide to human rights for Internet users, contained in an Appendix to Recommendation CM/Rec(2014)6 of the Council of Europe’s Committee of Ministers of 16 April 2014, available at: https://