Combating Hate Speech on the Internet

Introduction

As of October 2022, around 5.07 billion people were connected to the Internet, representing 63.5% of the 7.99 billion people inhabiting our planet. Almost 60% of the world's population (4.74 billion people) use social media. For most of these people, social media platforms have become their main source of news and information, as well as of thoughts and views.

Besides being the main source, driver, and arena of an unprecedented rate of production, circulation, and exposure to information, social media is the first means of communication that allows everyone to express their opinions free of the limits of status and expertise usually imposed by other media.

This, of course, has a great positive impact on the exercise of the right to free expression, as it allows many more people to express their thoughts in public. However, as much more speech of all kinds is disseminated in public, much more of it is harmful speech; and as many more people are exposed to much more varied speech through social media, many more people are consequently exposed to harmful speech. Hate speech is arguably the most prevalent type of such harmful speech.

Many incidents throughout the last few years have made it clear that hate speech in general, and hate speech on the Internet in particular, can threaten the well-being, safety, and even the lives of people, especially those belonging to racial, ethnic, religious, or sexual minorities. The harm that can be confidently attributed to the spread of hate speech can escalate to the incitement of genocidal actions. States party to the International Covenant on Civil and Political Rights are obligated to criminalize by law hate speech that can be proved to amount to incitement of discrimination or violence against vulnerable groups or members of such groups. When such speech is disseminated through social media platforms, however, many issues arise concerning how to detect and deal with it.

Besides the fact that cyberspace has many unique characteristics that make it more difficult to identify hate speech on the Internet, the cross-border nature of the Internet raises many issues concerning who has a justified right, versus who has the ability, to deal with online hate speech. The paradox of social media platforms being privately owned and managed by companies, while being the main venue for most people to exercise their right to free expression, creates additional issues. The most obvious is that companies are not bound by international human rights law, and thus are not obliged to observe its rules, which strive to balance protecting people from the consequences of hate speech against protecting their right to freedom of expression.

The threats posed by the phenomenal spread of hate speech on the Internet have prompted both states and companies to take action. Such efforts, however, are inconsistent on a global scale, and they are not guaranteed to respect people's rights, especially freedom of expression and privacy. Tackling hate speech on the Internet needs an approach different from states' legislation or companies' platform-specific content moderation rules. To be effective, such an approach must be globally consistent, and it must balance people's right to security against their rights to freedom of expression and privacy.

This paper seeks to provide enough information for forming a clear picture of the issues concerning hate speech on the Internet and combating it. It also seeks to offer pointers to an alternative approach to dealing with hate speech on the Internet in a way compatible with the balanced observance of human rights.

How Hate Speech on the Internet Is Different

Many characteristics of online content make the substance, potential spread, and impact of speech published on social media quite different from what they would be if the same speech were published through traditional media.

A user of social media can have a level of anonymity hardly possible in the case of traditional media. Anonymity in many cases means immunity from consequences, so it emboldens users to make statements online that they would hesitate to make anywhere else, especially concerning sensitive and/or controversial issues.

The more controversial content is, the more likely it is to capture the attention of social media users, and the more likely they are to reshare it to harness the attention of others. This does not necessarily mean that people resharing content intend to advocate its message, or even that they approve of it; sharing content to express disapproval of its message still helps spread it.

Content that is more likely to capture users' attention is further boosted by online platforms' recommender algorithms. Such algorithms also surface content to users based on the content they previously engaged with the most, which makes it more likely that these users will engage with the recommended content, usually by resharing, referencing, and linking to it. These algorithms do not differentiate positive from negative engagement. They also tend to recommend content based on personal data, including race, ethnicity, nationality, religion, and so on, so they may recommend content precisely to the users who are targeted by its insulting, demeaning, or threatening message.
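To make the point about engagement-driven amplification concrete, the following is a minimal, hypothetical Python sketch of an engagement-based ranker. It is not any platform's actual algorithm; the field names and weights are invented for illustration. The key property it demonstrates is that outraged reactions and disapproving reshares raise a post's score just as approving ones do.

```python
# Illustrative sketch only: a toy engagement-based ranker, not any platform's
# actual recommender. All names and weights here are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    shares: int
    comments: int
    angry_reactions: int
    likes: int


def engagement_score(post: Post) -> float:
    # Every interaction counts as engagement, regardless of whether it
    # expresses approval or outrage: a share posted "to condemn" a hateful
    # post raises its score just as much as an approving share does.
    return 2.0 * post.shares + 1.5 * post.comments + post.angry_reactions + post.likes


def rank_feed(posts: list[Post]) -> list[Post]:
    # The most "engaging" content, often the most controversial, is ranked
    # first and is therefore shown to more users.
    return sorted(posts, key=engagement_score, reverse=True)


if __name__ == "__main__":
    feed = [
        Post("calm_news", shares=3, comments=5, angry_reactions=0, likes=40),
        Post("inflammatory", shares=120, comments=300, angry_reactions=500, likes=10),
    ]
    for post in rank_feed(feed):
        print(post.post_id, engagement_score(post))
```

In this toy feed, the inflammatory post ranks first purely because it provoked more interaction, even though most of that interaction was negative.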

Trendy online content tends to live longer and to resurface more often, which means that a piece of hate speech on the Internet will continue to exist, taking different shapes and migrating to different platforms, each time causing whatever harm it is capable of producing. It will be recirculated, reposted, and reshared on every relevant occasion. It will also keep accumulating alongside similar speech, with each instance validating and reinforcing the others.

In countries where other venues for expression are severely restricted, or even entirely nonexistent, online speech has even more impact and becomes more easily manipulated by organized groups intent on expressing hatred against minority groups.

In the report issued by the independent international fact-finding mission on Myanmar in August 2019, the mission describes how the combination of restrictions imposed on free expression and the high rate of hate speech on the Internet had catastrophic consequences, leading to more violence against the Rohingya Muslim minority in Myanmar. In the same report, the mission held Facebook responsible for failing to tackle the spread of hate speech against the Rohingya people through its platform.

In conclusion, hate speech has a better chance of being published on the Internet through social media platforms than through other media, because people feel more confident expressing controversial thoughts, in a more aggressive tone, online. Hate speech also has a better chance of spreading faster and farther on the Internet, due to the different factors governing the propagation of content online, especially through social media platforms.

Identifying Hate Speech on the Internet

The first step required for combating hate speech on the Internet is being able to identify it. It is not always easy to identify hate speech in the real world, and it is even more difficult when the speech is disseminated through social media platforms, due to the unique modes of interaction available online that have no counterparts in the real world. For instance, a mocking laugh in the real world lasts a few seconds and can be witnessed by a very limited number of people; a mocking "Haha" reaction on a Facebook post, however, lasts as long as the post exists and can be seen by an unlimited number of people.

The dynamics of online interactions, especially on social media platforms, make it tricky, most of the time, to determine whether an instance of online expression constitutes hate speech and, if so, whether it poses a real threat.

Users of social media platforms have developed forms of expression specific to digital interactions online. Some of these forms depend on the features provided by the platforms, like mixing text, photos, animation, and video, or the use of reaction emojis, contextual replies to others' speech, and memes. Other forms of expression adapt to the limits of specific platforms, like using abbreviations and omitting parts of text implied by context to keep posts brief and within a platform's maximum character limit.

Such forms of expression lead in many cases to subtlety and contextuality. Subtlety means that some speech can, to all appearances, seem quite harmless while being harmful to its target; some very harmful expressions may be as subtle as a mere "Haha" reaction to a post. While subtle hate speech may be understood by its targets more than by others, contextual hate speech is understood by most people who share a common context. Such a context may be limited in scope, or so widely shared that millions will grasp the meaning of the speech through it.

The contextuality of online speech in general is compounded by cultural contextuality. Speech constituting incitement that threatens people's lives and safety somewhere in the world may be of no consequence somewhere else. Things become even more complicated as both contexts change all the time, sometimes over quite short spans of time. Some contextual factors may even be temporary, so an investigation of alleged hate speech after the fact might not be able to reconstruct the context in which the speech was made, as that context has already been lost.
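The difficulty that subtlety and contextuality create for automated detection can be illustrated with a deliberately naive sketch. The following Python snippet, with an invented blocklist and invented example posts, shows how surface-level matching catches only explicit slurs and passes over insinuation, or a mocking reaction whose meaning exists only in context.

```python
# Toy illustration of why surface-level filtering misses subtle or contextual
# speech. The blocklist and example posts are hypothetical, not a real rule set.

BLOCKLIST = {"slur1", "slur2"}  # stand-ins for explicit slurs


def naive_filter(text: str) -> bool:
    """Flag a post only if it contains an explicit blocklisted term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & BLOCKLIST)


posts = [
    "They are slur1 and should leave.",         # explicit: caught
    "You know exactly what these people are.",  # contextual insinuation: missed
    "Haha",                                     # mocking reaction legible only to its target: missed
]

for p in posts:
    label = "FLAGGED" if naive_filter(p) else "passed"
    print(f"{label:8}| {p}")
```

Only the first post is flagged; the other two carry their harm entirely through context that no keyword list can capture.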

Who Is Responsible for Combating Hate Speech on the Internet

Social media platforms like Facebook and Twitter are cross-border spaces by virtue of being accessible through the Internet. This means that anybody, anywhere in the world, who can connect to the Internet can use them to express their thoughts and ideas in text, images, video, audio, or any mix of these formats. Similarly, anybody, anywhere in the world, who can connect to the Internet is exposed to speech disseminated through these platforms, regardless of where it originated.

Social media platforms are also private spaces, as private sector companies own them. This means that governments cannot intervene directly in their operations; governments are limited to regulating the operations of social media platforms through legislation targeting the companies that own them. Targeting the companies that provide social media platform services is complicated by the fact that they are legally obliged to answer only to the laws of the countries where they are established or located, while the speech disseminated through their platforms can reach people connected to the Internet anywhere in the world.

Speech that is punishable by law in one country may be made public within that country through a social media platform accessible to its citizens, yet both the user who published the speech and the company that owns the platform may fall outside the jurisdiction of that country's laws. This means that the usual legal procedures cannot apply in such a case.

If we put the applicability of laws aside, there is still the question of moral responsibility and legal liability. The person or entity that published the illegal speech is clearly both morally responsible and legally liable for it. It is not as clear, however, in moral or legal terms, whether the company that owns the platform through which the speech was disseminated is responsible or liable for it. Two important legal traditions take different approaches to this question. The American legal tradition grants tech companies full immunity from punishment for speech disseminated through their services under the famous Section 230. The European legal tradition, by contrast, holds companies liable for speech disseminated through their services if they cannot prove they were unaware of it.

Morally speaking, holding companies responsible for speech published by the users of their services may be conditioned on their knowledge of such speech and their ability to identify it as harmful hate speech. Applying the first condition comprehensively, however, would require companies to monitor every single piece of content posted to their platforms. It is practically impossible to implement such a requirement, especially for a platform with billions of users like Facebook. More importantly, such comprehensive monitoring would be a form of censorship, violating the platform's users' right to free expression.

Even when a company comes to know of alleged hate speech disseminated through its online platform, investigating the speech to determine whether it actually constitutes hate speech is not an easy task, and there is no guarantee that the company is capable of doing so successfully, fairly, and impartially. Capability, moreover, is not the only concern. There is also the legality of companies performing such investigations in the first place, performing them under rules they have set themselves, and applying sanctions they have also set. It would seem natural for a service provider to set rules and apply them within the contractual agreement it has with the service's recipients. However, social media platform services have grown into something much more than an ordinary service whose rules are set by a contract binding its parties.

Given the great importance of online social services to people's lives, including their impact on the enjoyment of basic rights and freedoms and the protection of people's interests, it is entirely valid to treat them as public goods as far as people's rights are concerned. However, under the current international relations system, only independent states have the exclusive right to manage public goods and to regulate companies' operations to protect their citizens' rights and interests. There is no globally consistent system that can set rules and monitor their enforcement across states' borders without going through the authorities of those states. Without such a system, the private sector is the only actor capable of setting and enforcing rules for online content, and only independent states can intervene in this process by regulating the operations of these companies.

Holding Companies Responsible

Private sector companies are sensitive only to factors that affect their profits. As long as the spread of hate speech through a company's service does not harm its profits, there is no guarantee the company will be willing to provide the resources needed to identify and deal with it.

Many companies already set rules for speech published through their services and implement moderation systems to identify speech that violates these rules, along with sanctions against the users who publish it; these sanctions are limited to the removal of the offending content and the temporary or permanent suspension of the user's account. The companies' motive for doing so is their reputation, which affects the satisfaction of their current users and the likelihood of attracting new ones.
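As a rough illustration of how narrow that sanction space is, the following hypothetical Python sketch maps rule violations to the only outcomes described above: content removal and temporary or permanent account suspension. The violation categories, strike thresholds, and policy logic are invented for illustration and do not correspond to any real platform's published rules.

```python
# Hypothetical sketch of a platform's narrow sanction space: moderation
# typically ends at content removal or account suspension. All categories
# and thresholds below are invented for illustration.

from enum import Enum, auto


class Violation(Enum):
    NONE = auto()
    HATE_SPEECH = auto()
    REPEATED_HATE_SPEECH = auto()


class Sanction(Enum):
    NO_ACTION = auto()
    REMOVE_CONTENT = auto()
    SUSPEND_ACCOUNT_TEMPORARILY = auto()
    SUSPEND_ACCOUNT_PERMANENTLY = auto()


def apply_policy(violation: Violation, prior_strikes: int) -> Sanction:
    # Invented decision logic: escalate from removal to suspension as a
    # user accumulates strikes. Real platform policies differ and are
    # usually not public in this level of detail.
    if violation is Violation.NONE:
        return Sanction.NO_ACTION
    if violation is Violation.REPEATED_HATE_SPEECH or prior_strikes >= 3:
        return Sanction.SUSPEND_ACCOUNT_PERMANENTLY
    if prior_strikes >= 1:
        return Sanction.SUSPEND_ACCOUNT_TEMPORARILY
    return Sanction.REMOVE_CONTENT


print(apply_policy(Violation.HATE_SPEECH, prior_strikes=0).name)  # REMOVE_CONTENT
print(apply_policy(Violation.HATE_SPEECH, prior_strikes=2).name)  # SUSPEND_ACCOUNT_TEMPORARILY
```

Whatever the thresholds, the outcomes never extend beyond the platform itself, which is part of why reputation, rather than legal obligation, has been the main driver of these systems.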

Companies also respond to requests from governments for the removal of certain content based on its illegality under the laws of the respective countries. Additionally, many companies now publish transparency reports listing the number of such requests they received and the actions, if any, they have taken in response.

It is clear, however, that whatever tech companies have been doing to moderate content on their social media platforms has not been enough. The last few years have witnessed several cases in which Big Tech companies failed to deal with hate speech that had potentially catastrophic consequences, among them Facebook's failure to detect hate speech against the Rohingya in Myanmar, and the failure of YouTube, Facebook, and Twitter to detect and remove video of the Christchurch terrorist attack in New Zealand.

The failure of Big Tech companies to moderate content effectively enough to detect highly harmful hate speech is not really a surprise, given that their moderation systems are flawed in many respects. These flaws include the imprecision of the rules used, the inconsistency of their enforcement, and the lack of transparency about how these systems work.

For the most part, whether a company implements a system for identifying and dealing with hate speech has been voluntary. This, however, has begun to change in the last few years, as several states around the world have already issued, or are considering issuing, laws regulating companies' moderation of content.

The Role of the State

Traditionally, nation-states around the world have had exclusive responsibility for protecting their citizens against whatever threatens their safety or interests. Nation-states also have the exclusive right to protect their national security. As hate speech on the Internet can threaten both the safety of citizens and the national security of nation-states, there has been a general trend among governments around the world to try different approaches to dealing with online harmful speech.

The first option a state has for dealing with online harmful speech is applying its existing laws, or newly legislated ones, to speech made public through social media platforms by people over whom it has jurisdiction, i.e., its citizens and foreign residents within its borders.

There are many issues with this approach, however.

First, a comprehensive application of domestic laws combating hate speech would require nothing short of mass surveillance of people’s online speech. This would constitute a grave violation of both the rights to privacy and free expression, besides being prohibitively costly.

However, a law that is not applied comprehensively will only be applied to cases that are discovered by accident or that have stirred much attention. Either way, the application of the law is rendered unfair and ineffective. Additionally, such laws can be used selectively for purposes other than those they are meant for.

Second, traditional laws and procedures have proven to deal poorly with online speech, given its unique characteristics. In many cases, traditional laws may fail to apply to instances of starkly harmful hate speech, while punishing people for publishing content that does not constitute hate speech or that cannot be proved to meet the conditions required for imposing restrictions on the right to free expression.

Most importantly, due to the cross-border nature of the Internet and, by extension, of social media platforms, the source of hate speech may very well be out of reach of the state's laws and enforcement authorities.

The second approach states can use is to create legal instruments for regulating the operations of companies’ moderation systems, sometimes even if these companies do not fall under their jurisdiction. Specifically, such legislation may hold a company liable for failing to take action against hate speech disseminated through its online platform.

This last approach is the one many countries have chosen in the last few years. The most comprehensive and detailed legislation dealing with online content so far is the European Union's Digital Services Act, which entered into force on November 16, 2022, and is scheduled to be fully applicable in February 2024. It is also the most influential, due to the sheer size and wealth of the European market, which tech companies cannot afford to lose.

Such laws, however, raise legality issues. When such a law is implemented, a service provider is obliged to comply with a law that is not legally binding on it, as it belongs to a state other than the one where the provider is established or located. Companies may refuse to comply with such laws if they can afford to lose business in the country concerned. The maximum sanction a state can apply if a company refuses to comply with one of its domestic laws is to block the company's services within its borders. Even such an extreme measure, however, can be circumvented with technological tools such as VPNs.

Besides companies, users living outside the country that issued the law will most likely be affected by the service provider's implementation of it. Companies may choose to limit the measures taken in compliance with the law to a specific geographic domain; for instance, they may block some content, or even some accounts, only within the borders of the country concerned. Doing so, however, might prove too costly. In such cases, users may have their content removed and their accounts suspended in compliance with a law that cannot legally be applied to them. Any measures of appeal or redress set by the law will not be available to users outside its authority. Users abroad will thus be liable to sanctions, like the citizens and residents of the country in question, while being deprived of the tools available to those citizens or residents for appealing the sanctions or receiving redress if the sanctions are proved to have been wrongly applied to them.
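The geographically limited blocking mentioned above can be pictured with a small, hypothetical Python sketch. The country codes, post identifiers, and blocking table are invented; the point is only that the same post can be hidden from viewers in the jurisdiction that ordered its removal while remaining visible everywhere else.

```python
# Illustrative sketch of country-scoped blocking: the same post is hidden for
# viewers in one jurisdiction and visible elsewhere. The blocking table,
# country codes, and post IDs are hypothetical.

geo_blocks: dict[str, set[str]] = {
    "post_123": {"DE"},  # blocked only where a (hypothetical) national order applies
}


def is_visible(post_id: str, viewer_country: str) -> bool:
    """Return True if the post is shown to a viewer in the given country."""
    return viewer_country not in geo_blocks.get(post_id, set())


print(is_visible("post_123", "DE"))  # False: hidden in the ordering country
print(is_visible("post_123", "BR"))  # True: still reachable everywhere else
```

The alternative, applying the removal globally, spares the company this bookkeeping but extends a national law's effects to users it cannot legally bind.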

Regardless of their legality when applied to companies or users not legally bound to comply with them, these laws cannot be guaranteed to be effective enough, as their actual application will mostly be outsourced to tech companies. It is true that under such laws the concerned state's authorities, as well as third-party entities if allowed or commissioned by the law, may actively monitor the consistency of companies' enforcement of the law. They are, however, limited to content made available within the borders of the country concerned. The companies' compliance with the law outside those borders is not an issue for that state, and it cannot legally sanction companies on that account. Additionally, even if companies applied the rules of the law globally, hate speech that may cause real harm in other countries could still go unaddressed if it did not qualify as hate speech under those rules. So, in any case, laws legislated by states or by groups of states like the European Union cannot address the problem of hate speech on the Internet globally, even if their extraterritorial departure from the norms of international law were tolerated.

Conclusion: Is It Possible to Find an Alternative?

When this paper discussed the absence of an international system that can act without going through states' authorities, it left out a historical exception that lacks only establishment through international agreement. This exception already exists within the framework of Internet Governance institutions, represented by bodies such as the Internet Corporation for Assigned Names and Numbers (ICANN) and other institutions that set the rules for the operation of the technical infrastructure of the Internet and the web. These institutions did not originate in international agreements, and the acceptance of their authority is simply a submission to a matter of fact, as they practically pre-existed the Internet's global expansion.

Many governments express the wish to replace these institutions with inter-governmental organizations like the International Telecommunication Union (ITU), which would be subject to the desires of governments and enjoy only nominal independence from their collective will. The case is no different for the multi-stakeholder principle currently prevailing in the Internet Governance field. Many governments do not hide their resentment that this principle allows Big Tech companies great influence over deciding the Internet's future, which is true to a great extent, though it does not justify replacing it with a principle that gives governments such influence instead of companies. In both cases, Internet users, the party most affected by how the Internet works, are not satisfactorily represented and are thus liable to have their interests ignored.

In any case, the model of Internet Governance institutions independent of states, and even of tech companies, proves that a system can be built to set, monitor, and enforce global rules. Such institutions may have a better chance of getting companies to comply with their rules than most states have of getting those companies to comply with their laws. A large company may be able to afford losing business in a small or low-income country, but if an international institution like ICANN applied a sanction such as blocking its service, that would be equivalent to going out of the market entirely.

While an independent international system that can regulate online content moderation is theoretically possible, it is still far from realistic, as there is no doubt that most if not all states would fiercely fight the establishment of such a system, including Western countries that support the multi-stakeholder principle in Internet Governance. When their national security was at stake, these states resorted to legislating laws that they imposed on companies, leveraging the importance of their markets; they never even considered an alternative similar to the Internet Governance institutions.

The party that stands to benefit most from establishing such a system is undoubtedly the ordinary users of the Internet and social media platforms. But as with all issues related to Internet Governance and the Internet's future, the possibility of relying on the will of these users to deal with these issues, or to decide that future, hinges on the existence of cross-border entities that fully represent the interests of a critical mass of users. Until then, the burden of defending the interests of Internet users will continue to be shouldered by civil society NGOs concerned with digital rights and the related rights and freedoms.