
Introduction
Gender-based discrimination remains the most prevalent and far-reaching form of discrimination in human societies worldwide. Although the struggle for gender equality has persisted for more than a century and a half, the goal remains distant, contrary to what many may believe.
This assessment does not overlook the achievements made over this long period; it does, however, recognize that the struggle for gender equality remains intense and that setbacks are still a recurring reality around the world. With the emergence and growth of cyberspace, a unique arena for social struggles has taken shape, one that, through its mechanisms and dynamics of interaction, reshapes the nature of these conflicts. At the forefront of these struggles is the fight for gender equality.
The ugliest face of gender-based discrimination and of the struggle to end it is the violence directed at women and girls based on their gender. This violence has accompanied gender discrimination for thousands of years, taking on countless forms that vary in the degree of harm and suffering they inflict on women and girls. Yet the common thread remains that gender-based violence is systematic, socially reproduced, and passed down through generations.
Modern societies have inherited the phenomenon of gender-based violence, which has grown more complex alongside the increasing complexity of these societies. In the digital age, gender discrimination and the violence rooted in it have extended into cyberspace, giving rise to a highly complex phenomenon: cyber gender-based violence. The extreme complexity of this phenomenon stems from the vast number of factors that contribute to its formation.
This paper focuses on one aspect of cyber gender-based violence: the impact of social media platform policies on its growth and persistence. Social media platforms represent the most dominant form of digital technology and internet use today. As such, they are now the most influential spaces for social interaction for individuals and communities worldwide.
The growing prevalence of gender-based violence on social media platforms is intrinsically linked to their management policies. This connection renders protective policies ineffective, ultimately leading to a failure in addressing gender-based violence on these platforms.
This paper examines the defining characteristics of gender-based violence on social media platforms, analyzing quantitative, qualitative, and foundational indicators. It also discusses how social media business models contribute to the rise of such violence and highlights the structural weaknesses in existing protective policies. Finally, the paper provides recommendations to enhance the safety of women and girls on these platforms.
Understanding the Manifestations of Gender-Based Violence on Social Media Platforms
Gender-based violence on social media platforms has several distinctive features. This paper focuses on three main aspects:
- The Quantitative-Statistical Dimension: This aspect highlights the persistent growth of such violence despite nearly two decades of concerted efforts to combat it.
- The Qualitative Dimension: This aspect sheds light on the significant difference in the nature of cyber gender-based violence compared to its manifestation in the real world.
- The Foundational Dimension: This aspect demonstrates that social media platforms have become primary cyber arenas for establishing new grounds for gender discrimination, and consequently, for the violence built upon it.
Escalating Rates of Gender-Based Violence
According to Statista, a leading global statistics platform, there has been a marked upward trend in online violence against women and girls. Between 2013 and 2023, the percentage of women and girls aged 11 to 21 who experienced abusive comments online rose significantly from 40% to 51%.
During the same period, the percentage of girls and women who received abusive content from people they know increased from 17% to 37%. This rise in cyber violence rates has extended to young girls aged 7 to 10, whose exposure to threatening behaviors and bullying doubled from 16% to 30% between 2016 and 2023.
A United Nations report confirmed that the Internet is a fertile ground for the development and growth of gender-based violence. The report cites studies conducted worldwide, estimating that between 16% and 58% of women and girls have been targeted by various forms of gender-based cyber violence. The report also emphasizes that the extension of violence against women and girls into the virtual world has amplified both its scale and the depth of its impact.
While this violence first emerged alongside early communication tools such as email and chat rooms, the rise of social media platforms in particular has expanded these threats.
Another UN report highlights the fact that rapid advances in technologies such as artificial intelligence (AI) are strongly driving the increase in violence against women and girls, both in cyberspace and in the physical world. The growing reliance of social media platforms on algorithms, coupled with the widespread use of smartphone applications, contributes to the dissemination and reinforcement of gender stereotypes, along with gender biases and misogyny.
According to statistics compiled by The Economist Intelligence Unit, the prevalence rate of cyber violence against women stands at 85%. This percentage represents women who have witnessed violence against other women both within and outside their immediate social circles. Meanwhile, the rate of women who reported personally experiencing cyber violence was 38%. The statistics further reveal that 65% of women reported experiencing or witnessing violence within their close social circles.
Rates of cyber violence against women varied significantly across geographic regions, with gender-based cyber violence prevalence reaching 98% in the Middle East, 91% in Latin America, and 90% in Africa, compared to 76% in North America and 74% in Europe.
What these figures reflect is that while cultural and social factors play a notable role, this role is far from being decisive in determining rates of online gender-based violence. The primary factor driving the growth of this phenomenon remains the spread and evolution of digital technologies, particularly the massive global increase in social media platform users.
The findings of numerous academic studies and reports by various human rights organizations over the past two decades consistently conclude that social media platforms have significantly contributed to the escalation of online gender-based violence rates. Masaar has previously discussed this issue in earlier papers concerning cyber violence against women and girls, as well as cyberspace sexism against women.
The current paper aims to highlight clear evidence pointing to the continued growth of violence against women and girls on social media platforms, whether in terms of its prevalence, severity, or depth of impact. This persistent growth is, in itself, a clear indicator of the failure of social media platforms’ protective policies to effectively address gender-based violence.
The Evolution of Distinct Patterns of Cyber Gender-Based Violence
Most terms used to describe forms of cyber gender-based violence—such as harassment, stalking, defamation, and privacy violations—are derived from their counterparts referring to practices occurring in the physical world. While these terms retain much of their original meaning, they often overlook the profound differences between how these actions manifest in digital environments and how they occur in the offline world.
Here, a familiar example is useful: any girl or woman might face verbal harassment, or even violations of her personal space and body, while walking down a street. This is a daily reality experienced by millions of women worldwide. In some cases, the girl or woman can at least hope that this traumatic experience ends once she leaves the street where the incident occurred.
When we compare this scenario with cyber harassment, we encounter a fundamentally different reality. Any girl or woman can simultaneously exist in dozens or even hundreds of virtual “streets” online, particularly across various social media platforms. Her digital presence is defined by every post on her personal account, every comment she leaves on others’ posts, and every mention of her in posts by individuals or groups she may or may not know.
Each of these instances represents an opportunity for verbal harassment and abuse in every possible form. The harassment may come from a single individual or dozens, even hundreds, of people. While the experience of harassment may fade over time as interest in the original post or comment wanes, it can also persist, resurface, and spread, particularly if the content is re-shared or referenced again in new posts. This resurgence can happen days, months, or even years later.
In the physical world, it is rare for a random harasser on the street to obtain someone’s home address or phone number to continue harming them in private. However, social media platforms directly link users’ public presence and their private accessibility. What makes this even more dangerous is that this connection can extend into the real world—harassers can track a person using information available on the platform, potentially locating their home, workplace, or educational institution.
From the previous example, the following points can be deduced:
- First, a term such as “harassment,” when used to describe gender-based violence on social media platforms, bears little resemblance to its real-world counterpart outside cyberspace.
- Second, this difference directly stems from the nature of the communication, interaction, and engagement tools that social media platforms provide. These tools multiply the chances of encountering other individuals by dozens, hundreds, or even thousands of times compared to what is possible in the real world. They also increase the potential for escalating violence, from fleeting verbal harassment to the possibility of physical harm.
When terms like “cyber”, “electronic”, and “digital” are added to gender-based violence terms such as “harassment” and “stalking”, it does not simply mean the actions have moved from the real world to cyberspace. What this means is the creation of new patterns of gender-based violence that differ radically in their nature, escalation possibilities, and impact on the ability of women and girls to live their lives normally. These new patterns of gender-based violence derive their existence and nature directly from the way social media platforms operate.
The Manosphere Phenomenon
The Manosphere is defined as “a collection of websites, blogs, and online forums that promote masculinism, misogyny, and anti-feminism.” Typically, adherents of this space adopt rhetoric framed as a defense of so-called “men’s rights.” They portray their ideology as an attempt to protect rights they believe are being eroded for men, in opposition to the growing feminist movement advocating for gender equality and an end to discrimination against women.
Social media platforms have served as fertile ecosystems for the growth and proliferation of the Manosphere. In reality, the current manifestation of this phenomenon is inextricably linked to the opportunities these platforms have provided.
The “Manosphere” manifests in the semi-organized efforts of groups of men who collaborate in producing and disseminating hate speech against women and attacking feminism. Some Manosphere groups focus on attacking feminist ideas broadly and on rejecting gender equality between men and women as a fundamental principle.
Other groups specialize in opposing specific feminist causes, such as efforts to combat harassment and sexual assault crimes against women. The focus of these groups varies according to local geographic and cultural contexts. For example, Manosphere groups in the Egyptian cyberspace tend to concentrate on resisting fair and equal rights for women in matters of marriage, divorce, and child custody.
A key part of the Manosphere's efforts involves collective actions that engage in various forms of cyber violence. These groups lead campaigns of defamation, cyber harassment, abuse, and threats of violence targeting feminist advocacy groups or individual women. Frequent targets include prominent feminists, female journalists, politicians, celebrities, and other public figures. These campaigns rely on the tools and promotional means provided by social media platforms to maximize their reach, spread, and impact.
The growth and spread of the Manosphere phenomenon on social media platforms can be linked to one of the main phenomena rooted in these platforms: echo chambers. In reality, Manosphere groups are echo chambers where groups of men with similar views, ideas, biases, and beliefs gather.
The importance of linking the Manosphere and echo chambers lies in highlighting a fundamental truth. Social media platforms are not merely a more effective means for Manosphere groups to express their anti-women and anti-feminist ideas and rhetoric. They are not just a space where these groups practice gender-based violence against women; rather, they are, first and foremost, an incubating environment that contributes to the creation, growth, and amplification of these groups’ influence. A previous paper by Masaar on echo chambers made it clear that the primary cause behind the emergence and expansion of this phenomenon is the policies of social media platforms and the way their content recommendation and promotion algorithms operate.
Platform Policies and the Growth of Gender-Based Violence Phenomena
This section moves from indicators of the growth of gender-based violence phenomena on social media platforms to an attempt to explain this growth through two main aspects. The first aspect is the relationship between the business model of social media platforms and the promotion of gender-based violence. The second aspect is the inherent flaws in the policies aimed at addressing gender-based violence on these platforms, which make their failure inevitable. From both aspects, it becomes clear that the structural characteristics of social media platforms make the growth of gender-based violence on them, and the failure to confront it, an unavoidable outcome.
The Relationship between Social Media Platforms’ Business Model and the Growth of Gender-Based Violence
Various studies that have addressed the business model of social media platforms have used several terms to describe this model. Some of these terms, such as “Attention Economy”, refer to the central mechanism by which this model operates. Others, such as “Commodification of Personhood”, point to the fundamental commodity promoted by this model.
There is undoubtedly an integral relationship between what these terms refer to. The business model of social media platforms is based on generating profits through the extent of users’ engagement with the content provided to them. Therefore, capturing the user’s attention for as long and as deeply as possible is a central goal that platforms strive to achieve by all means. One of the most important of these means is turning users themselves into commodities that are resold to each other. The principles linking attention capture and commodification are strikingly simple in their broad outlines:
- Principle 1: What attracted the users’ attention before is likely to attract their attention again. To maintain the continuity of their attention on the platform, it is enough to present them with more content similar to what previously caught their interest.
- Principle 2: What attracted the attention of others who share similar characteristics with the user is more likely to attract their attention as well. As a result, to maintain their attention on the platform, it is enough to provide them with more content that has caught the attention of their peers.
- Principle 3: Humans are naturally inclined to be drawn to each other. This attraction works both positively and negatively, meaning they are drawn to those who are similar to them as well as to those who are their opposites. Therefore, having users market to each other is an effective way to capture attention for longer periods and deepen this attention by adding a personal dimension to it.
- Principle 4: Any indicator of value acts as a reward that people naturally seek to obtain. As a result, metrics such as follower count and engagement with posts serve as a motivator for users to take on the task of marketing themselves to others by producing content that captures their attention.
These four principles can explain a large part of the dynamics of interaction on social media platforms. They also help explain why these platforms serve as environments conducive to the growth of all forms of social conflict. At the forefront of these conflicts, without a doubt, is the struggle for gender equality.
Unlike other conflicts, this struggle extends beyond geographical, class, and linguistic boundaries, resonating in the wealthiest, most developed countries and the most impoverished alike. This is reflected in the ease and spontaneity with which content produced in any country can spread across social media platforms and be recycled in the cyberspaces of other nations.
Recommendation algorithms play a significant role in enabling the widespread dissemination of content containing gender-based hate speech. This happens not because these algorithms specifically select such content, but because they prioritize what is likely to attract the attention of the largest number of users. Over time, the continuous operation of these algorithms generates a broad base of users who are inclined to engage with this type of content.
This occurs through the phenomenon of echo chambers, which create intellectually closed communities with specific shared interests. All of these communities are susceptible to extremism regarding any issue they care about. However, what distinguishes gender-related issues is the scale of the communities that can be formed around hostility towards women and feminism, extending across geographical and linguistic barriers.
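To make this dynamic concrete, the following minimal Python sketch, built entirely on invented data and simplified scoring weights rather than any platform’s actual algorithm, shows how a purely engagement-driven ranking rule (Principles 1 and 2) compounds small differences in reaction rates over repeated feed cycles until one topic dominates the feed, without ever inspecting what the content says.

```python
import random
from collections import defaultdict

# Toy model: every item carries only a topic label; the ranker never looks
# at the content itself, only at past engagement signals.
ITEMS = [{"id": i, "topic": random.choice(["sports", "music", "anti-feminist"])}
         for i in range(300)]

user_history = defaultdict(int)   # topic -> this user's past engagement (Principle 1)
peer_history = defaultdict(int)   # topic -> engagement by similar users (Principle 2)

def score(item):
    # Attention-only scoring: prior engagement is the sole ranking signal.
    return user_history[item["topic"]] + 0.5 * peer_history[item["topic"]]

def feed_cycle(engage_probability):
    """Rank all items, show the top of the feed, and record any engagement."""
    ranked = sorted(ITEMS, key=score, reverse=True)
    for item in ranked[:20]:
        if random.random() < engage_probability[item["topic"]]:
            user_history[item["topic"]] += 1
            peer_history[item["topic"]] += 1  # users in the same cluster react alike

# Assume hostile content draws slightly more reactions than neutral content.
reaction_rates = {"sports": 0.10, "music": 0.10, "anti-feminist": 0.15}
for _ in range(50):
    feed_cycle(reaction_rates)

print(dict(user_history))  # the small initial bias compounds into a skewed feed
```

Because nothing in the scoring function examines the content itself, a marginally higher reaction rate on hostile material is enough for the loop to keep resurfacing it to the same cluster of users, which is precisely the echo-chamber dynamic described above.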
Structural Flaws in Platforms’ Protection Policies
In July 2021, the companies responsible for four of the largest social media platforms—Facebook, Google (YouTube), TikTok, and Twitter—announced a set of commitments to combat abuse and violence against women and enhance their safety. The announced commitments by these companies included:
- Providing more detailed settings (such as specifying who can see, share, comment, or respond to posts).
- Using simpler and clearer language across user interfaces.
- Providing easier access to safety tools.
- Reducing the burden on women by actively working to minimize the amount of abuse they experience.
Regarding the facilitation of reporting gender-based cyber violence, these companies committed to the following steps:
- Enabling complainants to track and manage their reports.
- Providing greater capacity to understand context and/or language.
- Offering more guidance on platform policies when submitting reports.
- Creating additional avenues for women to access help and support during the reporting process.
Nearly four years since the launch of this initiative, there are no clear indicators of its success in reducing rates of gender-based cyber violence against women. On the contrary, these rates continue to rise. Moreover, the initiative lacks any framework for monitoring and evaluation. In particular, it does not include any commitments by these companies to establish mechanisms for publishing transparency reports on the prevalence of gender-based violence or on modifications made to their protection policies.
Rather, the commitments outlined in the initiative reveal the approach adopted by major tech companies in addressing the increasing gender-based violence on these platforms. The primary focus of this approach is on “what women should do to avoid exposure to violence and to deal with the violence they are actually subjected to”. Implicitly, the commitments suggest that women should limit their interactions with others by restricting the scope of these interactions as much as possible in order to avoid gender-based violence.
On another front, the commitments indicate that when women are subjected to gender-based violence, the burden falls on them to report the abuse and to follow up on their reports to ensure the platform responds. This mirrors the legislative approach to addressing gender-based violence in the real world—an approach that has proven ineffective in combating the phenomenon over the past decades.
The commitments place no responsibility on the companies to address the phenomenon of gender-based violence on their platforms. They do not clearly define any concrete steps or enforcement measures. More importantly, these commitments lack any obligation to seriously confront the widespread misogyny and the prevalence of gender-based hate speech.
This approach reproduces the same flawed methods used to address gender-based discrimination and violence in the physical world. It essentially reduces a systemic phenomenon to its surface-level manifestations while ignoring the contextual factors that perpetuate and fuel its continued growth.
It is important to note that this is a common issue found in much of the literature addressing gender-based violence on social media platforms and the internet. In many of these works, including reports issued by the United Nations, misogyny and gendered hate speech are counted as forms of violence against women and girls. This reflects a serious flaw, as it confuses cause and effect. The truth is that gender-based discrimination, which manifests in the form of misogyny and gendered hate speech, is the context that produces all forms of gender-based violence—it is not merely one of them.
Therefore, the protection policies against gender-based violence adopted by the technology companies responsible for social media platforms suffer from a fundamental structural flaw: they ignore the context of misogyny and gendered hate speech. As a result, the approach these policies take indirectly supports an environment of discrimination against women by placing the responsibility solely on them, first for being subjected to gender-based violence, and second for confronting and dealing with it.
In other words, it can be said that protection policies against gender-based violence on social media platforms are, in themselves, discriminatory against women. As a result, not only do these policies fail to provide adequate protection for women against the violence directed at them, but they also indirectly contribute to the growth of this violence by placing the burden of responsibility on women themselves.
Alternatives for Addressing Gender-Based Violence
The following section of the paper explores how the governance policies of social media platforms can become a positive factor in confronting gender-based violence and in providing better protection for women and girls, rather than contributing to its growth.
What the previous sections have made clear is that there is a strong correlation between the profit-driven business model of social media platforms and the persistence and growth of gender-based violence on these platforms. Realistically, it is unlikely that tech companies will voluntarily choose to alter their business models. Nor is it plausible to expect them to implement protective policies that may negatively impact their profits. However, imposing such changes on them is possible.
After all, paying taxes reduces profits, yet it can still be enforced. Similarly, there are realistically only two viable options for limiting the role platform policies play in fueling gender-based violence, and these two options are the focus of this section.
Imposing Better Protective Policies on Platforms
The approach of improving protective policies for women and girls against gender-based violence on profit-driven social media platforms depends on answering a number of key questions.
Is There Room to Improve Existing Protective Policies?
There is certainly room to improve current protective policies. However, such improvements are unlikely to yield a significant impact under the existing business model of social media platforms. Realistically, no protective policy can halt the growth of sources of gender-based violence on these platforms.
What such a policy can achieve is limiting the expression of this violence and preventing its escalation. As a first and essential step, the approach to formulating protective policies should be revised so that the burden of implementation falls more heavily on platform management, rather than on actual or potential victims. Moreover, protective policies addressing gender-based violence on social media platforms should be based on proactive intervention and initiative on the part of the platforms themselves.
Protective policies can also be more effective in addressing well-defined communities, such as pages, groups, and accounts that can be shown to promote gender-based hate speech and engage in gender-based violence. Most of these communities use identifying names containing indicative words and phrases that reveal their purpose.
Tracking such communities is technologically feasible and is, in fact, already implemented in the context of counter-terrorism policies adopted by most major social media platforms. In all cases, however, adequate safeguards must be in place to protect the right to freedom of expression when designing and implementing such policies.
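As a purely illustrative sketch of the kind of tracking described above, the snippet below matches community names against a hypothetical lexicon of indicative terms and places matches in a human-review queue rather than removing anything automatically, consistent with the freedom-of-expression safeguards just mentioned. The lexicon, community names, and matching rule are invented for the example.

```python
# Hypothetical indicator lexicon; a real one would be curated per language
# and regional context and revised regularly with expert input.
INDICATOR_TERMS = {"mgtow", "men's rights revenge", "expose feminists"}

def flag_for_review(community_name: str) -> bool:
    """Return True if the community name contains an indicative term.

    Flagging only queues the community for human review; it never triggers
    automated removal, to safeguard freedom of expression.
    """
    name = community_name.lower()
    return any(term in name for term in INDICATOR_TERMS)

communities = ["Local Hiking Club", "MGTOW Brotherhood", "Feminist Book Readers"]
review_queue = [name for name in communities if flag_for_review(name)]
print(review_queue)  # ['MGTOW Brotherhood']
```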
The use of artificial intelligence tools and models can yield satisfactory results in detecting content that contains hate speech or constitutes a form of gender-based violence. However, these models must first be effectively and efficiently developed and trained through well-regulated processes to ensure they do not infringe on the rights of any party.
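The following is a minimal, hypothetical sketch of how such a model might be prototyped with a standard text-classification pipeline, here scikit-learn’s TF-IDF features and logistic regression. A production system would require large, audited, multilingual datasets labeled by trained reviewers, ongoing bias and error analysis, and human review of every decision; the toy examples below are invented solely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set, for illustration only.
texts = [
    "women should not be allowed to work",     # gendered hate speech
    "she deserves whatever happens to her",    # threatening / abusive
    "great analysis, thanks for sharing",      # benign
    "looking forward to the conference talk",  # benign
]
labels = [1, 1, 0, 0]  # 1 = flag for human review, 0 = leave as is

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The model only flags content for review; removal decisions remain with humans.
print(model.predict(["women belong at home, not in parliament"]))
```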
On another front, it is possible to prevent the escalation of gender-based violence by monitoring indicators that reveal the potential for such escalation and addressing them before they materialize. Artificial intelligence models can also be used for this purpose by training them to analyze and detect escalation indicators and link them to temporary or permanent preventive measures. Additionally, default settings can be configured to adopt the highest protection options by default, making stronger safeguards the initial choice, one that users can later adjust according to their preferences.
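As an illustration of the “highest protection by default” idea combined with a simple escalation guard, the sketch below defines hypothetical account-safety settings whose defaults are the most restrictive options, plus a check that temporarily restores them when abuse reports spike instead of waiting for the targeted user to react. The setting names and threshold are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SafetySettings:
    # The most restrictive values are the defaults; users may relax them later.
    who_can_comment: str = "followers_only"
    who_can_mention: str = "no_one"
    who_can_message: str = "no_one"
    hide_abusive_replies: bool = True

@dataclass
class AccountState:
    settings: SafetySettings = field(default_factory=SafetySettings)
    abuse_reports_last_24h: int = 0

def apply_escalation_guard(account: AccountState, threshold: int = 10) -> None:
    """Restore the strictest settings when reports spike, as a temporary measure."""
    if account.abuse_reports_last_24h >= threshold:
        account.settings = SafetySettings()  # back to the most protective defaults
        # A real system would also notify the user and rate-limit the attackers.

# A user who had relaxed her settings is suddenly targeted by a pile-on.
account = AccountState(settings=SafetySettings(who_can_message="everyone"),
                       abuse_reports_last_24h=37)
apply_escalation_guard(account)
print(account.settings.who_can_message)  # 'no_one'
```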
Can Technology Companies Voluntarily and Seriously Improve Their Protective Policies?
The answer is no. It is unrealistic to expect technology companies to willingly sacrifice any portion of their profit-generating capacity. The mechanisms of the capitalist market prevent this. The only scenario in which a profit-oriented company might forgo some of its profits is if the alternative would be sacrificing a greater amount of profit.
The truth is that any effective protective policies against the various forms of online gender-based violence will inevitably reduce attention metrics for some of the most widely circulated types of content on social media platforms. This is a reality that tech companies are unwilling to acknowledge. Instead, they prefer to conceal it behind claims of protecting the right to freedom of expression, even though there are clear standards for distinguishing between hate speech and incitement to violence. Moreover, there are numerous procedural safeguards available to address cases where this distinction is not sufficiently clear.
Can Tech Companies Be Compelled to Implement More Effective Protective Policies?
The answer to this question can rely on established precedents. When certain countries imposed regulations to address speech supporting terrorism, often based on classifications lacking solid evidence, most social media platforms complied with these regulations. This was achieved, in many cases, without the need for detailed legislation on the matter.
It is important to stress that using this example does not imply advocating for the implementation of protective policies against gender-based violence in the same legally unbound or unsound manner. The example is cited solely to demonstrate that technology companies can comply with protective regulations, even those that may impact their profits, when there is sufficient political will to enforce them.
What is needed is the implementation of protective regulations within a constitutional and legal framework that ensures genuine safeguards for both the effectiveness of the required protections and respect for the rights to freedom of expression and privacy. It must be emphasized that this is feasible and implementable; while it may not be perfect or comprehensive, it could undoubtedly make a significant impact. Even if such an impact is limited, it would still mean a better reality for potentially hundreds of thousands or even millions of women and girls worldwide.
Encouraging a Shift to Platforms with Alternative Business Models
There are social media platforms that do not follow the profit-driven model based on the attention economy or the commodification of personal identity. The most prominent examples fall under what is known as the Fediverse. Undoubtedly, adopting a different business model has a positive impact on the prevalence of gender-based violence, but it is not sufficient to eliminate such violence entirely.
Gender-based violence, misogynistic hate speech, and hostility toward feminism are not expected to disappear entirely from social media platforms that adopt alternative business models. However, the existence of such alternative platforms undoubtedly presents an opportunity to build safer online communities for women and girls.
Such platforms may not encourage the formation of echo chambers to the same extent as mainstream ones. Instead, they can promote users’ continuous exposure to a wide range of perspectives and stances on controversial issues, including gender equality. This can significantly reduce the intensity of polarization. Moreover, this tendency can be further supported by the active presence of feminist, human rights, and other advocacy organizations on these platforms, as they are likely to find fairer opportunities to amplify their voices and those they represent.
Abandoning the profit-driven business model of mainstream platforms does not mean that alternative social media platforms must abandon tools that enhance the user experience, including content recommendation algorithms. What these platforms can realistically offer, however, is, first and foremost, transparency regarding how such algorithms function and the expected outcomes of their operation.
Second, users should have the freedom to choose which algorithms they prefer to use for receiving content recommendations that align with their interests. And finally, there must be safeguards that establish acceptable limits on what algorithms can and cannot do. Specifically, there should be restrictions that prohibit certain types of algorithms altogether, while setting clear boundaries that permitted algorithms must not cross.
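A hypothetical sketch of what user-selectable feed algorithms might look like in practice: each algorithm is a small, documented function, the user chooses one explicitly, and opaque engagement-maximizing rankers are simply absent from the registry. The function names and data shapes are illustrative assumptions, not a description of any existing Fediverse software.

```python
from typing import Callable

Post = dict  # e.g. {"author": str, "created": str (ISO timestamp), "topics": list[str]}

def chronological(posts: list[Post], user: dict) -> list[Post]:
    """Newest first; no behavioral profiling of the user at all."""
    return sorted(posts, key=lambda p: p["created"], reverse=True)

def followed_topics(posts: list[Post], user: dict) -> list[Post]:
    """Only posts matching topics the user explicitly subscribed to, newest first."""
    chosen = [p for p in posts if set(p["topics"]) & set(user["subscribed_topics"])]
    return sorted(chosen, key=lambda p: p["created"], reverse=True)

# Only transparent, documented algorithms are offered; engagement-maximizing
# rankers that profile users behaviorally are excluded from the registry by policy.
FEED_ALGORITHMS: dict[str, Callable[[list[Post], dict], list[Post]]] = {
    "chronological": chronological,
    "followed_topics": followed_topics,
}

def build_feed(posts: list[Post], user: dict) -> list[Post]:
    choice = user.get("feed_choice", "chronological")  # explicit, user-controlled
    return FEED_ALGORITHMS[choice](posts, user)
```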
It is often noticeable that alternative platforms are frequently overlooked as a potential approach to addressing the growing issue of gender-based violence on the most widely used social media platforms. This omission reflects a gap between academic research and the operational frameworks of international organizations in recognizing the responsibility of the dominant social media business model for gender-based violence.
Reports from international organizations, including UN agencies, notably omit any mention of the direct link between social media platforms’ business models and the increase in gender-based violence, a connection consistently emphasized in academic research. In contrast, civil society organizations tend to vary in how clearly they articulate this link in their reports. Undoubtedly, this situation must change before these institutions can play a meaningful role in encouraging internet users to choose social media platforms that operate under alternative business models.
Conclusion
This paper has argued that there is no meaningful hope of overcoming gender-based violence on social media platforms without abandoning the business model used by the most prevalent versions of these platforms. Given that a significant migration of users to alternative platforms with different business models is not expected in the near future, the unavoidable conclusion is that gender-based violence will continue to grow on social media platforms for the foreseeable future.
The paper aimed to shed light on its hypothesis using three distinctive features of gender-based violence on social media, which were discussed in the first section. The second section provided an analysis of these platforms’ business model and its responsibility for the rise of gender-based violence, alongside an examination of inherently flawed policies meant to combat it, which are thus destined to fail.
Finally, the paper explored two approaches to improving safety for women and girls on social media platforms. The first involves mandating more effective protective policies, and the second encourages migration to platforms with alternative business models that do not foster the growth of gender-based violence and that allow for more effective intervention.