Gender-based online violence is silencing women, and it is not a “thicker skin” that they need

Illustration by Jens Bonnke

What many internet users have long suspected is becoming a proven reality with every study that is conducted and every whistleblower who comes forward: girls and women are heavily affected by online violence, and Big Tech’s business model fuels it.

Young women are especially affected – dick pics, sexualised insults, and threats (the gravity and detail of which we will not examine here) have become a normal part of their daily lives. We look at the upcoming EU online platform regulation – the Digital Services Act – to see what those affected by online violence truly need and how to get it right.

Attacks on women are more extreme and highly sexualised

HateAid is a Germany-wide consultation centre for victims of online violence. We offer litigation financing to facilitate affected persons’ access to justice – something that is generally pursued rather rarely in our field, mainly due to high costs and lengthy proceedings. In our consultation work, women make up about 62% of our clients. In litigation financing, the share is even higher: 73% of the clients who seek litigation support are women.

What hides behind these numbers is that content targeting women is more extreme and more often illegal. Beyond that, there are fundamental differences in the online violence that women and men receive. While heterosexual, white, cisgender men are mostly attacked because of their political views or other matters of public interest they engage with, attacks on women rarely contain any factual context or reference to positions they take.

Instead, women are attacked merely because of their gender and their appearance, in ways that are highly sexualised and objectifying. This is the fundamental difference between being a woman and being a man online: in the digital world, the basic achievements of feminism and equal rights movements are questioned daily.

An EU-wide problem of silencing online



As a result, we live in a world where, according to recent studies and surveys, one in three women in Germany aged 18 to 34 has already been sexually harassed online, compared to only one in ten men in the same age group.[1] According to another study, 70% of young women and girls are victims of online violence, and 55% have been sexually harassed online.[2]

According to our own findings in a recent EU-wide survey, more than 90% (!) of young female respondents between 18 and 35 years of age have witnessed online violence, and about 50% have been affected themselves. This is the internet that young girls and women grow up with: they are sexually harassed with messages and unsolicited pictures of male genitalia, insulted in the comment sections of social media platforms, objectified in rape fantasies and manipulated pictures – and, consequently, silenced. Self-silencing has become a way to protect oneself in the online environment.

This silencing effect of online violence is not limited to victims; we are not immune to what others experience online. More than half of internet users in Germany no longer dare to express their political opinions online or to participate in political debates.[3] According to our findings, this is not an exclusively German but a Europe-wide phenomenon.

Footnotes

1. Reset. Internet for Democracy / pollytix strategic research: „Hass in Sozialen Medien“ – bundesweite repräsentative Befragung von wahlberechtigten Internetnutzer:innen im Juli 2021, https://pollytix.de/veranstaltung/umfrage-zu-hass-im-netz/.
2. Plan International Deutschland e.V.: „Free to be online? Erfahrungen von Mädchen und jungen Frauen mit digitaler Gewalt“, Mädchenbericht 2020, www.plan.de.
3. Institut für Demokratie und Zivilgesellschaft (IDZ): „Hass im Netz – Eine bundesweite repräsentative Untersuchung“, 2019, https://www.idz-jena.de/forschung/hass-im-netz-eine-bundesweite-repraesentative-untersuchung-2019/.

People vs profit dilemma: Can private companies make ethical decisions on their own?



When examining possible solutions, there are several factors and actors that need to be considered: a lack of law enforcement, different societal standards in the online world, media literacy, and, most importantly, online platforms.

Online platforms are private, for-profit companies that were built to grow their wealth through advertising, not to ensure a safe and democratic experience online – this has been a focus of the recent revelations by whistleblower Frances Haugen. When faced with a choice in which one option favours public safety and the other maximises profit, the odds are not in the public’s favour. Facebook’s decision to run an algorithm that was proven to drive (negative) user engagement while simultaneously amplifying hateful and divisive content is a recent example of this profit-versus-people dilemma.[4]

This is one of the reasons why legislators across the globe are so eager to pass laws that would create transparency and set standards for tech companies, hoping to reverse, or at least contain, the damage to public safety and human rights.

The EU’s Digital Services Act (DSA), the UK’s Online Safety Bill, and other draft legislative acts are real attempts at platform regulation. Prior attempts to halt the spread of online harm on social media platforms were mostly voluntary, such as the EU’s Code of Conduct on countering illegal hate speech online and the Code of Practice on Disinformation. Although the evaluations of these voluntary, self-regulatory efforts draw a largely positive conclusion, institutions have acknowledged that the issue is much bigger and goes beyond the platforms’ ability to self-regulate or to make decisions that prioritise people over profit.

Footnotes

4. The Facebook Files, Part 4: The Outrage Algorithm, The Wall Street Journal, 2021, https://www.wsj.com/podcasts/the-journal/the-facebook-files-part-4-the-outrage-algorithm/e619fbb7-43b0-485b-877f-18a98ffa773f.

Protection for victims is inadequate in the DSA

Some countries, such as Germany, Austria, and France, have undertaken their own efforts to regulate Big Tech, but considering that the internet transcends national borders, this is only a temporary patch on a gaping wound. This, and the struggle to avoid a fragmented digital market, has motivated European legislators to act and bring two pieces of legislation to the table: the previously mentioned DSA, and the Digital Markets Act, which addresses gatekeepers. For larger context, we would like to highlight two further proposals that can and should complement the DSA in fighting gender-based online violence.

One of these approaches is the addition of hate crimes to the list of European crimes, aiming to establish a minimum standard of criminal sanctions for hate crimes that applies in all EU Member States. The other is the long-awaited proposal for a directive on gender-based violence, including online violence.

Amongst all of these proposals, the most revolutionary is the DSA, which is currently being negotiated in the European Parliament and the Council of the European Union and is being hailed as the new “constitution of online platforms” in Europe. Expectations among all actors involved are sky-high.

Sadly, one element that has fallen short in the initial proposal is the protection of victims of online violence. This is no minor oversight: as the statistics above show, women are negatively impacted at a far higher rate. The focus of the proposal lies instead on the creation of a very complex compliance and oversight regime and on protection against the wrongful removal of content, known as ‘over-blocking’. While the Commission has proposed several safeguards against over-removal, including complaint mechanisms for cases in which platforms wrongfully remove content, users would not be given the same opportunity to challenge platform decisions in cases of inaction.

How to make the DSA hate-proof?



What happens when you become a victim of defamation, threats, insults, or the dissemination of manipulated pictures, or when fake profiles are set up in your name to spread lies – and the platforms don’t act? This question, which is relevant to millions of users, is simply not addressed in the DSA so far. It is not a “thicker skin” that vulnerable users, especially women, need, but effective legislation. To effectively protect women from online violence, the following aspects must be addressed:

1. Remove illegal content

So far, the response of platforms to reports of stolen nude images, rape threats, and unlawful sexist insults is completely unreliable and arbitrary. When nothing happens, the only way for users to enforce their rights is to hire a lawyer. It must become mandatory for platforms to remove illegal content when users report it. According to our findings, around 80% of users think that platforms do not do enough to protect them from online violence and that laws are needed to control the platforms – a clear sign that users want regulation.[5]

It is necessary to add that automated content assessment is prone to error, and most of the Big Tech budgets for dealing with hateful content are spent on English-speaking markets, treating hundreds of millions of non-English-speaking users as second class.[6] This shows a clear need for sufficient resources for human content moderation.

2. Provide real options for redress and access to justice

First, internal complaint mechanisms should cover both kinds of cases: those in which platforms over-remove content and those in which they fail to act. Hopefully, in the near future we can draw conclusions from the German NetzDG, which recently added this option.

Second, it is often said that platforms are not the right institutions to decide what is illegal and what is not. This claim is correct, but there is no point in referring users to the courts for judicial review of content decisions if those courts remain inaccessible. Moreover, many users are not educated about their options for judicial redress; going to court means risking high costs (at least the equivalent of an average monthly salary in Germany), little certainty about the outcome, and waiting up to a year for a decision. For these reasons, the DSA should include provisions for easy, fast-tracked, gender-sensitive access to legal processes for all users.

This does not start or end in court. Many users already fail to initiate legal proceedings because platforms located in other countries are not reachable. The process is gruelling: documents must be translated and served abroad. It could be simplified and made user-friendly by requiring all online platforms to install a point of contact in every Member State that is accessible to users in one of the official languages of that Member State.

3. Deal with the business model

The basic business model must be challenged: an increase in usage means an increase in data collection, which in turn means more targeted advertising. Automated recommendation systems sustain user engagement by amplifying hateful and inflammatory content and drawing users towards ever more extreme material. Intentionally or not, platforms are earning money with hate. According to our findings, a vast majority of users – around 80% – want to have control over their newsfeeds.[7] Transparency regarding recommendation systems and algorithms is vital, along with giving control over their newsfeeds back to users.

Footnotes

5. HateAid: „Grenzenloser Hass im Internet – Dramatische Lage in ganz Europa“, 2021, https://hateaid.org/wp-content/uploads/2021/11/HateAid-Report-2021-DE.pdf.
6. Where Facebook Spends to Tackle Hate Speech, Politico, Digital Bridge, 2021, https://www.politico.eu/newsletter/digital-bridge/google-antitrust-finale-online-content-enforcer-transatlantic-lawmakers/.
7. HateAid: „Grenzenloser Hass im Internet – Dramatische Lage in ganz Europa“, 2021, https://hateaid.org/wp-content/uploads/2021/11/HateAid-Report-2021-DE.pdf.

Action you can take: #makeitsafe

The DSA is a once-in-a-generation opportunity that we cannot afford to waste. We have to use it to its fullest to protect users and our democracies from online violence and from the detrimental polarisation that online platforms fuel.

Our findings and our work with victims of online violence show that this demand resonates with many users.

Take action today and sign our #makeitsafe petition:

https://hateaid.org/petition/stop-violence-against-women-online-makeitsafe/
