Novel research with real-world impact

How HateLab's Dashboard facilitates near-real-time detection of and response to online hate speech


Online hate has increased tremendously in recent years. HateLab's Dashboard combines machine learning with human intelligence to monitor hate speech at scale. Prof. Matthew Williams, founder and director of HateLab, explains how monitoring this social media data has helped to deter hate speech online as well as crimes offline.

Across much of the Western world, the number of police-recorded hate crimes has been increasing in recent years, and the rise in online hate requires law enforcement, governments and civil society organisations to address the problem both offline and online. This rising tide of hate on both fronts resonates with me. Back in the late 1990s, I was a victim of homophobic hate crime on the streets and hate speech in an online chat room. These terrible experiences changed the direction of my career: instead of becoming a journalist as planned, I became a criminologist so I could discover the roots of hatred and why I was targeted by my attackers.

Over my 20-year career, I have studied hate in all its forms, but most recently my attention has turned to how new communications technologies have transformed hate in the modern world. In 2016 I founded HateLab, an academic hub for data and insight into online hate. HateLab's core mission is to develop and democratise technology that enables the monitoring and countering of online hate speech and divisive disinformation. The lab's cutting-edge Dashboard has generated vital evidence to inform policy and operational decisions on the prevalence, impact and prevention of these online harms.

Drawing on pioneering academic research and novel technology-for-data exchange partnerships with anti-hate organisations, HateLab has developed an unmatched machine-learning-plus-human-intelligence approach to monitoring hate speech. Our algorithms, which are routinely updated with insights provided by hate experts, are able to cope with the unprecedented scale and speed of social media data that have hitherto been a barrier to monitoring efforts outside big tech. With our proprietary technology, we are able to monitor hate speech and divisive disinformation across a range of social media platforms, including Twitter, Reddit, 4chan and Telegram.
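HateLab's system itself is proprietary, but the general pattern it describes, a statistical classifier whose uncertain predictions are routed to human experts whose labels then improve the model, can be illustrated in a few lines. The sketch below is purely hypothetical: the model choice, review threshold and toy data are assumptions for illustration, not HateLab's implementation.

```python
# Illustrative sketch only: a generic machine-learning-plus-human-review
# loop for hate speech monitoring. This is NOT HateLab's proprietary
# system; the model, threshold and toy data are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled seed data (1 = hateful, 0 = not hateful).
seed_texts = ["example hateful post", "ordinary friendly post",
              "another abusive message", "harmless chat about football"]
seed_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(seed_texts, seed_labels)

REVIEW_BAND = 0.35  # hypothetical: how close to 0.5 counts as "uncertain"

def triage(post: str):
    """Auto-label confident cases; queue uncertain ones for expert review."""
    p_hate = model.predict_proba([post])[0][1]
    if abs(p_hate - 0.5) < REVIEW_BAND:
        # Expert-supplied labels would later be folded back into retraining.
        return ("human_review", p_hate)
    return ("hateful" if p_hate >= 0.5 else "ok", p_hate)

print(triage("some new social media post"))
```

The design point is the feedback loop: expert review handles exactly the cases the model is least sure about, which is where domain insight adds the most value.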

With funding from the UK Economic and Social Research Council and the Alfred Landecker Foundation, we have been able to get our technologies into the right hands. The Dashboard has been embedded within a range of civil society organisations whose remit is to protect minorities, depolarise debates and strengthen democracy.

The use of the HateLab Dashboard has led to novel research findings and real-world impact. Modern forms of hate appear to be highly sensitive to ‘trigger events’. Our research has found that terror attacks, court cases, sporting fixtures, political votes and a myriad of other events act as ‘releasers’ of prejudice for a limited period of time. These trigger events can bolster the hate of members of the extreme right but, perhaps more worryingly, can also ‘activate’ the average person to express their usually deeply suppressed prejudices. While such people may not take to the streets to commit hate crimes, they might log onto Twitter and post grossly offensive content. These ‘one-off hateful tweeters’ get caught up in the online frenzy of negativity that tends to follow events of national and international interest. Using the HateLab Dashboard, we have found evidence that hateful posting by ‘average’ social media users activated by a trigger event has a ‘half-life’: activation is short-lived, and hateful tweeting typically subsides within a few days of the inciting event.
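The ‘half-life’ metaphor can be made concrete: if daily hateful-post counts after a trigger event decay roughly exponentially, the half-life falls out of a simple curve fit. The sketch below uses invented daily counts purely to show the calculation; it is not HateLab data or HateLab's published method.

```python
# Illustrative only: estimating the "half-life" of post-event hateful
# posting by fitting an exponential decay. The daily counts are invented.
import numpy as np
from scipy.optimize import curve_fit

days = np.arange(8)                                       # days since event
counts = np.array([900, 510, 280, 160, 95, 50, 30, 18])   # hypothetical

def decay(t, n0, lam):
    """Exponential decay: n0 posts at the event, decaying at rate lam."""
    return n0 * np.exp(-lam * t)

(n0, lam), _ = curve_fit(decay, days, counts, p0=(900.0, 0.5))
half_life = np.log(2) / lam
print(f"Estimated half-life: {half_life:.1f} days")       # ~1.2 days here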

HateLab research has also found that those who post online hate speech can be placed into a typology. At the top of the hater hierarchy is the mission hate-poster. They tend to specialise in hateful posts and are morally driven, seeing themselves as tasked with a ‘mission’ to subjugate certain minority groups online. Retaliatory hate-posters are part-time haters who are triggered by an event to feel threatened and afraid. Defensive hate-posters are activated when they feel that their territory or moral space is being invaded or threatened. Finally, thrill-seeking hate-posters may not hold hateful attitudes towards their targets, and instead may be motivated by their peer group and a desire to be accepted. Some hate-posters move between these types: a retaliatory hate-poster can escalate to a mission hate-posting pattern, and a defensive hate-poster can de-escalate to a thrill-seeking hate-posting pattern. Knowing that not all hate-posters are the same is important for evaluating which are the most susceptible to counter-hate interventions.
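For readers who work with such taxonomies computationally, the typology and the movements between types can be encoded as a small state model. This is an illustrative encoding only: the type names come from the research above, but the transition table includes just the two example moves mentioned, and any real analysis would derive transitions empirically.

```python
# Illustrative encoding of the hate-poster typology as a small state model.
# Only the two transitions mentioned in the text are included.
from enum import Enum

class HatePosterType(Enum):
    MISSION = "mission"
    RETALIATORY = "retaliatory"
    DEFENSIVE = "defensive"
    THRILL_SEEKING = "thrill-seeking"

# Escalation and de-escalation paths named in the research narrative.
TRANSITIONS = {
    HatePosterType.RETALIATORY: {HatePosterType.MISSION},       # escalation
    HatePosterType.DEFENSIVE: {HatePosterType.THRILL_SEEKING},  # de-escalation
}

def can_transition(src: HatePosterType, dst: HatePosterType) -> bool:
    return dst in TRANSITIONS.get(src, set())

print(can_transition(HatePosterType.RETALIATORY, HatePosterType.MISSION))  # True
```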

Most statistics on online hate significantly underestimate the extent of the problem, as most incidents go unreported. By monitoring in real time rather than waiting for victims to file reports, the HateLab Dashboard provides a direct observation of prevalence. It is designed to facilitate near-real-time detection of and response to online hate speech, including targeted counter-speech at repeat perpetrators and pre-empting outbreaks of hate crime on the streets. Importantly, HateLab research has found that counter-hate speech targeted at the most susceptible (e.g. non-mission hate-posters) is effective in stemming the production of online hate.
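In dashboard terms, that finding suggests a simple triage rule: direct scarce counter-speech resources at non-mission posters, where they are most likely to work. A hypothetical filter, building on the typology sketch above, might look like this; the record format and account handles are invented for illustration.

```python
# Illustrative only: selecting accounts for counter-speech outreach.
# The rule (skip mission hate-posters) reflects the research finding that
# non-mission posters are most susceptible; the data format is hypothetical.
def counter_speech_targets(flagged_posters):
    """flagged_posters: iterable of dicts with 'account' and 'type' keys."""
    return [p["account"] for p in flagged_posters
            if p["type"] != "mission"]

flagged = [
    {"account": "@user_a", "type": "retaliatory"},
    {"account": "@user_b", "type": "mission"},
    {"account": "@user_c", "type": "thrill-seeking"},
]
print(counter_speech_targets(flagged))   # ['@user_a', '@user_c']
```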

Perhaps most importantly, HateLab research has found that online hate speech predicts offline hate crimes in the UK, corroborating similar findings from studies conducted in Germany and the United States. While it is perhaps clear that real-world trigger events (such as Covid, Brexit, and speeches by politicians and public figures) can give rise to waves of online hatred, it is less obvious that the level and timing of online hate speech might be associated with, and contribute to, higher levels of physical violence. Knowing that this pattern connecting trigger events to both online and offline violence exists is key to protecting minority communities, and HateLab’s ambition is to further harness technology to help organisations predict outbreaks and then safeguard those at risk.
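The online-offline link is, at heart, a time-series association: past online hate levels carry information about subsequent offline hate crime counts. Purely as an illustrative sketch, with synthetic data and a far simpler method than the statistical models used in the published studies, a one-week lagged correlation can be computed like this:

```python
# Illustrative only: does last week's online hate speech volume correlate
# with this week's offline hate crime count? Data are synthetic; the
# published studies used more rigorous statistical models.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
online_hate = rng.poisson(200, weeks).astype(float)
# Synthetic offline series that partly follows online hate one week later.
offline_crime = 20 + 0.05 * np.roll(online_hate, 1) + rng.normal(0, 2, weeks)

# Pair week t offline crimes with week t-1 online hate (drop week 0).
lagged_online = online_hate[:-1]
offline = offline_crime[1:]
r = np.corrcoef(lagged_online, offline)[0, 1]
print(f"Lag-1 correlation: {r:.2f}")   # positive on this synthetic series
```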

Supported by generous grant funding, we currently provide the Dashboard to a small group of civil society organisations, but we have great ambitions for the future. We want to grow the list of civil society organisations that can access our technology so that we can help create a worldwide network to monitor and counter online hate speech and divisive disinformation and, in doing so, hold big tech to account.

Matthew Williams is founder and director of HateLab, and author of The Science of Hate, published by Faber and Faber. He is Professor of Criminology at Cardiff University and is widely regarded as one of the world’s foremost experts in hate crime. He advises and has conducted research for the UK Home Office, the Ministry of Justice, the Foreign, Commonwealth & Development Office, the US Department of Justice and Google, among others. His research has appeared in documentaries for BBC One (Panorama, Crimewatch), BBC Two, BBC Radio 4 (Today, File on 4), ITV (Exposure), CBS and Amazon Studios, and in major publications including the Guardian, the Independent, the Times, the Herald, the Los Angeles Times, Scientific American and New Scientist.

