Decoding Antisemitism

Combating Online Hate and Imagery with Artificial Intelligence


Halle, Hanau, Christchurch: these cities have become synonymous with deadly racist attacks against minorities. What they have in common is that the perpetrators radicalized themselves on online platforms where users could propagate hatred against Jews and other minorities undisturbed.

Antisemitism, and hate speech in general, is being expressed ever more openly and shamelessly in the digital sphere, which gives people the opportunity to abuse and threaten minorities with very little risk of repercussions. To make matters worse, this hatred is shared and spread online around the clock, increasing the risk that more people will be incited to hate.

That is why the Alfred Landecker Foundation has set up “Decoding Antisemitism”, a three-year project with the Center for Research on Antisemitism at the Technical University of Berlin, King’s College London and other renowned scientific institutions in Europe. The foundation funds the project with nearly 3 million euros. While the focus is initially on Germany, France and the United Kingdom, the research will later be expanded to other countries and languages.

Computers will assist the research team in decoding implicit antisemitism

Combating Antisemitism Online

Studies have shown that the majority of antisemitic defamation is expressed not openly but in hidden ways: through codes (“juice” instead of “Jews”), allusions to certain conspiracy narratives, or the reproduction of stereotypes, especially in images.

Implicit antisemitism is not only much harder to detect, but also harder for the state to punish. This is one reason most defamatory statements against Jews go unsanctioned, and why the real dimensions of the problem need to be unmasked.

Innovation is required to tackle this issue because conventional programs for detecting online antisemitism fail to identify its most common form: implicit defamation. What is needed is an approach that takes into account implicit hatred, contextual information and cultural norms.

To recognize and combat not only explicit but also implicit hatred more quickly, an international team of discourse analysts, computational linguists and historians will develop a highly complex program driven by artificial intelligence (AI). The computers will be “fed” the results of qualitative linguistic and visual content analysis and will use them to train algorithms that are continuously tested. One of the aims is to develop an open-source tool that, by the end of the pilot phase, is able to scan websites and social media profiles for implicitly antisemitic content.
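For illustration only, the minimal sketch below shows the kind of supervised-learning loop this describes: comments annotated by analysts are used to train a text classifier, which is then re-tested on held-out annotations. The texts, labels, and model choice (a character n-gram classifier built with Python and scikit-learn) are placeholder assumptions for the sketch, not the project’s actual corpus, annotation scheme, or tooling.

```python
# Illustrative sketch only: placeholder data and a deliberately simple model,
# not the Decoding Antisemitism project's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Placeholder corpus: in practice, each comment would carry a label assigned
# by discourse analysts during qualitative content analysis.
comments = [
    "neutral comment about current events",        # placeholder text
    "comment alluding to a conspiracy narrative",  # placeholder text
    "comment using a coded spelling of a slur",    # placeholder text
    "unrelated comment about sports",              # placeholder text
] * 25  # repeated only so the toy train/test split below has enough samples
labels = [0, 1, 1, 0] * 25  # 1 = implicitly antisemitic (per analyst annotation)

# Character-level n-grams can surface coded spellings ("juice") that
# word-level features alone would miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 5)),
    LogisticRegression(max_iter=1000),
)

# Continuous testing: hold out part of the annotated data and re-evaluate
# whenever new annotations are fed back into training.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.25, random_state=0, stratify=labels
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

The character n-gram features are used here only to hint at how coded spellings might be picked up; handling context, allusion and imagery, as the project intends, requires far more than this sketch shows.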

The interdisciplinary approach, bringing together research disciplines from linguistics to antisemitism studies and machine learning, is unique to date. For the first time, researchers will take into account that most antisemitic abuse is expressed implicitly, whether because users fear legal punishment, because they are unaware of what they are doing, or because they want to protect their self-image, believing that they are not antisemites at all but are simply revealing an allegedly suppressed truth.

