The Foundation has joined forces with the “Center for Research on Antisemitism” at the Technical University of Berlin, King’s College London and other renowned scientific institutions in Europe and Israel.
To recognize and combat implicit hatred more quickly, the international team of discourse analysts, computational linguists and historians will develop a highly complex, AI-driven approach to identifying online antisemitism. The combination of these research disciplines is unique to date, both in its setup and in the subject matter of the analysis itself. Computers will help sift through volumes of text and images far too large for humans to assess. Studies have also shown that the majority of antisemitic defamation is expressed implicitly – for example through codes (“juice” instead of “Jews”), allusions to certain conspiracy narratives, or the reproduction of stereotypes, especially in images. Because implicit antisemitism is much harder to detect, combining qualitative and AI-driven approaches will allow for a more comprehensive search.
Additionally, implicit antisemitism is harder to punish – and social media companies, already found wanting when it comes to limiting hate speech on their platforms, have proven very reluctant to act on such hidden hatred against Jews. The effect is that online users are emboldened to continue spreading and sharing their hateful messages. The problem has recently been exacerbated, as seen in the rise of conspiracy myths accusing Jews of creating and spreading COVID-19.
The Alfred Landecker Foundation wants to promote a public discourse in which hateful voices are not allowed to dominate. This is why one of the aims of the project is to develop an open-source tool that can be used on websites and is compatible with social media profiles. The idea is to support freedom of speech while ensuring that antisemites and racists do not drive away those interested in respectful discussion.
Dr. Andreas Eberhardt, CEO of the Alfred Landecker Foundation: “Antisemitism and hatred directed against minorities are putting the future of our open society in jeopardy. And the problem is only getting worse in the digital sphere. It’s essential that we use innovative approaches – such as using AI – to tackle these issues head on. The Alfred Landecker Foundation is committed to partnering with organisations, such as those involved with Decoding Antisemitism, that share our values to help build a future in which minorities are protected.”
Dr. Matthias J. Becker, linguist and project lead of “Decoding Antisemitism” at the Technical University Berlin: “We see that hate speech online and hate crimes are to some extent always connected. To prevent more and more users from becoming radicalized on the web, it is important to identify the real dimensions of antisemitism – including the implicit forms that may become more explicit over time.”
Dr. Daniel Allington, Senior Lecturer in Social and Cultural Artificial Intelligence, King’s College London: “Internet companies are failing to stem the tide of online hate. The task is difficult because hatred is often expressed in subtle ways and constantly changes form. But machine learning can serve as a force multiplier, extending the ability of human moderators to identify content that may need to be removed. We are looking forward to collaborating with all those involved in this project, and we are grateful to the Alfred Landecker Foundation for their support – it is only through partnerships such as this, and the support they are providing, that we can hope to make progress towards protecting minorities in these hard-to-reach spaces.”
The project will initially focus on Germany, France and the United Kingdom, and will later be expanded to cover other countries and languages.