AlgorithmWatch
Auditing Algorithms for Systemic Risks

Evaluation and analysis of algorithms and their effects on society


The Alfred Landecker Foundation is funding the project "Auditing Algorithms for Systemic Risks" by AlgorithmWatch, an organisation that monitors and analyses algorithms and their impact on society. The goal is to create more transparency and democratic control over automated decision-making processes.

Algorithms influence large parts of everyday life: which content we see on social media, whom we come into contact with online, and which opinions we are confronted with – or not. This poses a serious risk to democracy in the digital world, because algorithms can reinforce prejudice, spread hate speech and disinformation, and deepen social polarisation. The lack of transparency about how they work, especially on social media, makes algorithms a black box that has so far largely evaded external scrutiny and democratic control. The few existing guidelines have been non-binding and not widely implemented. With the EU's Digital Services Act, adopted in 2022, science and civil society can perform so-called "algorithm auditing": for the first time, third parties are granted direct access to platforms' AI systems.

This is where AlgorithmWatch's project comes in: audits will analyse machine-learning models and identify systemic risks. Based on these initial findings, the project will develop methods and governance proposals for enabling "algorithm auditing" on a larger scale. The goal is to make it an effective tool that increases the transparency and accountability of platforms. The project therefore plays an important role in holding platforms accountable and protecting democracy.
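To make the idea more concrete, here is a minimal, hypothetical sketch of one building block of such an audit: treating a ranking system as a black box and measuring whether it over-exposes a category of content relative to that category's share of the candidate pool. Everything in the sketch (the stand-in recommender, the labels, the metric) is an illustrative assumption, not a description of AlgorithmWatch's actual methods.

```python
import random

def black_box_recommender(items, k=10):
    # Stand-in for the system under audit; in a real audit this would be
    # data access granted under the Digital Services Act, not a local function.
    return sorted(items, key=lambda it: it["engagement"], reverse=True)[:k]

def amplification_ratio(pool, recommended, category):
    # Compare the category's share of the top-ranked results with its
    # share of the full candidate pool; a ratio well above 1.0 suggests
    # the system systematically amplifies that category.
    base_rate = sum(it["label"] == category for it in pool) / len(pool)
    exposed_rate = sum(it["label"] == category for it in recommended) / len(recommended)
    return exposed_rate / base_rate

random.seed(0)
# Hypothetical candidate pool: roughly 20% "divisive" items, which the
# simulated system scores slightly higher (an engagement bias).
pool = [{"label": "divisive" if random.random() < 0.2 else "neutral"}
        for _ in range(1000)]
for item in pool:
    item["engagement"] = random.random() + (0.3 if item["label"] == "divisive" else 0.0)

top = black_box_recommender(pool, k=50)
print(f"Amplification of 'divisive' content: "
      f"{amplification_ratio(pool, top, 'divisive'):.2f}x")
```

In a real audit, the black box would be the platform system itself, accessed via the interfaces the Digital Services Act requires, and the risk categories and thresholds would be defined by the audit's research design rather than simulated data.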
