Algorithms influence large parts of everyday life: which content we see on social media, whom we come into contact with online and which opinions we are confronted with – or not. This poses a great risk for shaping democracy in the digital world, because algorithms can reinforce prejudices, spread hate speech and disinformation, and deepen social polarisation. The lack of transparency about how they work, especially on social media, makes algorithms a black box that has so far largely evaded external scrutiny and democratic control. The few existing guidelines have been non-binding and not widely implemented. In 2022, the EU's Digital Services Act opened the way for researchers and civil society to perform so-called "algorithm auditing", making direct third-party access to AI systems possible for the first time.
This is where AlgorithmWatch's project comes in: audits will analyse machine learning models and identify systemic risks. Based on the project's initial findings, methods and governance proposals will be developed for enabling algorithm auditing on a larger scale. The goal is to make auditing an effective tool that increases the transparency and accountability of platforms. The project thus plays an important role in holding platforms accountable and protecting democracy.