The list of negative consequences of how dominant online platforms shape our online experience is neither short nor trivial: exploiting users’ vulnerabilities, triggering psychological trauma, depriving people of job opportunities, and pushing disturbing content at others are just some examples. As members of the European Parliament debate their position on the Digital Services Act, 50 civil society organizations from all over Europe urge them to ensure protection from the harms caused by platforms’ algorithms.
Ad delivery algorithms and recommender systems are responsible for what we see once we visit Facebook or YouTube. Their code may be complicated – it is artificial intelligence after all – but their job isn’t. The goal is to maximize the platforms’ profits from surveillance-based advertising. And that translates to keeping the user on the platform so that they can watch more ads while leaving more and more traces to be collected by data-hungry algorithms.
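The logic described above can be made concrete with a deliberately simplified sketch. This is a hypothetical toy model, not any platform’s actual code: it ranks items purely by predicted time-on-platform, and nothing in it weighs the harms the content may cause.

```python
# Illustrative toy ranker (hypothetical, not any platform's real system):
# items are ordered solely by how long the model predicts a user will stay,
# because more time on the platform means more ads viewed.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_watch_seconds: float  # model's guess at user dwell time
    divisiveness: float             # 0..1, how polarizing the content is

def rank_by_engagement(items: list[Item]) -> list[Item]:
    """Order items to maximize expected time on platform.

    Note that divisiveness is never consulted: if divisive items happen
    to keep users watching longer, they rise to the top anyway.
    """
    return sorted(items, key=lambda it: it.predicted_watch_seconds, reverse=True)

feed = [
    Item("calm documentary", predicted_watch_seconds=40, divisiveness=0.1),
    Item("outrage clip", predicted_watch_seconds=90, divisiveness=0.9),
    Item("news summary", predicted_watch_seconds=55, divisiveness=0.3),
]

for item in rank_by_engagement(feed):
    print(item.title)  # the most divisive item prints first
```

The point of the sketch is the omission: the objective function only rewards engagement, so any side effect on users or public debate is invisible to the optimizer.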
But you can’t make an omelet without breaking eggs. In this case, the eggs are users’ self-image (affected by the algorithm’s choice of photos on Instagram), the quality of public debate (recommender systems notoriously promote divisive, sensationalist content), and access to job offers (data-driven ad delivery algorithms, which select viewers from larger sets of eligible targets, have been shown to discriminate against people based on gender, race, or age).
Platforms know more about their users than they tell them. Every bit of a user’s online activity, on and off the platform, is used to make predictions about them and to determine which content they will, or will not, see. Advertisers may not intend to discriminate against anyone, but the algorithmic fixation on campaign targets can have that effect.
The pile of evidence on the harmful consequences of algorithms used by large online platforms is growing, although investigating these systems remains difficult due to their pervasive opacity.