Content moderation on social networks is becoming increasingly complex. Due to the widespread use of video shorts, potentially harmful content is still being assessed directly by human employees.

Efforts by various technology companies to identify and remove violent, dangerous or misleading content on their platforms had already proven insufficient when social networks relied mainly on text, as Facebook and Twitter did, or on images, as was long the case with Instagram. Today, with millions of people consuming video almost continuously on these platforms, content moderation has become an unprecedented challenge for the companies. Moderation is in itself a complex task that technology companies have often downplayed in the past, largely because of the economic outlay it requires. Hiring moderators who are familiar with the language and context in which content is created is not only complicated but also costly, not least because the mental health of employees, who are exposed to potentially traumatic content, must be protected.

In the wake of TikTok's success, platforms - led by Instagram and YouTube - have invested heavily in video shorts, so much so that the format has grown conceptually more complex, eventually evolving into a kind of three-dimensional meme in which audio, images and text are interwoven. This makes moderation even more complex, and not just for machines. «Footage of a desolate landscape with windblown weeds rolling around would not be problematic in itself, either for a human or a machine, any more than an audio clip of someone saying, 'Look how many people love you' would be. If you combine the two to insult someone and tell them that no one loves them, a computer will never understand that», The Atlantic notes. The same thing happened with a video, widely shared on TikTok, of American First Lady Jill Biden visiting cancer patients: in the background you could hear whistles and jeers that a user had added in post-production. This is why only 40 per cent of the videos removed from a social network like TikTok are handled by an AI - the remaining checks (and we are talking about millions) are still the work of flesh-and-blood employees, with all the consequences that entails.

Paradoxically, platforms tend to amplify videos with disturbing or inaccurate content because, when surfacing them again, they only evaluate the number of user interactions, not the quality of those interactions. If people have watched a video repeatedly or commented on it because they found it unfair, for example, the platform will still count that interaction as "positive" and may then amplify similar content in their feeds. «Prolonged exposure to manipulated content can increase polarisation and reduce viewers' ability and willingness to distinguish between truth and fiction», the New York Times reports. Journalist Tiffany Hsu, an expert on the internet and misinformation, adds that «the danger of misleading content lies not necessarily in individual posts, but in the way it further damages the ability of many users to discern what is true and what is not». It is no coincidence, then, that author Clay A. Johnson, in his book The Information Diet, compares the current production of low-quality information to junk food: content that is tempting but of little "nutritional value" to the mind.
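
The mechanism described above can be made concrete with a small, purely illustrative sketch in Python. All names, weights and numbers are invented and do not correspond to any real platform's system: the point is only that a score based on the raw count of interactions treats an indignant rewatch or comment exactly like genuine appreciation, so a manipulated video can end up ranking higher than one people actually liked. A contrasting "quality-aware" variant is included to show the difference.

```python
# Hypothetical illustration: engagement-only ranking treats every interaction
# as a positive signal, regardless of why users interacted.

from dataclasses import dataclass


@dataclass
class Interaction:
    kind: str        # "view", "comment" or "share"
    negative: bool   # True if the user reacted with outrage or disapproval


# Invented weights, purely for the example
WEIGHTS = {"view": 1.0, "comment": 3.0, "share": 5.0}


def engagement_score(interactions: list[Interaction]) -> float:
    """Counts raw interactions only: outrage and appreciation score the same."""
    return sum(WEIGHTS[i.kind] for i in interactions)


def quality_aware_score(interactions: list[Interaction]) -> float:
    """Contrasting sketch: negative interactions no longer boost the video."""
    return sum(WEIGHTS[i.kind] for i in interactions if not i.negative)


if __name__ == "__main__":
    # A manipulated video that people rewatch and comment on out of indignation
    outrage_video = [Interaction("view", True)] * 40 + [Interaction("comment", True)] * 10
    # An ordinary video with fewer, but genuinely positive, interactions
    liked_video = [Interaction("view", False)] * 20 + [Interaction("comment", False)] * 5

    print(engagement_score(outrage_video))     # 70.0 -> gets amplified
    print(engagement_score(liked_video))       # 35.0
    print(quality_aware_score(outrage_video))  # 0.0 -> no longer boosted
```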