AI tools have become essential given the vast amount of content posted daily.

On average, every minute Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload over 500 hours of new content. The volume of data generated is growing exponentially and is currently estimated at around 120 zettabytes per year. Within this flood, a vast amount of terrorist content is posted across the online ecosystem.

While there has already been a degree of automation in detecting terrorist content, AI tools have the potential to further improve content moderation. However, the report qualifies how effective AI can be and how it should be deployed:

“As important as automation is, given the sheer volume of [terrorist] content posted online, both matching-based and classification-based tools have their limitations. Their use must be supplemented by human input, with appropriate oversight mechanisms in place.”
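
To make the idea of supplementing automation with human input concrete, the sketch below shows one hypothetical way a platform might route automated classifier scores: only very high-confidence detections are actioned automatically, while borderline cases are queued for a human moderator. The function names, thresholds and data structure are illustrative assumptions, not taken from the report.

```python
# Hypothetical sketch of human-in-the-loop routing for classifier output.
# Thresholds and names are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    content_id: str
    action: str    # "remove", "human_review", or "allow"
    score: float   # classifier estimate that the content is terrorist material

def route(content_id: str, score: float,
          remove_threshold: float = 0.95,
          review_threshold: float = 0.60) -> ModerationDecision:
    """Action only very high-confidence scores automatically;
    escalate borderline cases to a human moderator."""
    if score >= remove_threshold:
        return ModerationDecision(content_id, "remove", score)
    if score >= review_threshold:
        return ModerationDecision(content_id, "human_review", score)
    return ModerationDecision(content_id, "allow", score)

# Example: a borderline score is not removed automatically.
print(route("post-123", 0.72))  # -> action="human_review"
```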

The report found that most automated content-based tools rely either on matching images and videos against a database of known material or on machine-learning classifiers. Both approaches have shortcomings, including the difficulty of compiling suitable training data and algorithms that lack cultural sensitivity.
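
As a rough illustration of the matching-based approach, the snippet below checks an upload’s digest against a hypothetical database of previously identified material. Production systems use perceptual hashes that tolerate re-encoding and cropping; a plain SHA-256 exact match is used here only to keep the sketch self-contained, and a non-match would typically fall through to classification-based screening.

```python
# Minimal sketch of matching-based detection against a shared hash database.
# The database contents and helper names are hypothetical.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of digests of previously identified material.
known_hashes = {sha256_hex(b"previously identified item")}

def matches_known_content(data: bytes) -> bool:
    """True if the upload exactly matches an item already in the database."""
    return sha256_hex(data) in known_hashes

print(matches_known_content(b"previously identified item"))  # True
print(matches_known_content(b"new, unseen upload"))          # False
```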

To address this, the report recommends “developing minimum standards for content moderators, promoting AI tools to safeguard moderator wellbeing, and enabling collaboration across the industry.”

The report’s recommendations come as platforms adapt to the EU’s 2021 Terrorist Content Online Regulation, which requires hosting services to remove terrorist content within one hour of receiving a removal order. While many platforms are expanding automated detection to meet legal requirements, the report cautions that exclusively automated enforcement risks disproportionate impacts on marginalised groups and activists. It calls for human oversight and appropriate accountability mechanisms.
