Say No to Terrorism

For years, content that promotes terrorism has thrived on social media platforms like Facebook and Twitter.

Fighting it is an uphill battle that has forced tech companies to open war rooms and hire new specialists. One solution that companies including Facebook are now betting on: machine learning. In a recent blog post, the social giant detailed the way it's using the technology to identify content that "may signal support for ISIS or al-Qaeda."

Bot Moderators

Facebook engineered an algorithm that assigns each post a score based on the likelihood that it violates the company's counterterrorism policies. If that score crosses a certain threshold, the post is removed immediately, without human moderation.
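To make that logic concrete, here's a minimal Python sketch of threshold-based routing. Facebook hasn't published its model, score scale, or threshold values, so the classifier stub, the specific cutoffs, and the human-review tier below are all illustrative assumptions, not the company's actual system.

```python
# A minimal sketch of score-and-threshold moderation. Everything here
# (names, threshold values, the two-tier routing) is an illustrative
# assumption; Facebook's real scoring system is not public.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: near-certain violations are removed outright
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: ambiguous posts go to a moderator queue


@dataclass
class Post:
    post_id: str
    text: str


def violation_score(post: Post) -> float:
    """Stand-in for a trained classifier; returns P(post violates policy)."""
    # In a real system this would be a model inference call, not keyword matching.
    flagged_terms = ("attack propaganda", "join our fight")
    hits = sum(term in post.text.lower() for term in flagged_terms)
    return min(1.0, hits * 0.5)


def route(post: Post) -> str:
    """Decide what happens to a post based on its violation score."""
    score = violation_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        return "remove"        # removed immediately, no human in the loop
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"  # queued for a moderator to decide
    return "allow"


if __name__ == "__main__":
    posts = [
        Post("1", "Holiday photos from the beach"),
        Post("2", "Attack propaganda: join our fight today"),
    ]
    for p in posts:
        print(p.post_id, route(p))
```

Where the threshold sits is the whole game: set it too low and false positives get deleted automatically; set it too high and the pile of borderline posts falls back on human moderators.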

The blog post is thin on specifics about how the algorithm's scoring system actually works. That's not entirely surprising: it's a high-stakes game of whack-a-mole, and Facebook isn't likely to reveal all of its secrets to the world.

Unappealing Truth

Facebook is quick to admit there is no perfect system, or at least that it hasn't found one yet. Luckily, in April it updated its appeals process, giving users a way to contest false positives flagged by the algorithm.

It's a step in the right direction: we know that neither human moderation nor machine learning alone will be enough to remove all terrorist content from social media.

READ MORE: How Facebook uses machine learning to fight ISIS and Al-Qaeda propaganda [MIT Technology Review]

More on terrorism on social media: Facebook Needs Humans *And* Algorithms To Filter Hate Speech
