After receiving criticism about social media platforms not doing enough to prevent the sharing of extremist content, Facebook has revealed plans to do more to help in the fight.
“Tragically, we have seen more terror attacks recently,” said Monika Bickert, the head of global policy management at Facebook. “As we see more attacks, we see more people asking what social media companies are doing to keep this content offline.”
In a blog post on Thursday detailing its plan, Facebook said it hopes the AI system will eventually teach itself to identify key phrases, images and pages associated with known terrorist groups.
“Ideally, one day our technology will address everything,” Ms. Bickert said. “It’s in development right now.” But human moderators, she added, are still needed to review content for context.
Facebook currently employs a team of 150 specialists, working in 30 languages, to conduct those reviews.