On Thursday, Facebook published a post describing its latest efforts to combat terrorism on its platform using artificial intelligence and help from counterterrorism experts. The new security measures are meant to keep dangerous content out of users’ news feeds.
The post is the latest update from the social network on its ongoing content-policing efforts. Facebook itself is the priority right now because of its extensive user base, but policies and techniques that work there usually make their way to Instagram and WhatsApp.
Right now, the social giant is focusing on terrorist groups based in the Middle East, such as ISIS and Al Qaeda, but it hopes these tools will eventually serve as countermeasures against any similar organization.
Facebook is teaching AIs everything they need to know about terrorism
One of the leading technologies Facebook uses to block content that promotes terrorism is artificial intelligence. Machine learning models can detect and remove terrorist content from the network faster and more efficiently than human moderators can.
Facebook’s AI has been trained on a full data feed of comments, posts, photos, and videos that depict terrorist activity or are affiliated with it. This way, the system knows what to look for, and remove, in future scans.
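Facebook has not published its models, but the idea of learning from a labeled feed can be sketched with a toy bag-of-words scorer. Everything here is illustrative: the sample posts, the `risk_score` function, and the scoring rule are assumptions standing in for the real, far more sophisticated classifiers.

```python
from collections import Counter

# Toy labeled data standing in for Facebook's (non-public) training feed.
flagged_posts = ["join the fight attack now", "attack the city join us"]
benign_posts = ["join us for dinner tonight", "city marathon this weekend"]

def word_scores(flagged, benign):
    """Score each word by how much more often it appears in flagged posts."""
    f = Counter(" ".join(flagged).split())
    b = Counter(" ".join(benign).split())
    return {word: f[word] - b[word] for word in f}

SCORES = word_scores(flagged_posts, benign_posts)

def risk_score(post: str) -> int:
    """Sum the learned word scores over a new post; higher means riskier."""
    return sum(SCORES.get(word, 0) for word in post.split())

print(risk_score("attack now"))        # 3
print(risk_score("dinner tonight"))    # 0
```

In practice a production system would use trained neural models rather than word counts, but the pipeline shape is the same: learn from labeled examples, then score new content before it reaches feeds.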
For example, it matches images against previous posts and samples that carry the hallmarks of extremist content. It also studies language patterns, so it can recognize when someone is promoting violent acts or looking to stir up controversy.
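The image-matching step can be sketched as checking each upload against a database of previously removed content. This minimal version uses exact cryptographic hashes; the hash set and function names are hypothetical, and real systems rely on perceptual hashes that survive re-encoding and cropping.

```python
import hashlib

# Hypothetical database of hashes of images that were previously removed.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"previously-removed-image-bytes").hexdigest(),
}

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the upload exactly matches previously removed content."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

print(matches_known_content(b"previously-removed-image-bytes"))  # True
print(matches_known_content(b"a-new-harmless-photo"))            # False
```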
Facebook also targets pages, groups, and broader clusters that may foster terrorism, looking for suspicious new accounts and for strings of connections among people the FBI has labeled persons of interest.
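Tracing those strings of connections is, at its simplest, a graph traversal: start from a known person of interest and walk outward through the friendship graph. The graph below and the seed account are invented for illustration; Facebook has not described its actual clustering algorithms.

```python
from collections import deque

# Hypothetical friendship graph; account "a" is a known person of interest.
GRAPH = {
    "a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"],
    "e": ["f"], "f": ["e"],
}

def connected_cluster(graph, seed):
    """Breadth-first search: collect every account reachable from the seed."""
    seen, queue = {seed}, deque([seed])
    while queue:
        account = queue.popleft()
        for friend in graph.get(account, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return seen

print(sorted(connected_cluster(GRAPH, "a")))  # ['a', 'b', 'c', 'd']
```

Accounts "e" and "f" are untouched because they have no path to the seed; a real system would then rank the flagged cluster for human review rather than act on graph structure alone.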
Facebook hired 150 counterterrorism experts
Of course, technology can’t do it all, and experts step in where machines can no longer discern context or make sensible decisions from the data they analyze. Sometimes terrorist cells are simply not evident enough for algorithms to detect.
That’s where AI shifts roles and becomes a tool that keeps learning from real-world threats and incidents, such as the recent London attack. Sorting through that data helps the machines learn to distinguish genuine threats from false alarms.
More than 150 counterterrorism specialists who speak over 30 languages work tirelessly at Facebook to analyze incoming data from the platform as well as reports and reviews from the community itself.
Beyond its own platform, Facebook has made strategic partnerships with governments and industry leaders to join forces in the fight against terror. Microsoft, YouTube, and Twitter have all pledged to help and to cooperate independently.
The company’s commitment lies in fully supporting law enforcement agencies by providing any data they require, promoting encryption to keep users secure, and bringing these security measures to sister platforms like WhatsApp and Instagram as soon as possible.