Facebook Reveals Their Plan to Combat Terrorism

To a typical user, Facebook may not appear to be doing much behind the scenes. In reality, its engineers have built technology that delivers targeted advertising, relevant content, and information designed to keep you engaged and returning to the application. With 1.9 billion users worldwide, some of that user-generated content inevitably contains violent images, profanity, and even extremist views. Now Facebook is revealing how it plans to fight terrorism on the social network.

Facebook has released an outline of new methods for combating this threat using artificial intelligence and other software.

Monika Bickert, Facebook’s Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, said in a recent blog post, “We agree with those who say that social media should not be a place where terrorists have a voice. We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission.”

[2016-12-26] Facebook Headquarters, 1 Hacker Way, Menlo Park, California, USA: visitors photographing the Facebook “like” sign at the entrance (achinthamb / Shutterstock, Inc.)

Facebook will use artificial intelligence to remove terrorist content from the platform. The company currently uses image matching to determine whether an uploaded photo or video matches one previously used by terrorists. It is also developing language-understanding technology that could recognize terrorist propaganda and other text used by terrorists.
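Facebook has not published the internals of its image-matching system, but the general technique it describes works by fingerprinting images and comparing each upload's fingerprint against a database of known terrorist imagery. The sketch below is purely illustrative: it uses a toy difference hash over a grayscale pixel grid, and all function names and thresholds are assumptions, not Facebook's actual implementation.

```python
# Illustrative sketch of hash-based image matching: fingerprint an upload
# and compare it against hashes of known prohibited imagery. The dhash
# scheme and the Hamming-distance threshold here are simplified stand-ins.

def dhash(pixels):
    """Difference hash: one bit per horizontally adjacent pixel pair.

    `pixels` is a 2-D list of grayscale values (rows of ints).
    """
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left < right else 0)
    return tuple(bits)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def matches_known_content(pixels, known_hashes, max_distance=2):
    """True if the upload's hash is near any known-bad hash."""
    h = dhash(pixels)
    return any(hamming(h, k) <= max_distance for k in known_hashes)

# A slightly re-encoded copy of a known image keeps the same gradient
# structure, so its hash stays within the matching threshold even though
# the raw pixel values differ.
known = [[10, 200, 30], [40, 50, 220], [90, 80, 70]]
altered_copy = [[12, 198, 33], [41, 52, 219], [88, 79, 71]]
unrelated = [[250, 10, 240], [5, 240, 6], [200, 10, 190]]

database = {dhash(known)}
print(matches_known_content(altered_copy, database))  # True
print(matches_known_content(unrelated, database))     # False
```

The point of hashing rather than comparing raw bytes is that minor edits (recompression, small color shifts) leave the fingerprint close enough to still match.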

Ultimately, Facebook is trying to identify and remove terrorist clusters by taking down Pages and groups that support terrorism, and by determining whether an account is connected to other accounts that have already been removed for terrorism. The company is also actively removing fake accounts and developing artificial intelligence to handle other abuses on the platform.
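The cluster-detection idea above can be pictured as a simple graph check: flag accounts that have an unusual number of connections to accounts already removed for terrorism. The sketch below is a hypothetical illustration of that principle; the function name, the friend-graph shape, and the `min_links` threshold are all invented for the example, not Facebook's real system.

```python
# Hypothetical sketch of connection-based cluster detection: surface
# accounts linked to multiple accounts already removed for terrorism.

def flag_related_accounts(friend_graph, removed, min_links=2):
    """Return still-active accounts with >= min_links removed connections.

    `friend_graph` maps each account to the set of accounts it is
    connected to; `removed` is the set of already-removed accounts.
    """
    return {
        account
        for account, friends in friend_graph.items()
        if account not in removed and len(friends & removed) >= min_links
    }

graph = {
    "acct_a": {"acct_x", "acct_y", "acct_z"},  # two removed connections
    "acct_b": {"acct_x", "acct_q"},            # one removed connection
    "acct_c": {"acct_q"},                      # no removed connections
}
print(flag_related_accounts(graph, removed={"acct_x", "acct_y"}))
# {'acct_a'}
```

In practice a flagged account would go to human review rather than automatic removal, consistent with Facebook's caveat below that artificial intelligence cannot make every judgment on its own.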

That said, artificial intelligence cannot do everything. Facebook offered an example: “A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but could be an image in a news story.”

Consequently, Facebook will continue to hire specialists to review flagged content. The company also noted it is not acting alone: it shares this information with Microsoft, Twitter, and YouTube.