Facebook now has over two billion users, and as the company continues to grow, so does the social network’s public responsibility. One of its biggest challenges is creating an environment that fosters civil discourse, which Facebook handles in part by targeting and deleting hate speech. The company’s guidelines for moderating hate speech were revealed Wednesday after ProPublica investigated how and why the social network created and implemented its moderation system.
If the evidence uncovered by ProPublica is accurate, the company’s policy on handling hate speech produces disparities in how comments about particular groups of people are treated, disparities that are arguably unethical. In fact, the report asserts Facebook has built a comment moderation system that protects “white men from hate speech, but not black children.”
Facebook Moderation System
Facebook has tried to build a global set of rules on what is and what is not considered hate speech. But the company’s logic, according to ProPublica, is likely to spark outrage among people who feel that the company already fails to create a safe environment.
Protected Categories versus Unprotected Categories
Facebook appears to claim that you cannot, or should not, attack someone for things that are out of their control: the protected categories, which include race, gender, sexuality, and religious affiliation. Facebook moderators delete hate speech directed at these protected categories. However, Facebook considers hate speech targeting non-protected classes, such as someone’s social class, job, appearance, or age, to be acceptable.
For example, the report includes a Facebook slide asking, “Which of the below subsets do we protect?” The options are female drivers, black children, and white men (the section for white men features a picture of the ’90s boy band the Backstreet Boys). Can you guess the correct answer?
According to ProPublica, Facebook’s policies directed moderators to delete hate speech against white men because they fall into a “protected category.” Meanwhile, the other two examples in the slide, black children and female drivers, fall into non-protected categories, and as a result hate speech against them is allowed.
Ultimately, Facebook’s guidelines on protected and unprotected groups lead to political and ethical problems. If a post targets members of a protected class, such as women, it will be penalized. If it targets a smaller group, an unprotected subset within a protected category such as women drivers, it will not.
Dave Willner, a former Facebook content moderator who helped build the guidelines, told ProPublica the social network’s moderation system is “more utilitarian than we are used to in our justice system … It’s fundamentally not rights-oriented.”
Facebook, Moderation, and Terrorism
Earlier this month, Facebook announced how it plans to fight terrorism on its platform. In an outline, the company pointed to new methods for combating the threat of terrorism using artificial intelligence.
Monika Bickert, Facebook’s Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, said in their latest blog post, “We agree with those who say that social media should not be a place where terrorists have a voice. We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission.”