How Facebook monitors harmful content: 5 takeaways

Facebook CEO Mark Zuckerberg speaks during the F8 Facebook Developers Conference in San Jose, Calif., May 1, 2018. (Justin Sullivan/Getty Images)

by Andrew Silow-Carroll

(JTA) — Facebook founder Mark Zuckerberg insists that the massive social network is a force for good, but the company keeps getting called out for being slow or unable to curb the worst tendencies of users and abusers. In the latest probe of its practices, The New York Times scrutinizes Facebook’s attempts to police its users and remove dangerous content, and suggests the operation is both ad hoc and capricious.

Not mentioned in the latest article are the requests by Jewish groups that Facebook provide more transparency on its policies in order to curb anti-Semitism and hate speech. When Facebook released a fuller version of its Community Standards in April, the Anti-Defamation League encouraged the company “to explain how hate content spreads on the platform, and how their policies are enforced in ways consistent with both internal standards and with the ethical standards of civil society.”

The latest probe suggests Facebook is still struggling to enforce those standards.

Facebook’s guidelines for moderators are ‘byzantine.’

According to the more than 1,400 pages from the standards rulebooks obtained by The Times, the guidelines for Facebook’s global army of moderators are “byzantine,” “baffling” and “head-spinning.”

“Moderators must sort a post into one of three ‘tiers’ of severity,” The Times reports. “They must bear in mind lists like the six ‘designated dehumanizing comparisons,’ among them comparing Jews to rats.”

Standards shift according to the political winds.

Facebook is trying to crack down on the far right, banning the Proud Boys, a far-right American group, and even blocking an inflammatory ad “about a caravan of Central American migrants that was produced by President Trump’s political team.” But the decisions can be capricious, based on knowledge (and sometimes ignorance) of local standards in the multitude of countries in which Facebook operates.

“In June, according to internal emails reviewed by The Times, moderators were told to allow users to praise the Taliban — normally a forbidden practice — if they mentioned its decision to enter into a cease-fire. In another email, moderators were told to hunt down and remove rumors wrongly accusing an Israeli soldier of killing a Palestinian medic,” according to The Times.

The reporter, Max Fisher, adds: “The company never set out to play this role, but in an effort to control problems of its own creation, it has quietly become, with a speed that makes even employees uncomfortable, what is arguably one of the world’s most powerful political regulators.”

Facebook relies on local pressure to root out hate.

Facebook prohibits users from supporting or praising hate groups. But while it keeps an internal list of groups and individuals it designates as hate figures, the methodology can be inconsistent, say experts on extremism. Government pressure, or inaction, often plays a role.

“The company bans an impressive array of American and British groups … but relatively few in countries where the far right can be more violent, particularly Russia or Ukraine,” The Times reports.

Germany, which takes hate speech seriously, has seen dozens of far-right groups in the country blocked by Facebook; Austria, which is less stringent, only one.

Provocation is baked into the algorithm.

Facebook’s business model is at odds with its goals to be a good corporate citizen.

“The platform relies on an algorithm that tends to promote the most provocative content, sometimes of the sort the company says it wants to suppress,” according to The Times.

There are no easy answers.

Facebook seems to be paying a price for being something between a public utility and a vehicle for selling advertising. How can it fix its monitoring of hate speech and dangerous content when local standards can be opaque and local governments repressive?

Suggestions range from some sort of partnership arrangement with local governments (inviting a host of other censorship and incitement problems), to deferring more decisions to moderators with a better understanding of local culture and politics, to tweaking its own algorithms, which are intended to maximize views and audience.

In a long note posted Friday, December 28, 2018, on, where else, Facebook, Zuckerberg explains the steps the company is taking to prevent election interference, ensure users’ privacy and stop the spread of harmful content.

“We’ve built AI systems to automatically identify and remove content related to terrorism, hate speech, and more before anyone even sees it. These systems take down 99% of the terrorist-related content we remove before anyone even reports it, for example,” Zuckerberg writes. “We’ve improved News Feed to promote news from trusted sources. We’re developing systems to automatically reduce the distribution of borderline content, including sensationalism and misinformation. We’ve tripled the size of our content review team to handle more complex cases that AI can’t judge. We’ve built an appeals system for when we get decisions wrong. We’re working to establish an independent body that people can appeal decisions to and that will help decide our policies. We’ve begun issuing transparency reports on our effectiveness in removing harmful content. And we’ve also started working with governments, like in France, to establish effective content regulations for internet platforms.”
