Facebook is slipping after announcing it disabled more than 583 million fake accounts


Facebook today published its first-ever Community Standards Enforcement Report, detailing what kind of action it took on content displaying graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, and spam.

Facebook pulled, or slapped warnings on, almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech in the first three months of 2018, the social media giant said Tuesday.

It's a major effort towards transparency from Facebook in the wake of the Cambridge Analytica scandal.

Facebook removed 21 million pieces of adult nudity and sexual activity in Q1 2018, 96% of which was found and flagged by its technology before it was reported.

Responses to rule violations include removing content; adding warnings to content that may be disturbing to some users while not violating Facebook's standards; and notifying law enforcement in the case of a "specific, imminent and credible threat to human life".

The new disclosures come after Facebook published for the first time its full "Community Standards": the internal rules it uses to determine exactly what is and isn't allowed on the social network.

The company's systems were first to spot more than 85 percent of the graphically violent content it took action on, and nearly 96 percent of the nudity and sexual content.

Now, however, artificial intelligence technology does much of that work. For example, with terrorist propaganda, Facebook says its increased removal rate is due to improvements in photo detection technology that can spot both old and newly posted content. The company found and flagged 95.8% of such content before users reported it.

Hate speech: In Q1, the company took action on 2.5 million pieces of such content, up about 56% from 1.6 million during Q4.

"Hate speech content often requires detailed scrutiny by our trained reviewers to understand context and decide whether the material violates standards, so we tend to find and flag less of it, and rely more on user reports, than with some other violation types", the report says.

Throughout the report, Facebook shares how the most recent quarter's numbers compare to those of the quarter before it, and where there are significant changes, it notes why that might be the case. It also purged 583 million fake accounts, "most of which were disabled within minutes of registration".

Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4.
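Those quarter-over-quarter percentages follow directly from the counts the report gives. As a quick illustrative check (our own sketch, not anything from Facebook's report; the pct_change helper is hypothetical):

```python
def pct_change(old: float, new: float) -> float:
    """Return the percent change from old to new."""
    return (new - old) / old * 100

# Hate speech: 1.6 million pieces actioned in Q4 2017 -> 2.5 million in Q1 2018
print(f"Hate speech: {pct_change(1.6, 2.5):.0f}%")  # ~56%

# Spam: 727 million pieces actioned in Q4 2017 -> 837 million in Q1 2018
print(f"Spam: {pct_change(727, 837):.0f}%")         # ~15%
```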

The company estimates that 3% to 4% of its monthly active users are "fake", up from 2% to 3% in Q3 of 2017, according to filings.

Almost all of that spam was flagged by Facebook's systems before any users reported it. Hate speech was the exception: only 38 percent had been detected through Facebook's own efforts, with the rest flagged by users. "Our metrics can vary widely for fake accounts acted on", the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them".
