Facebook closed 583m fake accounts in first three months of 2018

Fake accounts have drawn increased attention in recent months after it was revealed that Russian agents used them to buy ads in an attempt to influence the 2016 US presidential election.

According to the numbers, which cover the six-month period from October 2017 to March 2018, Facebook's automated systems remove millions of pieces of spam, pornography, graphic violence and fake accounts quickly, but hate speech, including terrorist propaganda, still requires extensive manual review to identify.

Though Facebook extolled its forcefulness in removing content, the average user may not notice any change.

The company took action on 21 million pieces of nude and sexual content during the period, the same as in the final quarter of 2017.

While Facebook uses what it calls "detection technology" to root out offending posts and profiles, the software has difficulty detecting hate speech.

Several categories of violating content outlined in Facebook's moderation guidelines - including child sexual exploitation imagery, revenge porn, credible violence, suicidal posts, bullying, harassment, privacy breaches and copyright infringement - are not included in the report.

The prevalence of graphic violence was higher, at an estimated 22 to 27 views per 10,000 content views - an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said.

The problem is that, as Facebook's VP of product management Guy Rosen wrote in the blog post announcing today's report, AI systems are still years away from becoming effective enough to be relied upon to catch most bad content.

Most of the content was found and flagged before users had a chance to spot it and alert the platform. "This increase is mostly due to improvements in our detection technology", the report notes.

The company previously enforced its community standards by having users report violations, which trained staff would then deal with.

That prevalence figure is based on views rather than posts: it does not mean that 0.22% of the content posted on Facebook contained graphic violence, only that graphic content accounted for roughly 0.22% of total views.
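To make that distinction concrete, here is a minimal sketch of how a view-weighted prevalence figure can differ from a simple post count; the numbers and variable names are illustrative assumptions, not taken from Facebook's report.

```python
# Illustrative sketch only: contrasts post-based and view-based prevalence.
# Each tuple is (is_violating, view_count) for a hypothetical batch of posts.
posts = [
    (True, 500),   # one violating post that happened to be viewed widely
    (False, 100),
    (False, 150),
    (False, 250),
]

violating_posts = sum(1 for violating, _ in posts if violating)
violating_views = sum(views for violating, views in posts if violating)
total_views = sum(views for _, views in posts)

post_prevalence = violating_posts / len(posts)   # 1 of 4 posts      -> 25%
view_prevalence = violating_views / total_views  # 500 of 1,000 views -> 50%

print(f"Post-based prevalence: {post_prevalence:.0%}")
print(f"View-based prevalence: {view_prevalence:.0%}")
```

Facebook reports the view-based version, which rises when violating content is seen more often even if the number of violating posts stays flat.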

On Tuesday, Facebook said it took action on some 2.5 million pieces of hate speech in the first three months of 2018, up from 1.6 million in the last three months of 2017.

The release of the report - the first time the company has made such data public - comes on the heels of a series of other first-ever transparency efforts following the Cambridge Analytica scandal, Facebook's subsequent apologies, and Mark Zuckerberg's many hours of testimony on Capitol Hill.

"We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards", the report says.

He said technology such as artificial intelligence is still years away from effectively detecting most bad content because context is so important. During Q1, the social network flagged 96% of all nudity before users reported it, and it says it found and flagged almost 100% of spam content in both Q1 2018 and Q4 2017.

Facebook also said it removed 583 million fake accounts in the same period, and estimated that fake accounts still represented 3 to 4 percent of its monthly active users during that time. In this case, 86% was flagged by its technology.