Facebook has already removed 583 million fake accounts this year

The company also enforced its guidelines on 21 million pieces of content featuring nudity and sexual activity, 1.9 million pieces of terrorist propaganda, 2.5 million examples of hate speech, and 3.5 million posts with violent content. Facebook's automated systems spotted nearly 100 percent of the spam and terrorist propaganda, and 99 percent of the fake accounts.

"For hate speech, our technology still doesn't work that well and so it needs to be checked by our review teams", said Guy Rosen, the company's vice-president of product management, in a statement posted online announcing the release of the report.

But how much violating content does Facebook actually miss? A Bloomberg report last week showed that while Facebook says it has become effective at taking down terrorist content from al-Qaeda and the Islamic State, recruitment posts for other US-designated terrorist groups are still easily found on the site.

Of course, the report's authors note, while such AI systems are promising, it will take years before they are effective at removing all objectionable content.

The response to extreme content on Facebook is particularly important given that the company has come under intense scrutiny amid reports of governments and private organizations using the platform for disinformation campaigns and propaganda.

On Tuesday, Facebook said it took action on some 2.5 million pieces of hateful content in the first three months of 2018, up from 1.6 million in the last three months of 2017.

Now, however, artificial intelligence technology does much of the work of finding that content.

Over the past year, the company has repeatedly touted its plans to expand its team of reviewers from 10,000 to 20,000. Those reviewers will work in larger groups based in "centres of excellence" to review the content on its platform, he explained. The company credited better detection, even as it said computer programs have trouble understanding the context and tone of language.

In the area of adult nudity and sexual activity, between 0.07% and 0.09% of views during the first quarter were of content that violated standards. While the company still asks people to report offensive content, it has increasingly used AI to weed out offensive posts before anyone sees them.

The numbers show that Facebook is still predominantly relying on people to catch hate speech, something CEO Mark Zuckerberg has acknowledged before, saying that it's much harder to build an AI system that can determine what hate speech is than to build one that can detect a nipple.

Spam: Facebook says it took action on 837 million pieces of spam content in Q1, up 15% from 727 million in Q4. Separately, the company estimates that between 3% and 4% of the active accounts on its service are fake.

"Whenever a war starts, there's a big spike in graphic violence", Schultz told reporters at Facebook's headquarters.

The report also covers fake accounts, which have drawn more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to manipulate Facebook users in the US and elsewhere.