Is Facebook Doing Enough to Stop Bad Content? You Be the Judge

The company estimates that between 0.22 percent and 0.27 percent of content views in the first quarter of 2018 were of material that violated Facebook's standards for graphic violence.

Facebook took action on 1.9 million pieces of content containing terrorist propaganda.

The company said most of the increase was the result of improvements in detection technology, and that users were also posting more images of violence from places like war-torn Syria.

The graphic-violence figure was up from 0.16-0.19 percent in the previous three months.

The prevalence of graphic violence was higher, with an estimated 22 to 27 of every 10,000 content views containing such material, an increase from the previous quarter that suggests more Facebook users are sharing violent content on the platform, the company said.
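For readers tallying these figures, the percentage and the per-10,000-views numbers are the same prevalence metric written two ways; the short sketch below (the function names and printed values are our own illustration, not anything from Facebook's report) shows the conversion for the 0.22-0.27 percent graphic-violence range:

    # Illustrative only: convert a prevalence estimate given as a percentage of
    # content views into the equivalent "violating views per 10,000 views".
    def percent_to_per_10k(percent: float) -> float:
        # e.g. 0.22% of views -> 22 violating views per 10,000 views
        return percent / 100 * 10_000

    def per_10k_to_percent(per_10k: float) -> float:
        # e.g. 27 violating views per 10,000 views -> 0.27% of views
        return per_10k / 10_000 * 100

    # The Q1 2018 graphic-violence range quoted above: 0.22%-0.27% of views.
    low_pct, high_pct = 0.22, 0.27
    print(round(percent_to_per_10k(low_pct)), round(percent_to_per_10k(high_pct)))  # 22 27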

Nearly 86 percent of that graphic content was found by the firm's technology before it was reported by users.

Facebook pulled or slapped warnings on almost 30 million posts containing sexual or violent images, terrorist propaganda or hate speech during the first quarter.

The first of what will be quarterly reports on standards enforcement should be as notable to investors as the company's quarterly earnings reports.

The prevalence of adult nudity and sexual activity was also slightly higher, up from 0.06-0.08 percent of views during the last three months of 2017.

The report published today spans from October 2017 to March 2018, with a breakdown comparing how much content the company took action on in various categories in Q4 2017 and Q1 2018. During Q1, the social network flagged 96 percent of all nudity before users reported it.

Facebook says the number of views on the platform of terrorist propaganda from organisations including ISIS, al-Qaeda and their affiliates is extremely low.

Though Facebook extolled its forcefulness in removing content, the average user may not notice any change. The report also doesn't cover how much inappropriate content Facebook missed.

Improved detection technology led to old as well as newly posted content of this type being taken down.

However, it declined to say how many minors - legal users who are between the ages of 13 and 17 - saw the offending content.

"These kinds of metrics can help our teams understand what's actually happening to 2-plus billion people", he said.

Facebook removed 837 million spam posts, disabled 583 million fake accounts and removed 21 million pieces of porn or adult nudity that violated its community standards in the first quarter of 2018. Overall, the social giant estimated that around 3%-4% of active Facebook accounts on the site during Q1 were still fake.

These releases come in the wake of the Cambridge Analytica scandal, which has left the company battling to restore its reputation with users and developers - though employees have said the decision to release the Community Standards was not driven by recent events.

Facebook has said technology like artificial intelligence is still years from effectively detecting most bad content because context is so important. More generally, as the company explained the previous week, the technology needs large amounts of training data to recognise meaningful patterns of behavior, which it often lacks in less widely used languages or for cases that are not often reported.

Automated detection also means content in private groups, which might otherwise never be reported by members of the group, can be flagged and dealt with. Facebook says it found and flagged almost 100% of spam content in both Q1 and Q4.