Facebook released its Community Standards Enforcement Preliminary Report on Tuesday, providing a look at the social network's methods for tracking content that violates its standards, how it responds to those violations, and how much content the company has recently removed.
Facebook came under intense scrutiny earlier this year over its use of private data and the impact of unregulated content on its community of 2.2 billion monthly users, with governments around the world questioning the Menlo Park, Calif.-based company's policies.
The social media company targeted accounts that produced violating content in several areas, including graphic violence, terrorist propaganda, and hate speech.
Most of the 583 million fake accounts Facebook disabled in the first quarter were shut down "within minutes of registration." The company estimates that 3 to 4 percent of active Facebook accounts on the site between October 2017 and March 2018 were fake.
Facebook's vice president of product management, Guy Rosen, said in a blog post Tuesday, May 15, accompanying the newly released report that nearly all of the 837 million spam posts Facebook took down in the first quarter of 2018 were found by Facebook before anyone had reported them.
Rosen conceded, however, that Facebook's technology still has work to do on hate speech: of the 2.5 million pieces of hate speech removed in the quarter, only 38 percent were flagged automatically.
Facing regulatory pressure from Congress over its role in the Cambridge Analytica data scandal, Facebook says it will double its safety and security team to 20,000 this year. The company also noted that some violations remain relatively rare: nudity accounted for only seven to nine of every 10,000 content views.
Facebook acknowledged it has work to do when it comes to properly removing hate speech.
Rosen said technology like artificial intelligence is still years away from effectively detecting most bad content, because context is so important.
In the majority of categories, Facebook's automated systems detected and flagged violating content before users had a chance to report it.
Context is where the judgment calls arise. If a Facebook user posts about their experience being called a slur in public, repeating the word to make a greater impact, does the post constitute hate speech?

"We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too," Rosen wrote.

The report provided detailed data on just how much objectionable content CEO Mark Zuckerberg's social network has had to moderate in recent months.