
Facebook removes 1.5 billion fake accounts, child nudity posts and more

The social media giant revealed in a blog post that it had removed graphic content, primarily comprising nudity and hate speech, from Instagram and Facebook.

Ankita Chakravarti | August 12, 2020 | Updated 14:36 IST

Highlights

  • Facebook on Tuesday shared a report on how content on its platforms was moderated in the second quarter of 2020.
  • Facebook revealed in a blog post that it had removed graphic content from Instagram and Facebook.
  • Facebook removed more than 3.3 million pieces of objectionable content on Instagram in the second quarter, up from 808,900 in the first quarter of the year.

Facebook on Tuesday shared a report on how content on its platforms was moderated in the second quarter of 2020. The social media giant revealed in a blog post that it had removed graphic content, primarily comprising nudity and hate speech, from Instagram and Facebook.

Facebook also said in the blog that because most of its content reviewers were sent home during the coronavirus pandemic, it had to rely more heavily on technology to automatically detect and remove objectionable content from its platforms. Sharing exact figures, Facebook said it took action on 22.5 million pieces of hate speech content in the second quarter, up from 9.6 million in Q1. Similarly, on Instagram, Facebook removed more than 3.3 million pieces of objectionable content in the second quarter, compared to 808,900 in the first quarter of the year.

"Our proactive detection rate for hate speech on Facebook increased 6 points from 89% to 95%. In turn, the amount of content we took action on increased from 9.6 million in Q1 to 22.5 million in Q2. This is because we expanded some of our automation technology in Spanish, Arabic and Indonesian and made improvements to our English detection technology in Q1. In Q2, improvements to our automation capabilities helped us take action on more content in English, Spanish and Burmese. On Instagram, our proactive detection rate for hate speech increased 39 points from 45% to 84% and the amount of content we took action on increased from 808,900 in Q1 2020 to 3.3 million in Q2. These increases were driven by expanding our proactive detection technologies in English and Spanish," Guy Rosen, VP Integrity, Facebook said in a blog.

The social media giant said that its technology for reviewing content is improving rapidly, but there are areas where the technology falls short and human content reviewers are still needed. Citing an example, Facebook said that reviewing content related to suicide, self-injury and child exploitation requires people, as technology can only find and remove content that is identical or nearly identical to material already flagged. The company revealed that, with fewer content reviewers available, it could not take as much action on graphic content as it otherwise would.
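As an illustration of what "identical or near-identical" matching can look like, here is a minimal sketch in Python. It uses a simple 64-bit average-hash fingerprint compared by Hamming distance; this is a generic technique chosen for the example, not Facebook's actual system, and the 5-bit threshold is an assumption.

```python
# Minimal sketch of near-duplicate detection via 64-bit fingerprints.
# Illustrates the general "identical or near-identical" matching idea
# described above; not Facebook's actual system.

def average_hash(pixels):
    """Fingerprint an 8x8 grayscale image (a list of 64 ints, 0-255):
    each bit is 1 if that pixel is brighter than the image's mean."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def is_near_duplicate(hash_a, hash_b, threshold=5):
    """Flag two items as near-identical if their fingerprints differ
    in at most `threshold` bits (assumed cutoff for this example)."""
    return hamming(hash_a, hash_b) <= threshold

# Usage: a banned image's fingerprint vs. a slightly altered re-upload.
original = [10] * 32 + [200] * 32          # toy 8x8 image
reupload = [10] * 31 + [12] + [200] * 32   # one pixel tweaked
print(is_near_duplicate(average_hash(original), average_hash(reupload)))  # True
```

Matching of this kind catches re-uploads of known bad content automatically, which is why genuinely new material in sensitive categories still needs human review.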

Facebook also said it was able to remove more terrorism-related content from its social media platforms. "On Facebook, the amount of content we took action on increased from 6.3 million in Q1 to 8.7 million in Q2. And thanks to both improvements in our technology and the return of some content reviewers, we saw increases in the amount of content we took action on connected to organized hate on Instagram and bullying and harassment on both Facebook and Instagram," the report revealed.

The California-based company plans to add more ways to review content on its platforms and is improving its content review technology while bringing more reviewers back online.
