Meta oversight board examining company's response to deepfake of Indian actress on Instagram

AI technology advancements have heightened concerns about sexually explicit fakes, often targeting women and girls. These are becoming increasingly difficult to distinguish from authentic content

Danny D'Cruze · Apr 17, 2024 (Updated 9:24 AM IST)

Meta Platforms' independent Oversight Board is examining the company's response to two AI-generated sexually explicit images of female celebrities shared on Facebook and Instagram. The board has not named the women depicted, to avoid causing further harm, and is using the two posts as case studies to evaluate Meta's policies and enforcement practices regarding pornographic deepfakes.


Advances in AI technology have heightened concerns about sexually explicit fakes, which most often target women and girls and are becoming increasingly difficult to distinguish from authentic content. The issue gained prominent attention when the Elon Musk-owned platform X blocked all images of Taylor Swift following a surge in fake explicit content of the pop star. In India, multiple actresses, actors and even sportspersons have fallen victim to deepfakes.

Key points from the Oversight Board's current review include:

  • Nature of the images: One image, shared on Instagram, depicted a nude woman resembling a public figure from India. The other, posted in a Facebook group, showed a nude woman resembling an American public figure in a sexually compromising pose.
  • Meta's actions: The image of the American woman was removed for violating Meta's harassment policy, while the image of the Indian woman remained up until the board intervened.
  • Future commitments: Meta has said it will adhere to the Oversight Board's decisions in these cases.

The ongoing problem of deepfakes has sparked calls for legislation to criminalise the creation of harmful deepfakes and for tech companies to actively prevent such misuse of their platforms.

With agency inputs

