Meta replaces 90% of human roles with AI in this department

Meta is reportedly preparing to automate up to 90% of internal risk assessments using AI.

Business Today Desk
Jun 2, 2025 | Updated Jun 2, 2025 4:56 PM IST

Meta is preparing to overhaul how it evaluates the risks of new features and product updates across its platforms by automating up to 90% of internal risk assessments using artificial intelligence. This shift, revealed through internal documents obtained by NPR, marks a significant departure from the company’s decade-long reliance on human-led “privacy and integrity reviews.”

These reviews have historically played a crucial role in assessing whether updates might compromise user privacy, harm minors, or enable the spread of misinformation and toxic content. Under the new system, product teams will complete a questionnaire about their project and receive AI-generated feedback almost instantly. The system will either approve the update or outline requirements that must be met before launch, and the teams themselves will then verify that those requirements have been satisfied.

Meta argues the move will speed up development timelines and allow engineers to focus on innovation, claiming that human expertise will still be applied to "novel and complex issues" while only low-risk decisions are automated. The company says the change frees up its human reviewers to concentrate on content that is more likely to violate its policies.

However, internal documents and employee testimonies suggest that even sensitive areas, including AI safety, youth protection, and violent content moderation, could be handled by AI systems. Some insiders are deeply concerned that reducing human scrutiny may increase the risk of real-world harm.

“Insofar as this process functionally means more stuff launching faster, with less rigorous scrutiny and opposition, it means you’re creating higher risks,” said a former Meta executive. Another unnamed employee warned: “We provide the human perspective of how things can go wrong. That’s being lost.”

Meta maintains that it audits AI decisions and has made exceptions for its European operations, where stricter oversight is required under the EU’s Digital Services Act. An internal memo reportedly confirms that oversight of products and user data for EU users will continue to be led by Meta’s European headquarters in Ireland.

The shift towards automation is part of a broader AI transformation underway at Meta. CEO Mark Zuckerberg recently said that the company's AI agents will soon write most of its code, including for its Llama models, and can already debug code and outperform the average developer. Meta is also building specialised internal AI agents to accelerate research and product development.

This move mirrors a wider industry trend, with Google CEO Sundar Pichai claiming 30% of the company’s code is now AI-generated, and OpenAI’s Sam Altman suggesting that in some firms, the figure is already 50%.

Yet the timing of Meta's changes raises eyebrows. They come soon after the company ended its fact-checking programme and relaxed its hate speech rules. Critics say the company is dismantling long-standing guardrails in favour of fewer restrictions and faster updates, potentially at the cost of user safety.
