Meta removes over 6 lakh accounts, including 1.35 lakh on Instagram. Here's why
The enforcement action includes 135,000 accounts flagged for posting sexualised comments or soliciting explicit images, often aimed at children or child-run accounts.

- Jul 24, 2025
- Updated Jul 24, 2025, 2:23 PM IST
Meta has taken down more than 600,000 accounts across Instagram and Facebook as part of a sweeping crackdown on predatory and exploitative behaviour targeting minors. The enforcement action includes 135,000 accounts flagged for posting sexualised comments or soliciting explicit images, often aimed at children or child-run accounts.
An additional 500,000 accounts were removed for inappropriate interactions or for their links to accounts already flagged. Meta says it is sharing this data with other tech platforms through the Tech Coalition’s Lantern programme to help prevent cross-platform abuse.
The move comes amid growing scrutiny of how social media platforms protect young users, and as Meta faces mounting legal pressure over the mental health impact of its services on children and teens.
Stronger Safeguards for Teen Accounts
In parallel with the mass removals, Meta has rolled out a series of safety updates focused on shielding teenagers from unwanted contact. These include clearer context about who is messaging them, and new one-tap options to block and report suspicious users.
Teen users now receive “Safety Notice” alerts if someone they don’t follow sends a message, prompting them to think twice before engaging. According to internal data, these nudges are having a visible impact: in June alone, teens blocked over one million accounts and filed an equal number of reports after seeing such alerts.
A new “Location Notice” will also alert users when the person they are speaking to may be located in another country, since masking one’s true location is a tactic scammers often use to obscure their identity.
AI-Driven Age Detection and Nudity Filters
To catch underage users who sign up with false information, Meta is increasingly relying on artificial intelligence to detect age misrepresentation. If an account is flagged, it is automatically converted into a teen profile with stricter safety settings.
Teen accounts are private by default and limit who can contact them via direct messages. Meta’s nudity protection tool, which blurs suspected explicit images, is also now enabled by default for all teen users. The company says 99% of teens have kept it on, and nearly half of those shown a warning chose not to forward the flagged image.
Extending Protections to Child-Focused Accounts
The updated safety measures are also being extended to adult-managed accounts that feature children, such as those run by parents or talent representatives. These accounts will now have default restrictions on who can message them and will benefit from stronger filters to block offensive comments. Adults previously flagged for inappropriate behaviour will have limited ability to find or interact with these profiles.
A Wider Push Amid Rising Pressure
These efforts form part of Meta’s broader response to increased public and regulatory scrutiny. In 2024, the company made all new teen accounts private by default and has been building on those changes since.
At the same time, Meta has stepped up its fight against impersonation and spam. This year alone, it has taken down around 10 million fake accounts that were pretending to be popular content creators.
While it’s unclear whether these latest steps will fully address concerns around child safety online, they mark a more aggressive shift in Meta’s approach. As pressure from lawmakers builds, including renewed debate around the proposed Kids Online Safety Act in the US, the company appears to be signalling that it is finally taking the issue more seriously.