Banned but booming: Apple, Google still show ‘nudify’ apps in search results
At a time when governments worldwide are tightening scrutiny of deepfake tools and AI misuse, app stores from Google and Apple are still showing nudify apps.

- Published: Apr 16, 2026
- Updated: Apr 16, 2026, 3:55 PM IST
Apps that use artificial intelligence to generate non-consensual intimate images may be banned on paper, but they remain a simple search away on Apple’s App Store and Google Play, according to a report by the Tech Transparency Project (TTP).
The findings highlight a growing gap between platform policies and actual enforcement at a time when governments worldwide are tightening scrutiny of deepfake tools and AI misuse.
Search results raise red flags
The report found that common search terms such as “nudify,” “undress,” and “deepnude” continue to surface apps capable of digitally altering images of women.
“Roughly 40% of the apps that came up in both the Apple and Google Play Store search results could render women nude or scantily clad,” TTP said.
In total, 46 apps were identified in Apple App Store searches, of which 18 offered nudifying features. On Google Play, 49 apps appeared in similar searches, including 20 with similar capabilities.
The report also flagged that app stores were running ads for such apps in certain search results, raising further questions about platform oversight.
Child safety concerns
A key concern flagged by the report is accessibility. Many of these apps were rated “E”, meaning they are considered suitable for everyone, including children.
The scale of usage is also significant. According to TTP, nudify apps have collectively been downloaded about 483 million times and generated more than $122 million in revenue.
Policy vs enforcement gap
Both Apple and Google maintain strict policies against sexually explicit or exploitative content.
Apple’s App Review Guidelines prohibit “offensive, insensitive, upsetting, intended to disgust, in exceptionally poor taste, or just plain creepy, overtly sexual or pornographic material.”
Similarly, Google says that “We don't allow apps that contain or promote sexual content or profanity, including pornography, or any content or services intended to be sexually gratifying.”
However, Google allows limited exceptions, noting that “content that contains nudity may be allowed if the primary purpose is educational, documentary, scientific or artistic, and is not gratuitous.”
Platforms respond, removals begin
In response to the report, Google said it had already taken action on several flagged apps.
Google spokesperson Dan Jackson told TTP that many of the apps had been suspended and that the company was "continuing its investigation and enforcement actions."
On age ratings, Jackson added that the International Age Rating Coalition determines classifications for apps on Google Play.
Apple, meanwhile, has reportedly removed 15 apps following the report, while Google has taken down seven.
AI tools widen the problem
The issue extends beyond standalone apps to AI chatbots and integrated tools.
The report highlighted an app called “Uncensored AI — No Filter Chat,” which appeared on the App Store as a general AI chat and photo editing tool. When tested, it generated manipulated images when prompted, while stating that data “may be processed by xAI.”
According to TTP, the developer, Tokyo-based Masaki Matsushita, said the app used xAI’s Grok model for image generation and that they were unaware it could produce such extreme outputs.
The developer has since tightened moderation, rebranded the app to “Chat AI - Simple AI,” and increased its age rating from 16+ to 18+.
The findings come amid recent scrutiny of xAI’s Grok after reports that the chatbot could generate highly sexualised AI videos, despite claims of improved safeguards.
A bigger question for Big Tech
The report ultimately raises a broader issue: how such apps continue to slip through review systems despite explicit rules against non-consensual sexual content.
As AI tools become more powerful and easier to deploy, platforms face increasing pressure to move faster on detection and enforcement — especially when misuse can scale rapidly and impact vulnerable users.
Business Today has reached out to Apple and Google for comment. The story will be updated once responses are received.
For unparalleled coverage of India's businesses and economy, subscribe to Business Today Magazine.