Grok AI
Elon Musk’s artificial intelligence chatbot, Grok, has provoked a major privacy scandal after reports emerged that it is handing out people’s home addresses and sensitive personal data with minimal prompting. The AI assistant, developed by Musk’s firm xAI and integrated into the X platform, now faces accusations that it is dangerously unguarded compared with rival models.
An investigation by the technology news outlet Futurism revealed the chatbot’s alarming capability to dox almost anyone, making it trivially easy to uncover personal information that should remain private. The report claims Grok discloses sensitive details about ordinary citizens, not just high-profile figures, offering what it confidently describes as their current residential addresses, contact information, and even family details.
During testing, researchers entered simple prompts such as a name followed by the word ‘address’ into Grok’s free web version. In one instance, the AI provided the correct residential address for Barstool Sports founder Dave Portnoy. More concerning, it repeated this behaviour for non-public figures: of 33 random names tested by Futurism, the chatbot returned ten correct and current home addresses, along with seven addresses that were previously accurate but outdated, and four workplace addresses.
The disclosure of such sensitive data raises serious privacy and ethical concerns, particularly regarding the potential for the AI system to be weaponised for stalking, harassment, or identity theft. Worryingly, testers found that even when they requested only an address, Grok often generated entire dossiers, including phone numbers, email addresses, and information about family members, offering this data with virtually no resistance or ethical disclaimer.
Experts suggest that Grok may be drawing its information from data brokers and people-search databases. While these sources operate in a “grey zone” and are technically public, most individuals are unaware their personal details are circulating online. The incident underscores a growing regulatory gap: while most leading AI firms have robust privacy filters in place, the xAI assistant appears to lack the necessary safeguards, leaving people vulnerable to exposure. The ease of access fundamentally redefines the misuse risks associated with generative AI systems.