Meta AI glasses reportedly misusing user data to train AI systems
Meta’s Ray-Ban AI glasses are marketed as an always-ready assistant for capturing everyday moments, from travel clips to hands-free photos and videos. But media reports suggest that some of the footage recorded by users may be reviewed by human contractors as part of the company’s AI training process.
According to investigations by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, thousands of data annotators working for Meta subcontractor Sama in Nairobi, Kenya, are tasked with reviewing videos captured by the smart glasses. The footage is reportedly used to improve Meta’s AI systems and the experience of its wearable devices.
The investigation suggests that some of the clips reviewed by contractors include deeply private moments captured unintentionally by users.
“In some videos, you can see someone going to the toilet, or getting undressed,” one Sama employee told the publications. “I don’t think they know, because if they knew, they wouldn’t be recording.”
Another worker described the extent of what reviewers can see while labelling content for AI training.
“We see everything, from living rooms to naked bodies. Meta has that type of content in its databases,” the employee said. “People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.”
Business Today has reached out to Meta. The story will be updated once a response is received.
Human reviewers in the AI pipeline
According to the report, thousands of Sama employees in Nairobi manually label footage captured by Meta’s Ray-Ban smart glasses. These labelled clips are then used to train and improve the company’s AI systems.
Workers reportedly operate under strict confidentiality agreements that prohibit them from disclosing details about the content they review. Violating the agreement could result in losing their jobs.
Human review is not uncommon in the AI industry. Tech companies often rely on human annotators to label images, audio and video to help train machine-learning models.
Meta says the company has privacy protections in place before footage reaches human reviewers.
According to a BBC report, Meta says it uses an AI-powered blurring system to obscure identities before content is reviewed. “This data is first filtered to protect people's privacy,” the company said.
However, Svenska Dagbladet reported that the safeguards sometimes fail, allowing faces and private environments to remain visible to reviewers.
Privacy concerns mount
Meta’s own AI terms of service acknowledge that some interactions may be reviewed by humans.
“In some cases, Meta will review your interactions... and this review may be automated or manual (human),” the company states in its UK AI terms.
The reports have triggered concerns among regulators about how user data from wearable devices is collected and processed.
The UK’s data protection watchdog has reportedly contacted Meta seeking “urgent clarification.” Meanwhile, lawmakers in the European Union are questioning whether the company’s practices comply with GDPR, particularly around informed consent and the handling of sensitive personal data.
Another concern raised by the reports is the lack of transparency around how much footage is reviewed or how long the data is stored.
For users of AI-powered smart glasses, the reports highlight a broader question facing the tech industry: when devices constantly capture the world around us, who else might be watching?