
Meta Workers Say They’re Seeing Disturbing Things Through Users’ Smart Glasses

Meta’s Ray-Ban AI glasses have shot up in popularity in recent years, selling over seven million pairs in 2025, a considerable jump over the two million pairs sold in 2023 and 2024 combined.

While the smart glasses have scored big with consumers, allowing them to record first-person footage through an integrated camera and microphone array and to analyze the world around them with Meta’s AI model, the hardware has sparked a heated debate. Critics say that enabling facial recognition in the glasses’ software could have dangerous implications, especially given the militarization of law enforcement and Meta’s abysmal track record on user privacy.

And regardless of the wearer’s intentions, much of the footage recorded by the glasses is sent to offshore contractors for data labeling, a widely used preprocessing step in training new AI models in which human contractors review and annotate footage. It’s a laborious and highly resource-intensive process that tech companies often gloss over when discussing the prowess of their latest AI models.

The reality can be messy. Meta contractors based in Nairobi, Kenya, told Swedish newspapers Svenska Dagbladet and Göteborgs-Posten in a recently published joint investigation that they’re being told to review highly sensitive and intimate data.

“In some videos you can see someone going to the toilet, or getting undressed,” one contractor for a company called Sama said. “I don’t think they know, because if they knew they wouldn’t be recording.”

“I saw a video where a man puts the glasses on the bedside table and leaves the room,” one data annotator told the newspapers. “Shortly afterwards his wife comes in and changes her clothes.”

Other footage included imagery of people’s bank cards, users watching porn, or even filming entire “sex scenes.”

An employee added that they felt forced to watch and annotate or else risk losing their job.

“You understand that it is someone’s private life you are looking at, but at the same time you are just expected to carry out the work,” the employee said. “You are not supposed to question it. If you start asking questions, you are gone.”

Buried in Meta’s AI terms of use is a clause reserving the company’s right to “review your interactions with AIs, including the content of your conversations with or messages to AIs, and this review can be automated or manual (human).”

The document also warned that users shouldn’t share information that “you don’t want the AIs to use and retain, such as information about sensitive topics.”

But given the kind of material data annotators are being asked to review, many users don’t appear to have heeded that advice.

Worst of all, owners of Meta’s AI glasses simply don’t have the option of using the AI features without agreeing to share their data with Meta’s remote servers. And once the data is sent, it’s often already too late.

“Once the material has been fed into the models, the user in practice loses control over how it is used,” Kleanthi Sardeli, a data protection lawyer at the non-profit None Of Your Business, told Svenska Dagbladet and Göteborgs-Posten.

After two months without a reply, a Meta spokesperson referred the two Swedish newspapers to the company’s terms of use and privacy policy.

“When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy,” the spokesperson said, in a terse statement.

It’s not just Meta that relies on offshore data annotators in countries like Kenya, Colombia, and India to train its AI models. As Agence France-Presse reported last year, workers have had to review often gruesome content, including crime scene images and even dead bodies.

The trend is reminiscent of social media content moderation, a practice that has relied on exploitative labor in the developing world for many years now.

But with the advent of AI and wearable tech that can easily be used to record high-resolution footage simply by tapping a capacitive button next to your temple, the hidden human cost of data labeling has taken on a whole new meaning.

It’s a reality Meta would much prefer to bury in lengthy terms of service that likely only a handful of users will ever take the time to read.

“You think that if they knew about the extent of the data collection, no one would dare to use the glasses,” one annotator told the newspapers.

More on Meta: Meta’s Top AI Scientist Is Quitting as Zuckerberg’s Spending Spree Sputters

The post Meta Workers Say They’re Seeing Disturbing Things Through Users’ Smart Glasses appeared first on Futurism.
