Meta Ray-Ban glasses: A new investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has raised serious privacy concerns about Meta’s Ray-Ban Meta Smart Glasses. According to the report, workers hired by contractors in Kenya are reviewing private videos recorded by the AI-powered glasses, including footage taken inside homes, bathrooms, and other intimate spaces.
The investigation claims that the glasses can capture extremely personal moments — from people undressing to private family activities — which may later be viewed by human reviewers tasked with training the company’s artificial intelligence systems.
Human reviewers watch recorded footage
The report says Meta sends certain recordings from the glasses to human data annotators who help train the AI system. When users activate the device using the “Hey Meta” voice command and ask it to analyse something, the captured video is uploaded to Meta’s servers.
From there, the footage is routed to Sama, a subcontractor operating in Nairobi, Kenya. Workers at the facility reportedly watch and label objects in the clips so the AI can learn to recognise different environments and activities.
Workers told Swedish journalists that they sometimes see highly sensitive content. This includes people in bedrooms, individuals using bathrooms, and even intimate situations where people are unaware they are being recorded.
One worker was quoted as saying, “We see everything – from living rooms to naked bodies.”
Millions of devices generating training data
The report claims that Meta sold around 7 million pairs of Ray-Ban smart glasses in 2025, creating a large stream of video data used for AI training.
Because the glasses can record short videos while being worn, they may capture private moments unintentionally. Workers reviewing the clips said they have also seen sensitive information such as bank card numbers or personal details accidentally appearing in the recordings.
Meta states in its terms of service that it may conduct “manual (human) review” of interactions to improve the user experience, which legally allows the company to send certain recordings for human analysis.
Concerns over failed anonymisation
Meta says the system automatically blurs faces in training data to protect privacy. However, workers interviewed in the investigation claimed that this feature does not always work properly.
According to them, faces that are supposed to be anonymised sometimes remain visible due to lighting conditions or technical limitations.
Social media reactions over privacy claims
The company promotes the glasses as being “designed with your privacy in mind.” One visible privacy feature is a small LED light on the frame that turns on when recording.
However, critics argue that many people may not notice the indicator. Social media users also mocked the situation, with one user commenting that artificial intelligence sometimes feels like “a guy in Nairobi watching me brush my teeth.”
Another comment read, “‘Designed with your privacy in mind’ has to be the most misleading sentence of the decade. This is just surveillance capitalism with a tiny LED for vibes.”
Meta has not publicly disputed the existence of human reviewers but said such processes are used only to improve the technology and user experience.