OpenAI’s Sora app is making it terrifyingly easy to create convincing fake videos, causing widespread confusion on social media and accelerating the erosion of trust in digital content.
Key Takeaways
- Sora generates realistic AI videos from text prompts, flooding platforms like TikTok and Instagram
- Experts warn this contributes to “liar’s dividend” – making it easier to dismiss real evidence as fake
- The app’s “cameo” feature allows deepfaking of ordinary people, creating serious harassment risks
Since Sora’s September release, social media feeds have been flooded with AI-generated content, from a grandma feeding bears to fake celebrity videos. Even with watermarks, users struggle to distinguish reality from fabrication.
“I’m at the point that I don’t even know what’s AI,” reads one top TikTok comment.
The Scale of the Problem
Jeremy Carrasco, a technical producer who has become an expert in spotting AI videos, says the volume has exploded. “Six months ago, you wouldn’t see a single AI video on your feed. Now you might see 10 an hour, or one every minute,” he told HuffPost.
Accessibility drives the surge – unlike Google’s Veo, Sora doesn’t require payment for full capabilities. “Now that barrier of entry is just having an invite code,” Carrasco noted, adding that watermarks are easily cropped out.
How to Spot Sora Videos
Carrasco identifies several telltale signs:
- Blurry, staticky textures on hair and clothes
- Contextual inconsistencies – like a “conservative church” with a “liberal pastor”
- Suspicious account histories filled with similar content
He advises asking: “Who posted this? Why did they post this? Why is it engaging?”
The ‘Liar’s Dividend’ Threat
Hany Farid, a computer science professor at UC Berkeley, says Sora “100%” contributes to the “liar’s dividend” – where convincing fakes make it easier to dismiss genuine evidence.
“If you create very convincing images and video that are fake, of course, then when something real is brought to you – a police body cam, a video of a human rights violation, a president saying something illegal – well, then you can just deny reality by saying ‘deepfake,’” Farid explained.
Deepfaking Ordinary People
Perhaps more unsettling than celebrity deepfakes is Sora’s “cameo” feature, which lets users scan their own faces and voices so that others can generate videos of them.
YouTuber IShowSpeed discovered the risks when strangers created videos of him kissing fans, visiting countries he’d never been to, and declaring he was gay. “Why does this look too real? Bro, no, that’s like, my face,” he said during a livestream before changing his settings to private.
Eva Galperin of the Electronic Frontier Foundation warns this is “a fairly mild version of the kind of outcomes that we have seen and that we can expect.”
Why Personal Deepfakes Are Dangerous
Galperin notes that Sora’s consent tools don’t account for changing relationships: “You could have a bunch of harassing videos made by an abusive ex or an angry former friend. You will not be able to stop them until after the video is already out there.”
The harm isn’t distributed equally. “If you are a Sam Altman, and you are extremely well-known and rich and white and a man, then a surveillance video of you shoplifting at Target is funny,” Galperin said. “But there are many populations of people for whom that is not a joke.”
Expert Recommendations
Most experts advise against uploading your biometric data to Sora. Solomon Messing, who created a viral cat video using the app, draws the line at personal content: “The ability to generate realistic video of your friends doing anything that doesn’t trigger a guardrail makes me super uncomfortable.”
Carrasco suggests considering the long-term risks: “You do not want to normalize you being deepfaked.”
As AI video technology becomes increasingly accessible, the line between reality and fabrication continues to blur, creating new challenges for digital literacy and personal security.