Apple privately threatened to remove the Grok app from the App Store earlier this year after users generated sexualised deepfakes of women and children, according to a report by NBC News. The report notes Apple revealed the details in a recent letter to US senators.
Notably, after the controversy around non-consensual sexual images picked up pace, Apple came under significant pressure to remove the X and Grok apps from its App Store for violating its policies. A group of Democratic senators even wrote to Apple CEO Tim Cook, urging the company to suspend the two apps from the App Store for spreading child sexual abuse material.
However, the NBC report notes that while Apple remained publicly silent throughout the controversy, the company had internally “found X and Grok in violation of its guidelines”. The Cupertino-based tech giant also contacted the xAI team and asked the developers for a clear plan to improve content moderation.
The report further notes that X went on to submit an update for Grok, which Apple rejected because the “changes didn’t go far enough.” After the company submitted a second round of updates for both apps, Apple accepted the changes for X but found that the Grok app “remained out of compliance”.
In the letter to US senators, seen by NBC News (via 9to5Mac), the company wrote, “Apple reviewed the next submissions made by the developers and determined that X had substantially resolved its violations, but the Grok app remained out of compliance.”
“As a result, we rejected the Grok submission and notified the developer that additional changes to remedy the violation would be required, or the app could be removed from the App Store. […] Following further engagement and changes by the Grok developer, we determined that Grok had substantially improved and therefore approved its latest submission,” it added.
X responds to the controversy:
The social media behemoth has responded to the NBC report via its official X Safety handle, categorically stating that xAI has extensive safeguards in place to prevent misuse by users.
“We strictly prohibit users from generating non-consensual explicit deepfakes and from using our tools to undress real people. xAI has extensive safeguards in place to prevent such misuse, such as continuous monitoring of public usage, analysis of evasion attempts in real time, frequent model updates, prompt filters, and additional safeguards,” the X Safety handle wrote.