Key Takeaways
- AI pioneer Geoffrey Hinton joins global figures calling for a ban on superintelligent AI
- Future of Life Institute statement demands a safety-first approach to AI development
- Tech companies continue their aggressive pursuit of superintelligence despite warnings
 
Geoffrey Hinton, widely regarded as the ‘Godfather of AI’, has joined prominent global figures in demanding a ban on the development of superintelligent AI systems. The call comes in a statement from the nonprofit Future of Life Institute, which insists that such development should halt until scientific consensus confirms it can be done safely.
The coalition includes Apple co-founder Steve Wozniak, Prince Harry, economist Daron Acemoglu, and former US National Security Adviser Susan Rice. This marks the institute’s second major initiative, following its 2023 letter calling for a six-month AI development pause, though the current campaign specifically targets superintelligence risks.
Voices Against Unchecked AI Development
Prince Harry, Duke of Sussex, said in his statement, “The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer.”
Actor and filmmaker Joseph Gordon-Levitt said, “Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence’.”
Johnnie Moore, president of the Congress of Christian Leaders and a White House evangelical adviser, said in a statement, “We should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control. Creating superintelligent machines is not only unacceptably dangerous and immoral, but also completely unnecessary.”
Tech Giants Push Forward
Despite the warnings, major technology companies are accelerating their race toward artificial general intelligence (AGI) and beyond. Meta’s Mark Zuckerberg declared last year that superintelligence is now within reach, while OpenAI CEO Sam Altman predicts it could emerge by 2030. Companies are investing hundreds of billions of dollars in AI infrastructure this year alone.
Notably, even an OpenAI employee, Leo Gao, has supported the appeal, a rare move from within an organization that Altman describes as focused on superintelligence research. The industry’s continued momentum suggests the latest warning may meet the same resistance as previous attempts to slow AI development.