Key Takeaways
- Prince Harry joins diverse coalition calling for ban on AI superintelligence
- Future of Life Institute statement demands safety consensus and public approval
- Tech giants Google, Meta, and OpenAI developing systems that could outperform humans
- Warnings include economic displacement, loss of freedom, and potential human extinction
Prince Harry has formed an unlikely alliance with evangelical Christian leaders, conservative commentators, musicians, and tech experts in a bold plea to halt the development of super-powerful AI systems. The diverse coalition warns that artificial intelligence being developed by companies like Google, Meta, and OpenAI could threaten humanity’s future if left unchecked.
What the Statement Demands
Organized by the Future of Life Institute, the statement calls for a prohibition on superintelligence development until two conditions are met: first, broad scientific consensus that it can be developed safely and controllably; second, strong public buy-in.
Notable signatories include Donald Trump’s former strategist Steve Bannon, musicians will.i.am and Kate Bush, actor Stephen Fry, and billionaire Richard Branson. The group represents an unusually broad cross-section of political and cultural figures united on AI safety.
The Grave Concerns About Superintelligence
The Future of Life Institute stated: “Many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks.”
The statement warns of risks ranging from “human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
Prince Harry added his personal perspective: “The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer.”
Previous Warnings About AI Dangers
This isn’t the first time prominent figures have sounded alarms about artificial intelligence. In March 2023, tech leaders including Elon Musk and Apple co-founder Steve Wozniak signed an open letter, also organized by the Future of Life Institute, warning of “out-of-control” AI systems.
Another high-profile statement emerged in May 2023 from the Center for AI Safety, signed by Google DeepMind CEO Demis Hassabis and OpenAI CEO Sam Altman. It declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Mainstream Criticism of AI Development
Despite these warnings, AI development has not paused. The latest statement’s diverse signatory list is intended to carry its message beyond the tech research community.
Max Tegmark, president of the Future of Life Institute and MIT professor, observed: “In the past, it’s mostly been the nerds versus the nerds. I feel what we’re really seeing here is how the criticism has gone very mainstream.”