Key Takeaways
- Over 800 prominent figures demand a ban on superintelligent AI development.
- The call seeks a halt until safety is proven and public consent is secured.
- Signatories span royalty, tech pioneers, scientists, and political figures.
An unprecedented coalition of global figures is demanding an immediate prohibition on developing superintelligent AI, citing potential existential threats to humanity. The statement, signed by over 800 individuals including Prince Harry, Steve Bannon, and Nobel scientists, calls for a complete pause until robust safety measures and public approval are in place.
What the Statement Proposes
The declaration advocates for “a prohibition on the development of superintelligence” until two key conditions are met: “broad scientific consensus that it will be done safely and controllably” and “strong public buy-in.” This comes amid massive AI investments by tech giants like OpenAI, Google, and Meta, transforming entire industries.
Who Signed the Appeal
The diverse signatory list includes AI pioneer Geoffrey Hinton, former Joint Chiefs Chairman Mike Mullen, musician will.i.am, Apple co-founder Steve Wozniak, and Virgin’s Richard Branson. The coalition represents one of the broadest cross-society appeals for AI regulation to date.
The Organization Behind the Movement
The Future of Life Institute, which previously focused on nuclear and biotech risks, organized the statement. Executive Director Anthony Aguirre expressed concern that AI development is outpacing public understanding. “We’ve had this path chosen for us by AI companies,” he said, questioning whether society truly wants AI systems that replace humans.
“This is not a niche issue of some nerds in Silicon Valley. This is an issue for all of humanity,” Aguirre emphasized.
Tech Leaders’ Stance on Superintelligence
Notably absent were signatures from the major tech CEOs actively pursuing superintelligence. Mark Zuckerberg has declared it “now in sight,” while Elon Musk described its advent as “happening in real-time.” Sam Altman has predicted superintelligence could arrive by 2030.
Public Opinion and Political Response
Americans remain divided on AI’s impact, with 44% expecting it to improve their lives and 42% anticipating worse outcomes. The statement aims to spark global political dialogue, with Aguirre suggesting eventual international treaties similar to those governing other dangerous technologies.
The initiative emerges amid growing tensions between AI developers and safety advocates, highlighted by recent legal actions between OpenAI and the Future of Life Institute.