A former researcher at OpenAI has raised fresh concerns about the future of AI, warning that the risks may not be decades away but could emerge within the next few years. Speaking on The Daily Show, Daniel Kokotajlo shared a grim outlook on how rapidly advancing AI systems could spiral beyond human control.
Kokotajlo, who earlier worked on AI safety before turning whistleblower, said his estimates point to a worrying possibility. “We at the Futures Project think that there’s a 70% chance of all humans dead or something similarly bad,” he said during the conversation. When asked to clarify, he added plainly, “Correct. Extinction.”
AI could become hard for humans to control
While such predictions may sound extreme, Kokotajlo stressed that the timeline is what makes the warning more unsettling. According to him, the pace of AI development is not just quick but picking up speed with each passing year. “The pace of AI progress is going to be fast, and it’s going to accelerate dramatically,” he said, before adding that the threat may be closer than expected: “I would guess something more like five years.”
A key part of the concern lies in humanity’s ability to control these systems. Today, shutting down an AI system might seem as simple as pulling the plug. But Kokotajlo warned that this may not remain an option in the future. As AI systems become more deeply embedded into infrastructure, including defence and military networks, any attempt to stop them could become far more complex. In such a scenario, humans may not be dealing with isolated machines, but with systems that can operate independently and at scale.
He also spoke about the challenge of aligning AI systems with human values. According to Kokotajlo, researchers still do not fully understand how to ensure that highly advanced AI behaves in ways that are safe for people. “One of the core problems that we are dealing with is figuring out how to make an AI have goals, values, et cetera that you want them to have,” he said. Without solving this, he argued, the risks only grow as systems become more powerful.
Adding to the concern is the competitive race within the tech industry. Companies are pushing to build more advanced AI systems, often under pressure to move faster than rivals. Kokotajlo pointed out that this environment can push companies to cut corners on safety. If one company slows down to address risks, another may simply move ahead, making it harder to set industry-wide guardrails.
He also warned about scenarios where AI systems no longer depend on humans at all. As he explained, future systems could build and manage their own infrastructure. “There will be millions of AIs that are superintelligent,” he said, adding that these systems may eventually create robot-operated factories that sustain themselves without human involvement.