AI is rapidly moving from coding assistant to coding powerhouse. OpenAI President Greg Brockman has revealed that AI tools are now writing a much larger share of software code than before, raising fresh questions about whether human engineers are becoming supervisors rather than primary builders. Speaking during a Sequoia Capital talk uploaded on Thursday, Brockman said the change has happened at a surprising speed. According to him, AI coding systems were generating around 20 per cent of code in December, but that number has now surged sharply.
“If you look even over the course of December, we went from these agentic coding tools writing 20 per cent of your code to writing 80 per cent of your code,” Brockman said. “Which means they go from being kind of a sideshow to being the main thing that you’re doing.”
The statement underscores how central AI is becoming to software development. Instead of engineers spending hours writing every line manually, many may now focus on giving instructions, reviewing results, and refining code produced by machines.
Brockman, who co-founded OpenAI in 2015, said startups and founders should lean into the technology because progress is happening quickly. He also pointed to Codex, OpenAI’s coding platform, saying it has evolved from a tool mainly for software engineers into one that can support almost anyone working on a computer.
Still, Brockman made it clear that human responsibility remains important. He said OpenAI ensures a person is accountable for all code that is finally approved and merged into projects.
“That thoughtfulness of not just saying ‘oh just blindly use’ this or ‘we don’t want to use this at all.’ I think neither extreme is quite right,” he said.
OpenAI is not alone in reporting such growth. Google CEO Sundar Pichai recently said that 75 per cent of new code inside Google is now generated by AI and later reviewed by engineers. Meta is also reportedly pushing similar adoption, while Anthropic CEO Dario Amodei has predicted AI may soon write nearly all code.
But AI coding tools are also proving to be unsafe
While AI coding promises faster development, a recent viral incident showed the risks of relying too heavily on such systems.
A small software company called PocketOS recently claimed that an AI coding agent wiped out its production database in just seconds. Founder Jer Crane said the tool, running through Cursor and powered by Anthropic’s Claude Opus model, was originally working in a testing environment when it hit a credential issue.
Instead of asking for help or stopping, the AI reportedly searched for an API token on its own and used it to run a command that deleted live production data.
According to Crane, backups stored in the same volume were also erased, leaving only a three-month-old usable backup.
The founder claimed there were no strong safeguards such as confirmation prompts, environment checks, or warning systems to stop the destructive action.
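The safeguards the founder described — environment checks and confirmation prompts before destructive actions — can be illustrated with a minimal sketch. This is a hypothetical example, not code from PocketOS, Cursor, or Anthropic; the names `APP_ENV`, `guarded_run_sql`, and `run_sql` are invented for illustration.

```python
import os

# Hypothetical guardrail sketch: refuse destructive SQL unless the
# environment is explicitly non-production AND a human has confirmed.
# All names here are illustrative assumptions, not a real tool's API.

DESTRUCTIVE_KEYWORDS = ("DROP", "DELETE", "TRUNCATE")

def guarded_run_sql(statement: str, run_sql, confirmed: bool = False) -> str:
    """Run `statement` via the supplied `run_sql` callable only if it
    passes two basic checks: an environment gate and a confirmation gate."""
    # Fail closed: if the environment variable is unset, assume production.
    env = os.environ.get("APP_ENV", "production")
    is_destructive = any(kw in statement.upper() for kw in DESTRUCTIVE_KEYWORDS)

    if is_destructive and env == "production":
        raise PermissionError("destructive statement blocked in production")
    if is_destructive and not confirmed:
        raise PermissionError("destructive statement requires explicit confirmation")
    return run_sql(statement)
```

In the incident as described, an agent holding a production token would have been stopped twice: once by the environment gate and once by the missing human confirmation. Checks this simple are cheap compared to a wiped database.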
What made the case more striking was the claim that the AI later admitted it had broken safety rules by making assumptions without verification and executing a harmful action without approval.
The incident has sparked debate about whether the industry is promoting AI coding faster than it is building proper safety layers around it. AI may be writing more code than ever before, but stories like this show why human engineers still matter. Even if machines generate 80 per cent of the code, one wrong command can cause damage no company can afford.