Key Takeaways
- OpenAI partners with Broadcom to build custom AI chips, aiming for 10 gigawatts of compute capacity.
- Deployment of the new accelerators is expected to begin late next year.
- The move reduces OpenAI’s reliance on Nvidia and AMD amid soaring AI infrastructure costs.
OpenAI is taking control of its AI future by designing its own chips in partnership with Broadcom, signaling that relying solely on giants like Nvidia and AMD is no longer sufficient to meet explosive demand.
The companies announced they will jointly develop and deploy racks of custom accelerators, with the first systems coming online late next year. This ambitious plan targets 10 gigawatts of compute capacity, highlighting the massive scale of current AI demand.
Driving Down Costs, Boosting Efficiency
Sam Altman, CEO of OpenAI, explained that this partnership aims to reduce computing costs while improving efficiency. “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models, all of that,” Altman said in a podcast with Broadcom executives. He described the collaboration as providing “a gigantic amount of computing infrastructure” for advanced intelligence development.
Beyond Chips: Complete System Design
The custom systems will extend beyond processors to include networking and memory solutions built on Broadcom’s Ethernet stack, specifically tailored for OpenAI’s workloads. This vertical integration comes at a critical time when data center costs are skyrocketing.
Industry estimates suggest a single 1-gigawatt data center can cost up to $50 billion, with approximately $35 billion typically spent on chips—mostly flowing to Nvidia.
18-Month Collaboration Goes Public
The Broadcom partnership formalizes an 18-month relationship during which the companies quietly developed chips together. This announcement marks the first public disclosure of the collaboration's scale.
This fits into OpenAI’s broader strategy: within just three weeks, the company has announced computing commitments totaling approximately 33 gigawatts through partnerships with Nvidia, Oracle, AMD, and now Broadcom. Currently, OpenAI operates on just over 2 gigawatts, powering everything from ChatGPT to its AI video creation service Sora.
AI Designing AI Chips
Greg Brockman, president of OpenAI, revealed that the company used its own AI models to design the chips more efficiently. “We’ve been able to get massive area reductions,” he said, noting that the models optimized components beyond human engineering capabilities.
Broadcom’s AI Boom
Broadcom has emerged as a major beneficiary of the AI revolution. Its custom accelerators, called XPUs, are already deployed by tech giants including Google, Meta, and ByteDance. The company’s stock has surged more than 50% this year, following a doubling in 2024, pushing its market capitalization past $1.5 trillion.
Controlling the AI Destiny
Hock Tan, CEO of Broadcom, emphasized that OpenAI is building “the most-advanced” frontier models. “You continue to need compute capacity—the best, latest compute capacity—as you progress in a road map towards a better and better frontier model and towards superintelligence,” he said. “If you do your own chips, you control your destiny.”
Altman confirmed this vision, stating the 10-gigawatt plan is merely the beginning. “Even though it’s vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price—the world will absorb it super fast and just find incredible new things to use it for,” he said.