Key Takeaways
- Microsoft gains access to OpenAI’s chip designs and hardware research until 2030
- The partnership extension includes using OpenAI models through 2032
- New Fairwater datacentres form an AI superfactory network with advanced NVIDIA systems
- Microsoft aims to own the complete AI stack from silicon to software
Microsoft is significantly advancing its AI hardware capabilities by integrating OpenAI’s custom chip designs into its semiconductor strategy. CEO Satya Nadella confirmed the move, which deepens the partnership between the two companies down to the silicon layer of AI computing.
Extended Partnership Through 2030
Speaking on a podcast, Nadella revealed that Microsoft will integrate OpenAI’s system-level hardware innovations directly into its in-house chip development. “We now have access to OpenAI’s chip and hardware research through 2030,” he stated. The revised agreement also ensures Microsoft continues using OpenAI’s models through 2032.
OpenAI has been co-developing specialized AI processors and networking hardware with Broadcom, indicating its expansion beyond software. Microsoft plans to industrialize these chip designs for large-scale deployment before extending them under its own intellectual property.
Strengthened Strategic Alliance
This expanded cooperation creates a mutually beneficial relationship where Microsoft gains cutting-edge hardware tailored to OpenAI’s model-training requirements, while OpenAI leverages Microsoft’s global infrastructure for scaling. Nadella described the alignment as “strategic,” noting it will accelerate Microsoft’s semiconductor ambitions.
Fairwater Datacentre Network
Microsoft’s Fairwater datacentres represent a new class of AI infrastructure, functioning as interconnected nodes in a massive AI superfactory. The Atlanta facility already hosts one such datacentre, featuring an advanced chip architecture that Microsoft says delivers the highest throughput per rack available today.
The site incorporates NVIDIA GB200 NVL72 rack-scale systems capable of scaling to hundreds of thousands of Blackwell GPUs. Notably, it uses advanced liquid cooling that requires almost no water, in sharp contrast with traditional datacentres, which consume millions of litres.
Smarter Systems Approach
According to Scott Guthrie, Microsoft’s executive vice president of Cloud + AI, “Leading in AI isn’t just about adding more GPUs, it’s about building the infrastructure that makes them work together as one system.”
He emphasized that Fairwater reflects years of end-to-end engineering focused on real-world performance rather than theoretical capacity. This integrated approach differentiates Microsoft from rivals Google and Amazon, which often keep their custom AI chips siloed within proprietary stacks.
Building AI’s Foundation
Microsoft’s journey in AI infrastructure dates back to its first supercomputer built with OpenAI in 2019. Fairwater represents the culmination of years of refinement across every layer of AI infrastructure.
With OpenAI’s hardware blueprints now accessible, Microsoft is positioned to accelerate this evolution. The company’s latest datacentres are designed for both training frontier models and powering inference for millions of daily users.
As Nadella summarized, Microsoft’s goal extends beyond adding computational power to owning the complete innovation stack from silicon to supercomputer to software.