Key Takeaways
- OpenAI signs $38 billion multi-year deal with Amazon Web Services
- AWS to provide massive GPU and CPU infrastructure for AI scaling
- Full deployment expected by end of 2026 with expansion through 2027
- Partnership builds on existing Amazon Bedrock collaboration
OpenAI, the creator of ChatGPT, has entered into a landmark $38 billion agreement with Amazon Web Services (AWS) to power its artificial intelligence systems. The multi-year partnership, effective immediately, will see OpenAI run and scale its core AI workloads on AWS’s cloud infrastructure.
Massive Compute Infrastructure
Under the agreement, AWS will provide hundreds of thousands of NVIDIA GPUs, with the ability to scale to tens of millions of CPUs over the next seven years. The full infrastructure capacity is scheduled for deployment before the end of 2026, with provisions for further expansion in 2027 and beyond.
The custom-built infrastructure features an architecture optimized for AI processing performance and efficiency. By clustering NVIDIA GB200 and GB300 GPUs on the same network via Amazon EC2 UltraServers, AWS will deliver low-latency performance across the interconnected systems. This configuration enables OpenAI to run diverse workloads and scale flexibly as its requirements evolve.
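For a concrete sense of how low-latency GPU clustering is typically requested on EC2, the sketch below uses boto3 to launch an instance into a cluster placement group, which co-locates instances on the same high-bandwidth network segment. It is illustrative only: the AMI ID and instance type are placeholders, and UltraServer-class GB200/GB300 capacity is generally provisioned through reservations rather than simple on-demand calls like this. Nothing here reflects OpenAI’s actual deployment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# A cluster placement group co-locates instances on the same
# low-latency network segment, the general EC2 mechanism for
# tightly coupled, interconnected GPU workloads.
ec2.create_placement_group(GroupName="gpu-cluster", Strategy="cluster")

# Placeholder values: swap in a real Deep Learning AMI and a GPU
# instance type available in your account and region.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI ID
    InstanceType="p5.48xlarge",        # example GPU instance type, not an UltraServer
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "gpu-cluster"},
)

print("Launched:", response["Instances"][0]["InstanceId"])
```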
Leadership Perspectives
“Scaling frontier AI requires massive, reliable compute,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”
Matt Garman, CEO of AWS, added, “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions.”
Building on Existing Collaboration
This major partnership extends previous cooperation between the companies. Earlier this year, OpenAI’s open-weight foundation models became available on Amazon Bedrock, putting them within reach of millions of AWS customers.
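As a rough illustration of what that Bedrock availability looks like from a developer’s side, the sketch below calls an OpenAI open-weight model through the Bedrock Converse API with boto3. The model ID shown is an assumption for illustration; the exact identifier and supported regions should be confirmed in the Bedrock model catalog.

```python
import boto3

# Bedrock Runtime client; use a region where the model is offered.
client = boto3.client("bedrock-runtime", region_name="us-west-2")

# Assumed model ID for OpenAI's open-weight model on Bedrock;
# verify the exact identifier in the Bedrock model catalog.
MODEL_ID = "openai.gpt-oss-120b-1:0"

response = client.converse(
    modelId=MODEL_ID,
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize the OpenAI-AWS partnership in one sentence."}],
        }
    ],
    inferenceConfig={"maxTokens": 200, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
```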