- OpenAI has partnered with Google Cloud to boost computing power for ChatGPT amid rising infrastructure demands.
- The move marks a shift to a multi-cloud strategy, reducing dependence on Microsoft Azure and enhancing global scalability.
OpenAI has entered into a major cloud partnership with Google Cloud to meet the rising computational demands of its AI models, including ChatGPT. This move, finalized in May 2025, reflects OpenAI’s ongoing strategy to diversify its cloud infrastructure and avoid overreliance on a single provider.
Historically, OpenAI has leaned heavily on Microsoft Azure, thanks to Microsoft’s multi-billion-dollar investment and deep integration with OpenAI’s services. However, with the explosive growth of generative AI and rising demand for high-performance GPUs, OpenAI has been aggressively expanding its cloud partnerships. The addition of Google Cloud moves the company to a “multi-cloud” model that also involves Oracle and CoreWeave, the latter of which recently secured a roughly $12 billion agreement with OpenAI.
By tapping into Google’s global data center network—spanning the U.S., Europe, and Asia—OpenAI gains greater flexibility to manage the heavy compute workloads needed for training and running its large language models. Google, for its part, strengthens its cloud business by onboarding one of the world’s leading AI developers as a client, which not only enhances its credibility but also diversifies its cloud clientele beyond traditional enterprise workloads.
This deal marks a significant step in the ongoing race among tech giants to dominate cloud-based AI infrastructure. OpenAI’s multi-cloud strategy gives its services greater resilience, scalability, and availability across regions and use cases. It also allows the company to respond more quickly to surges in demand for ChatGPT and its API-based offerings, which serve millions of users and enterprise clients daily.
The partnership underscores a broader shift in the tech industry, where high-performance computing for AI has become a core battleground. For OpenAI, spreading workloads across multiple providers could mitigate supply risks, lower costs, and boost its capacity to innovate and iterate at speed.