
So, OpenAI needs Microsoft, Oracle, Google and Amazon to Power It?


Author

Simon Harrison

Analyst and Executive Partner

Simon Harrison is an accomplished analyst and technology strategist with over 30 years of experience spanning systems engineering, technical consulting, product innovation, and global senior leadership. He began his career as a UNIX systems engineer and consultant before advancing to senior roles, including SVP of Product Marketing and award-winning Chief Marketing Officer, driving growth for a multibillion-dollar company. A former Gartner analyst and Magic Quadrant author, Simon remains an active industry analyst and executive advisor, helping companies sharpen their strategy, messaging, and go-to-market performance. Today, as founder of Actionary, he delivers board-level insight on AI, customer engagement, and platform innovation, drawing on deep technical roots and a proven track record of helping companies achieve their goals at scale.

There’s been a string of announcements about OpenAI’s compute ambitions, culminating in the article out today detailing the multi-year strategic partnership between AWS and OpenAI.

Back in June ’24, Oracle announced its partnership with OpenAI in this article, stating, “Oracle, Microsoft, and OpenAI are partnering to extend the Microsoft Azure AI platform to Oracle Cloud Infrastructure (OCI) to provide additional capacity for OpenAI.” Basically a nicer way of saying what The Verge reported the next day: “OpenAI needs more compute than Microsoft alone can give if it wants to keep up with demand and prevent future ChatGPT outages.”

A year later, OpenAI did a deal with Google that, according to this Reuters article, “…marks OpenAI’s latest move to diversify its compute sources beyond its major supporter Microsoft.” Then on September 10th ’25, TechCrunch reported that “Oracle signed a deal with OpenAI for the company to purchase $300 billion worth of compute power over a span of five years,” and that, per the Wall Street Journal’s reporting, OpenAI would start purchasing this compute in 2027. TechCrunch also noted that OpenAI had signed a cloud deal with Google in the spring, citing Reuters, “…despite the fact that the two companies are racing against each other for AI supremacy.”

As if that wasn’t enough, we’re now learning in this November 3rd ’25 article that “OpenAI will immediately start utilizing AWS compute as part of this partnership, with all capacity targeted to be deployed before the end of 2026…” Coincidentally, that runs right up to the moment when Oracle says it will be ready.

What’s The Takeaway?

It looks like Sam Altman has a plan here. Oracle is the forward-looking commitment, AWS is the here-and-now capacity, Google is the swing supplier of chips and compute, remarkable given it’s a direct competitor, and Microsoft remains the legacy platform that keeps the lights on. Forget multi-cloud; this is all-the-clouds.

If anyone thought the AI wave had crested when everyone started talking about Agentic AI, imagine what happens with all of this compute. The models get bigger. The training runs longer. The outputs become more autonomous. It’s exciting and a bit scary at the same time. Because with great compute comes great responsibility.

I’m reminded of Kara Swisher’s keynote at the Zendesk AI Summit toward the end of last year, where she said, “26% of San Francisco’s energy is now being consumed based on AI.” If a single city was already feeling AI’s impact to that degree then, it’s no wonder Sam is chasing so much capacity. But with that, as Swisher also noted, “AI and environmental innovation go hand-in-hand: we need to rethink data center designs and accelerate investments in clean energy sources.”

The architects of the world’s data centers, the companies building the power grids, the designers of cooling systems and rack density just moved from backstage to starring role. As they say, during a gold rush, sell shovels. If you want to sell AI, become expert in the data centers needed to power it. Not quite an analogy that works, but I’m rolling with it anyway.

So here’s what it actually all means:

  • The era of “AI in the cloud” is evolving into “cloud as AI.” The cloud isn’t just a delivery channel anymore; it’s the compute backbone of intelligence itself.
  • The battle for compute is front-and-center. Ownership of infrastructure becomes strategic as much as ownership of models.
  • Energy and sustainability are existential. Scale up AI without scaling down consequences, and the bill comes due.
  • And finally: if you thought disruption was fast before, imagine when these hyperscale setups are humming full tilt. The rules change. The players shift. The ambition skyrockets.

The next frontier of AI isn’t just about smarter models. It’s about smarter infrastructure. It’s about the planet’s power grid meeting the algorithmic grid. And if you thought you were witnessing something big before, buckle up. Because the compute tsunami is coming. And it will look like all of the clouds, working together.