Broadcom has cuddled up with OpenAI as the ChatGPT outfit looks for ever more help building out the vast infrastructure it needs to deliver on its dreams of advanced intelligence – and possibly even a profit some day.
The CEOs of both companies revealed today that they have been jointly working on custom silicon, in a collaboration sized at 10 GW of custom AI accelerators, with deployments due to begin late next year.
In a statement, Broadcom and OpenAI said: “By designing its own chips and systems, OpenAI can embed what it’s learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence.”
The racks will be “scaled entirely with Ethernet and other connectivity solutions from Broadcom… with deployments across OpenAI’s facilities and partner datacenters.”
Sam Altman, co-founder and CEO of OpenAI, said: “Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.”
In a podcast accompanying the announcement and featuring both Broadcom CEO Hock Tan and Altman, the OpenAI chief said that 10 GW would “serve the needs of the world to use advanced intelligence.”
Altman said the agreement covered a full system, apparently geared towards inference. He added that Broadcom had turned out to be “incredible” at designing systems too, and that 10 GW was an astonishing capacity on top of what OpenAI is already building.
The GPUs of today were amazing, but with the combination of model, chip, and rack, “we will be able to wring out so much more intelligence per watt,” he continued.
Charlie Kawwas, president of the Semiconductor Solutions Group for Broadcom Inc, added: “The racks include Broadcom’s end-to-end portfolio of Ethernet, PCIe and optical connectivity solutions, reaffirming our AI infrastructure portfolio leadership.”
OpenAI president Greg Brockman said the company had been able to apply its own models to designing the chip, and that the models had come up with optimizations. Humans could have found these, he admitted, but doing it this way accelerated the process.
Brockman also envisaged a world where every human had their own accelerator working for them behind the scenes, and said partnering with Broadcom would bring this nirvana quicker.
“There’s 10 billion humans. We are nowhere near being able to build 10 billion chips, and so there’s a long way to go before we are able to saturate not just the demand, but what humanity really deserves.”
Altman described the AI buildout as the biggest joint industrial project in human history. He drew a comparison with the proportion of global GDP that went into the construction of the Great Wall. Though maybe that’s not the best comparison, as it was largely built by forced labor to keep the barbarians out, took centuries to deliver, and ultimately the barbarians found their way in, or around it, anyway.
Unlike other recent OpenAI deals, it seems its purchases of Broadcom kit won’t be linked to any other financial entanglements between the companies.
Earlier this month, AMD announced a 6 GW agreement to power OpenAI’s AI infrastructure across multiple generations of AMD Instinct GPUs. The contract was accompanied by a warrant for up to 160 million shares of AMD common stock, structured to pay out as specific staged targets are met.
In September, Nvidia announced a 10 GW deal with OpenAI, with an accompanying (up to) $100 billion investment by the chip maker.
The same month, OpenAI said it will pay Oracle $300 billion over five years to build out 5 GW of capacity. For scale, OpenAI’s annual recurring revenue (ARR) as of June was around $10 billion.
Spinning a web of interdependencies means multiple billion-dollar technology organizations have a vested interest in OpenAI succeeding – the genAI pioneer says it won’t be cash-flow positive for four more years, and expects to spend a lot more on datacenter infrastructure during those years.
Market watchers are nervous that deals like these indicate an AI bubble, as companies bandy around phrases such as gigawatts and tokens instead of boring old terms such as revenues or income.
Some have even drawn parallels with the dotcom era, though clearly there’s no comparison. Back then companies were bandying around words like eyeballs and stickiness. You don’t need ChatGPT to tell you that we’re clearly talking oranges and lemons here. ®