Elon Musk imagines a Terawatt of compute, or about 1.43 billion GPUs and 2.1x the power output of the US

TribeNews
7 Min Read

By now, we’re all pretty used to Elon Musk’s big dreams across his various companies. Tesla is transitioning the world to sustainable energy and building real-world AI in the form of humanoid robots and driverless cars, SpaceX is trying to make humanity multi-planetary (Mars), and Neuralink is trying to revolutionise healthcare with a generalised brain interface, to name a few.

The latest posts from Musk are on a whole other level: he says he’s been considering the fastest way to bring a terawatt of compute online.


To understand just how large that proposal is: global compute capacity today is likely in the range of 1 to 10 zettaFLOPS (10^21 to 10^22 FLOPS), dominated by data centers, cloud providers, and AI infrastructure in the U.S., China, and Europe.

A terawatt of compute, assuming 10^11 to 10^12 FLOPS/W, translates to 10^23 to 10^24 FLOPS (100 zettaFLOPS to 1 yottaFLOPS). This is 10 to 1000 times the estimated global compute capacity of 1 to 10 zettaFLOPS (10^21 to 10^22 FLOPS) in 2025.
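As a sanity check, here’s a minimal sketch of that arithmetic. The efficiency range of 10^11 to 10^12 FLOPS/W and the global capacity estimate are the assumptions quoted above, not measured values:

```python
# Rough arithmetic behind the figures above; all inputs are assumptions.
POWER_W = 1e12                      # 1 terawatt of compute power
FLOPS_PER_W = (1e11, 1e12)          # assumed hardware efficiency range
GLOBAL_FLOPS = (1e21, 1e22)         # estimated 2025 global capacity

lo = POWER_W * FLOPS_PER_W[0]       # 1e23 FLOPS (100 zettaFLOPS)
hi = POWER_W * FLOPS_PER_W[1]       # 1e24 FLOPS (1 yottaFLOPS)

# How many multiples of today's estimated global capacity that would be:
min_multiple = lo / GLOBAL_FLOPS[1]  # ~10x
max_multiple = hi / GLOBAL_FLOPS[0]  # ~1000x
```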


This would require a massive increase in power infrastructure: around 20x the current global data center power draw, which is itself only a percent or two of global electricity, making it a great theoretical exercise rather than a practical reality today.

Musk shared an estimate that suggests the power requirement would be the equivalent of all power produced in America today. With some help from Grok, it looks like that may be a little on the low side.


A terawatt of compute requires 1 TW of power, producing 10^23 to 10^24 FLOPS (100 zettaFLOPS to 1 yottaFLOPS). The U.S. generates ~4,200 TWh/year in 2025, equivalent to an average power of 0.48 TW.

Thus, 1 TW of compute power is around 2.1 times the entire U.S. average electrical power output and ~77% of its installed capacity (1.3 TW).
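The conversion from annual generation to average power is just division by the hours in a year; a quick sketch using the figures above:

```python
# U.S. figures as quoted above (2025 estimates).
US_TWH_PER_YEAR = 4200      # annual generation
HOURS_PER_YEAR = 8760
US_INSTALLED_TW = 1.3       # installed capacity

avg_power_tw = US_TWH_PER_YEAR / HOURS_PER_YEAR   # ~0.48 TW average output
multiple = 1.0 / avg_power_tw                     # ~2.1x U.S. average output
capacity_share = 1.0 / US_INSTALLED_TW            # ~77% of installed capacity
```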

Been thinking about the fastest way to bring a terawatt of compute online.


That is roughly equivalent to all electrical power produced in America today.

— gorklon rust (@elonmusk) May 12, 2025

Musk’s current pinned post on X is a reply to a post by Jesse Peltan, which appears to have started Musk down this line of thinking. In the reply, Musk suggests the world would need a LOT more solar to achieve this.


Then energy harnessed will increase perhaps a billionfold if we make it to Kardashev II, with space solar power, and another billionfold if we harness the energy of our galaxy.

The Kardashev Scale, introduced by Nikolai Kardashev in 1964, measures a civilization’s technological advancement by its energy consumption, with Type I harnessing planetary energy (4 × 10^12 W), Type II stellar energy (4 × 10^26 W), and Type III galactic energy (4 × 10^37 W).

The post envisions humanity progressing from Type I to Type III, increasing energy use by many orders of magnitude through solar and space-based power.
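For scale, here is the arithmetic implied by the wattages quoted above (a rough illustration using those figures, not Musk’s own calculation):

```python
# Kardashev type energy levels in watts, as quoted above.
KARDASHEV_W = {"I": 4e12, "II": 4e26, "III": 4e37}

# Jump from planetary to stellar scale: a factor of ~1e14.
type1_to_type2 = KARDASHEV_W["II"] / KARDASHEV_W["I"]
# Jump from stellar to galactic scale: a factor of ~1e11.
type2_to_type3 = KARDASHEV_W["III"] / KARDASHEV_W["II"]
```

Note that with these definitions the jumps are even larger than the "billionfold" steps in the post; published versions of the scale vary.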

As we progress along the Kardashev Scale, energy harnessed on Earth will increase a hundredfold and will mostly be solar aka fusion aka starlight.

Then energy harnessed will increase perhaps a billionfold if we make it to Kardashev II, with space solar power, and another… https://t.co/0cHovopB9l

— gorklon rust (@elonmusk) May 12, 2025

Musk provides another reference to help our tiny brains get around this ridiculously large figure, suggesting that the power output of around 10 Starships would be similar to the level required to deliver 1 terawatt of compute.

One thing we’re not considering here is what you’d do with that compute capacity. AI is certainly the hungriest of hungry beasts, and so far its improvement with additional compute, as described by scaling laws, shows no sign of hitting a wall. This ultimately means the sophistication and capability of AI is limited by your wallet (and your power supply).

Imagine a network of large-scale data centres strategically located across the globe, perhaps in areas with abundant renewable energy sources like solar or geothermal.

What would it cost?

Let’s for a minute entertain what achieving 1 TW of compute might cost.

The cost of operating 1 terawatt of compute (10^23 to 10^24 FLOPS) is approximately $7.3 trillion to $12.9 trillion per year, with a midpoint of $10.1 trillion/year. This includes:

Electricity: $700 billion-$1.14 trillion/year (midpoint $911 billion, assuming PUE 1.3, $0.08/kWh).

Capital (hardware, data centers, maintenance): $6.62-$11.74 trillion/year (midpoint $9.18 trillion, assuming mixed hardware and 4-year lifespan).

This cost is ~10% of global GDP, equivalent to 25-30x current global data center spending or 2-3x U.S. electricity consumption, making it economically and energetically impractical today.
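Those figures can be reproduced with a short back-of-envelope calculation. The electricity rate, PUE, hardware price, and 4-year lifespan are the assumptions carried over from above:

```python
# Electricity: 1 TW of IT load, with a PUE of 1.3, billed at $0.08/kWh.
IT_LOAD_KW = 1e9            # 1 TW expressed in kW
PUE = 1.3
PRICE_PER_KWH = 0.08
HOURS_PER_YEAR = 8760

electricity_per_year = IT_LOAD_KW * PUE * HOURS_PER_YEAR * PRICE_PER_KWH
# ~$911 billion/year

# Capital: ~1.43 billion H100-class GPUs at an assumed $25,000 each,
# amortised over a 4-year lifespan.
gpus = 1e12 / 700
capex_per_year = gpus * 25_000 / 4      # ~$8.9 trillion/year

total_per_year = electricity_per_year + capex_per_year  # ~$9.8 trillion/year
```

That lands close to the $10.1 trillion/year midpoint; the capital line above it also folds in data centre construction and maintenance, which this sketch omits.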

As we consider compute and what’s possible, we also need to consider what this actually looks like in terms of hardware.

Let’s assume the compute is GPU-based (e.g., NVIDIA H100 or equivalent in 2025). An H100 consumes ~700 W and costs ~$30,000 (2023 price; assume ~$25,000 in 2025 due to market scaling).

The number of GPUs you would need to create 1 TW of compute is around 1.43 billion (10^12 W ÷ 700 W). Given that the largest orders from Nvidia today are measured in the hundreds of thousands of units, this is more than 1,000x that amount. Again, fun to think about, but far from practical.
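The GPU count is simply the power budget divided by per-card draw; the "largest order" figure below is an assumption for illustration, not a reported deal size:

```python
GPU_POWER_W = 700            # H100-class card power draw
n_gpus = 1e12 / GPU_POWER_W  # ~1.43 billion GPUs to absorb 1 TW

LARGEST_ORDER = 300_000      # assumed size of today's biggest Nvidia orders
multiple = n_gpus / LARGEST_ORDER   # several thousand times larger
```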

It’s fun to dream sometimes.
