You can hardly have missed the latest Silicon Valley obsession: “hey guys, these data centres are getting awfully big and energy hungry down here on Earth, what if we built them in space instead?” Everyone who’s anyone is talking about it. Sundar Pichai has Google Research burning cycles looking into it. The World Economic Forum recently published a piece on it too, tucking all of the complexity at the end: not dishonest, but perhaps putting hype before focus.

More worryingly still, it seems like the merger of SpaceX and xAI is predicated on the idea that this grand plan is a goer.

Why this matters to me

Aside from being a physicist by training and a data and AI person by profession, I’m also a hobby rocket nerd. Even my limited experience of flying models has taught me that doing stuff on the ground is a lot simpler than getting it to work up high. Simple things like running onboard electronics or deploying parachutes become complex 2 km up, let alone 35,000 km above the Earth’s surface. The idea that you would voluntarily do stuff in space seems nuts to me.

The dream

The main argument goes as follows:

  • IF we can make sure that optical links to/from satellites can transmit data at sufficient speed…
  • AND we can use unshielded GPUs/TPUs in space and have them survive long term…
  • AND we can operate sufficiently large solar arrays that are robust to junk strikes…
  • AND the systems to provide radiative cooling work well in space…
  • AND the payload hardware can be made light enough…
  • AND we can cope with orders of magnitude more satellites in orbit…
  • THEN we can solve our energy problems on Earth.

There are a lot of physics and engineering problems to solve. Let’s get concrete about a 1 MW data centre in space, just 0.01% of the total capacity that OpenAI and NVIDIA committed to build together, and see what the numbers say.

GPU/TPU lifetime

According to a Google architect, chip lifetimes can be surprisingly short in high-load environments. Despite companies amortising costs over longer periods, realistic lifetimes are in the 1-3 year range:

  • Year 1: ~5-10% cumulative failures (early/infant mortality plus steady-state)
  • Year 2: ~20-35% cumulative failures (wear-out phase begins)
  • Year 3: ~50-70% cumulative failures (deep into wear-out)

These numbers are probably optimistic for space. Radiation-induced single-event upsets and total ionising dose degradation would accelerate failures beyond terrestrial rates. Without radiation-hardened GPUs (which barely exist for modern AI accelerators), we’d be toward the worse end.

A data centre doesn’t gracefully degrade — it has a minimum viable capacity below which workloads can’t be efficiently scheduled. For GPU-heavy AI/HPC workloads, this threshold is typically around 70-80% of original capacity. So functional uselessness likely arrives at roughly 18-24 months.
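The arithmetic above can be sketched in a few lines. The failure fractions below are the worse end of each yearly range quoted; the 0.75 viability threshold is an assumed midpoint of the 70-80% band:

```python
# Sketch: when does cumulative GPU failure push the cluster below its
# minimum viable capacity? Failure fractions are the worse end of the
# yearly ranges above; 0.75 is an assumed midpoint of the 70-80% band.
cumulative_failure = {1: 0.10, 2: 0.35, 3: 0.70}
threshold = 0.75

for year, failed in cumulative_failure.items():
    surviving = 1.0 - failed
    status = "viable" if surviving >= threshold else "below threshold"
    print(f"Year {year}: {surviving:.0%} surviving, {status}")
```

At the worse end of each range the cluster crosses the threshold somewhere in year two, consistent with the 18-24 month estimate.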

Cooling

This is the key point. On Earth, we dump excess heat into the surrounding air and water by convection. “Space is very cold, so it will be easier up there”? Sadly, not quite. There’s essentially no atmosphere to convect into, which means we’re stuck with the Stefan-Boltzmann law and radiative cooling only.

GPUs convert essentially all of the electricity they draw into heat, so with power-conversion and ancillary losses we need to radiate away a total thermal load of ~1.05-1.1 MW.

Running the cooling surfaces at 50°C gives us about 556 W/m² (assuming a radiator emissivity of ~0.9), which means we need roughly 1,890 m² of radiator area. And coolant loop failure in space is catastrophic: if a pump fails or a micrometeorite punctures a coolant line, GPUs hit thermal shutdown within minutes. The ISS has had multiple coolant loop failures; they’re one of its most common serious malfunctions.
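The radiator figure follows directly from the Stefan-Boltzmann law. The 0.9 emissivity is my assumption (typical for spacecraft radiator coatings), chosen because it reproduces the ~556 W/m² quoted above:

```python
# Radiator sizing via the Stefan-Boltzmann law. The 0.9 emissivity is
# an assumption, typical for spacecraft radiator coatings.
SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)
emissivity = 0.9
T_radiator = 50 + 273.15    # radiator surface temperature, K

flux = emissivity * SIGMA * T_radiator ** 4   # W/m^2 radiated
thermal_load = 1.05e6                         # W, lower end of 1.05-1.1 MW

area = thermal_load / flux
print(f"{flux:.0f} W/m^2 -> {area:.0f} m^2 of radiator")
```

Note that the only levers here are radiator temperature (capped by GPU thermal limits) and emissivity (already near its ceiling), which is why the area scales almost rigidly with power.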

This is probably the strongest technical argument against space-based compute: thermal management in vacuum is arguably the hardest engineering problem on the list, and radiator area scales linearly with power dissipation.

Solar energy

Solar arrays in space benefit from no atmospheric scattering (~40% more flux) and continuous, optimally oriented illumination (~5x the daily energy production of a ground installation). Space panels use gallium arsenide (GaAs) at 30-40% efficiency, compared to 20-25% for conventional silicon cells, a choice forced by the high radiation levels and thermal cycling.

For 1 MW: area = 1,000,000 / (1,361 × 0.29) ≈ 2,530 m², roughly a 50 m × 50 m array. Mass: ~7,600 kg.
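As a sketch, using the figures above; the 3 kg/m² areal density is my assumption, back-solved to match the ~7,600 kg mass estimate:

```python
# Solar array sizing from the figures above. The 3 kg/m^2 areal
# density is an assumption that reproduces the ~7,600 kg estimate.
SOLAR_CONSTANT = 1361.0   # W/m^2, solar flux above the atmosphere
efficiency = 0.29         # system-level efficiency used in the text
power_needed = 1.0e6      # W

area = power_needed / (SOLAR_CONSTANT * efficiency)   # m^2
side = area ** 0.5                                    # m, if square
mass = area * 3.0                                     # kg
print(f"{area:.0f} m^2 (~{side:.0f} m square), ~{mass:.0f} kg")
```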

Then there’s eclipse season. GEO satellites see eclipses near equinoxes, with maximum shadow of ~72 minutes per day for about 44 days per season. Battery requirement: ~1,500 kWh at 70-80% depth of discharge, adding ~7,500-10,000 kg of batteries.
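The battery numbers follow directly. The 180 Wh/kg pack-level specific energy is my assumption; it lands inside the 7,500-10,000 kg range quoted:

```python
# Eclipse battery sizing for GEO. The 180 Wh/kg pack-level specific
# energy is an assumption; 0.8 DoD is the top of the quoted range.
power_kw = 1000.0
eclipse_hours = 72 / 60          # longest GEO eclipse, h
depth_of_discharge = 0.8
specific_energy_wh_kg = 180.0

energy_kwh = power_kw * eclipse_hours             # kWh delivered per eclipse
capacity_kwh = energy_kwh / depth_of_discharge    # kWh installed
mass_kg = capacity_kwh * 1000 / specific_energy_wh_kg
print(f"{capacity_kwh:.0f} kWh installed, ~{mass_kg:.0f} kg of batteries")
```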

How much does it cost?

Launch costs

Our satellite weighs ~36,100-38,800 kg. Getting to GEO is significantly harder than LEO.

Falcon Heavy (expendable): ~26,700 kg to GTO for ~$150M per launch. But GTO isn’t GEO — you need ~1,500 m/s more delta-v to circularise. Our payload needs 3 Falcon Heavy launches at ~$150M each = ~$450M just for the ride up.
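The three-launch figure can be reproduced if we assume (my assumption, not a SpaceX number) that the GTO-to-GEO circularisation roughly halves the useful mass delivered per launch:

```python
# Launch-count sketch. The 50% GTO->GEO mass penalty is an assumption
# standing in for the kick stage and propellant needed to circularise.
import math

payload_kg = 38_800         # upper end of the satellite mass estimate
fh_gto_kg = 26_700          # Falcon Heavy expendable, to GTO
cost_per_launch = 150e6     # USD

delivered_kg = fh_gto_kg * 0.5            # assumed mass arriving at GEO
launches = math.ceil(payload_kg / delivered_kg)
print(f"{launches} launches, ~${launches * cost_per_launch / 1e6:.0f}M")
```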

Starship (aspirational): potentially $50-100M for the full payload, but the vehicle is still in its test-flight phase and those cost projections remain exactly that: projections.

Hardware costs

For 1 MW of compute: ~780 GPU slots, or about 100 DGX-class systems. GPUs alone: $20-24M. Complete space-grade systems: $50-80M.
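The slot count implies a per-slot power draw. The ~1.28 kW per slot (GPU plus its share of system overhead) is my back-solved assumption, not a published spec:

```python
# GPU-count sketch. 1,280 W per slot is an assumption back-solved from
# ~780 slots in a 1 MW budget; 8 GPUs per chassis is standard for
# DGX-class systems.
power_budget_w = 1.0e6
watts_per_slot = 1280       # assumed GPU + share of system overhead
gpus_per_chassis = 8

slots = int(power_budget_w / watts_per_slot)
chassis = round(slots / gpus_per_chassis)
print(f"~{slots} GPU slots, ~{chassis} DGX-class systems")
```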

The 24-month replacement problem

At 18-24 months we expect 25-35% GPU failure. A resupply mission needs ~4,000-8,500 kg to GEO, including replacement GPUs, a robotic servicing vehicle, and spare coolant. And the satellite must have been designed from the ground up for modular servicing — if it wasn’t, servicing is effectively impossible.

5-year total cost of ownership

A realistic 5-year estimate for a 1 MW space data centre: $700M-1.1B (Falcon Heavy) or $300-500M (Starship, aspirational).

The same 1 MW on Earth

A terrestrial equivalent: roughly $40-60M over 5 years.

That’s a 15-20x cost premium for the privilege of being in orbit. And this doesn’t account for a single micrometeorite strike to a coolant line writing off the entire investment in seconds.
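The premium can be sanity-checked from the two 5-year ranges above; the midpoint ratio lands at 18x, inside the 15-20x claim:

```python
# Cost-premium arithmetic from the 5-year totals above (Falcon Heavy
# case against the terrestrial estimate).
space_low, space_high = 700e6, 1.1e9    # 5-year space TCO, Falcon Heavy
ground_low, ground_high = 40e6, 60e6    # 5-year terrestrial equivalent

mid_premium = ((space_low + space_high) / 2) / ((ground_low + ground_high) / 2)
best = space_low / ground_high          # cheapest space vs dearest ground
worst = space_high / ground_low         # dearest space vs cheapest ground
print(f"Premium: ~{mid_premium:.0f}x at the midpoints "
      f"(range ~{best:.0f}x to ~{worst:.0f}x)")
```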

The bottom line

Even with the techno-optimist approach, there are hard limits. Solar flux of ~1,360 W/m² is all there is to harvest: with 100% efficient panels we could shrink the solar array by a factor of three or so, and no more. But the cooling array size is set by the GPUs’ thermal limits and the physics of radiating into space, and there’s not much we can do about that.

So what are we really talking about? It can’t have escaped anybody’s attention that Elon Musk has an AI company in serious trouble and a rocket company aiming at a $1.5tn IPO on a punchy P/E ratio of 300:1. Merging these two shaky-looking prospects into “data centres in space” is arguably similar to what he did with Tesla — initially an electric car company, then it was all about full self driving, and now apparently it’s a humanoid robot company.

There’s not one technical challenge here to overcome; there are many. Full self driving hasn’t overcome the challenge of safely piloting humans with AI, even though we’re pretending it has. This time we’re looking at a bunch of problems, each just as hard, and the whole system depends on solving each one.

Don’t buy the hubris: follow the maths and the physics. Even when our tech overlords want us to look the other way, the same laws of physics that apply to you and me apply to them and their harebrained schemes too.

This post also appears on my Substack.