Why Meta is Betting on Satellites to Keep its AI Running

[Illustration: a satellite in space beaming energy down to Earth.]

The headline is easy to mock. Meta is going to beam solar energy from space to power its data centers.

It sounds like the kind of announcement that belongs in a science fiction novel, not a Monday morning press release. But the engineering problem it's trying to solve is real, the constraints driving it are real, and the approach, while unproven at scale, is not as far-fetched as the coverage suggests.

To understand why a company like Meta is signing deals for technology that doesn't yet exist, you have to understand what is actually broken about how AI infrastructure is powered today.

The grid problem no one wants to talk about

Building a data center is, in 2026, a relatively solved problem. The hardware is expensive but available. The cooling systems are mature. The real constraint is electricity, and specifically the process of connecting a new facility to the electrical grid.

Grid interconnection queues in the United States now run four to seven years in many regions. That means a data center that breaks ground today may not have reliable grid power until 2030 or 2031. For a company planning to spend over $60 billion on AI infrastructure this year alone, a seven-year power queue is not a scheduling inconvenience. It is an existential constraint on the pace of deployment.

Solar is the fastest-to-deploy large-scale generation option available, but it has two problems that make it insufficient on its own. The first is intermittency: solar farms produce power during daylight hours and nothing at night. The second is siting: the best solar resources are often in locations far from where the data centers need to be, which adds transmission costs and, again, grid interconnection delays.

Battery storage partially solves the intermittency problem, but not at the scale and duration that always-on AI workloads require. A data center running inference at the scale Meta operates cannot afford an outage window during a cloudy week in January. According to the Lawrence Berkeley National Laboratory's congressionally mandated report on data center energy use, U.S. data centers consumed 4.4% of total national electricity in 2023 and are projected to reach as much as 12% by 2028. The infrastructure serving that demand has to run continuously.
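The LBNL endpoints imply striking growth. A quick back-of-envelope sketch (only the 4.4% and 12% figures come from the report; the rest is simple arithmetic):

```python
# Implied annual growth of the data center share of U.S. electricity,
# from the LBNL endpoints cited above (4.4% in 2023, up to 12% by 2028).
share_2023 = 4.4   # percent of U.S. electricity, 2023 (LBNL)
share_2028 = 12.0  # percent, high-end 2028 projection (LBNL)
years = 2028 - 2023

cagr = (share_2028 / share_2023) ** (1 / years) - 1
print(f"Implied compound growth of the share: {cagr:.1%} per year")
# Note: this is growth of the *share* alone; absolute demand grows
# faster still if total grid consumption also rises.
```

That works out to a share growing by more than a fifth every year for five years running, which is the context for why a seven-year interconnection queue is treated as an emergency.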

What Overview Energy is actually proposing

Overview Energy's approach starts from a simple observation: in low Earth orbit, there is no night. Satellites positioned above the atmosphere receive sunlight continuously, uninterrupted by the day-night cycle and unfiltered by weather. The energy available per unit area is also roughly 30% higher than on the ground because there is no atmosphere attenuating it.
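The "roughly 30%" figure is easy to sanity-check. Assuming the standard above-atmosphere solar constant (~1361 W/m²) and clear-sky atmospheric losses of roughly a quarter (an assumed round number, not from the article):

```python
# Sanity check on the extra solar energy per unit area available in orbit.
SOLAR_CONSTANT = 1361.0   # W/m^2 above the atmosphere (standard AM0 value)
ATMOSPHERIC_LOSS = 0.23   # assumed clear-sky absorption/scattering fraction

ground_irradiance = SOLAR_CONSTANT * (1 - ATMOSPHERIC_LOSS)
orbital_advantage = SOLAR_CONSTANT / ground_irradiance - 1
print(f"Ground: ~{ground_irradiance:.0f} W/m^2, "
      f"orbital advantage: ~{orbital_advantage:.0%}")
```

And that is only the per-unit-area gain; the larger win is the numerator of hours, since an orbital collector can be lit close to 24 hours a day.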

Overview's system collects that solar energy in orbit, converts it to near-infrared light, and beams it down to existing terrestrial solar farms as a low-intensity, wide-area beam. The ground-based panels convert the beam to electricity using the same photovoltaic equipment they already use for sunlight. The result is a solar farm that can generate power around the clock, without battery storage and without new land, new grid connections, or new receiving infrastructure beyond what already exists.

The 1 gigawatt capacity reservation Meta signed represents a meaningful commitment. For context, 1 GW is roughly enough to power 750,000 average American homes. It would represent a significant fraction of a large-scale AI data center cluster's power needs.
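The 750,000-home comparison checks out against public averages. Assuming a U.S. household uses on the order of 11,000 kWh per year (an EIA-style average; the exact figure varies by year and source):

```python
# Rough check: how many average U.S. homes does 1 GW of continuous power cover?
CAPACITY_W = 1e9              # 1 GW, the size of Meta's capacity reservation
HOURS_PER_YEAR = 8760
KWH_PER_HOME_YEAR = 11_000    # assumed average U.S. household consumption

annual_kwh = CAPACITY_W / 1000 * HOURS_PER_YEAR   # watts -> kW, then kWh/yr
homes = annual_kwh / KWH_PER_HOME_YEAR
print(f"~{homes:,.0f} homes")
```

Any reasonable choice of per-home consumption lands in the same high-six-figure range, so the article's 750,000 is a fair order-of-magnitude comparison.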

The honest assessment of the risks

The technology works in principle and has been partially demonstrated. Overview says it has successfully transmitted power from an aircraft to the ground, which validates the beam-to-panel conversion step. The part that has not been demonstrated is doing this from actual orbital altitude at meaningful scale, which is what the planned 2028 satellite launch is designed to test.

The risks are real and worth naming. Launching and operating a constellation of energy-transmitting satellites is operationally complex and expensive in ways that are difficult to model accurately before you have done it. The delays that Huawei Ascend chip constraints introduced into DeepSeek's V4 release are a useful reminder that supply chain and manufacturing bottlenecks have a way of stretching timelines even when the underlying technology is sound. Overview's commercial delivery target of 2030 assumes the 2028 orbital demonstration succeeds on schedule, which is an optimistic assumption for any first-of-kind space system.

There is also a regulatory dimension. Beaming energy from space to Earth is not something that existing regulatory frameworks cover cleanly. The FCC, FAA, and potentially international bodies will have opinions about orbital paths, beam characteristics, and interference with existing systems. Overview's choice of near-infrared light over microwave or high-power laser specifically addresses the safety and regulatory concerns that have historically blocked space solar concepts, but regulatory approval is still an open question.

Why this is worth watching anyway

The more interesting thing about the Meta deal isn't whether Overview Energy's specific technology succeeds on its 2030 timeline. It's what the deal signals about where the AI infrastructure industry is headed structurally.

Hyperscalers are moving from being customers of energy infrastructure to being investors in it. Meta's simultaneous deal with Noon Energy for ultra-long-duration battery storage, Alphabet's $4.75 billion acquisition of Intersect Power, and Microsoft's nuclear power agreements all reflect the same underlying logic: the grid cannot be relied upon to deliver power at the scale and speed AI deployment requires, so these companies are building parallel energy infrastructure outside the grid.

For engineers working on AI infrastructure, that shift has practical implications. The power constraints that determine where data centers get built, what hardware they can run, and at what utilization rates are increasingly being set not by local utility agreements but by the energy strategies of the hyperscalers themselves. Understanding that layer of the stack is becoming as relevant to infrastructure engineering as understanding networking or storage.

Whether Meta is ultimately powered by satellites, nuclear reactors, or some combination of both, the underlying point is the same. AI at scale is an energy infrastructure problem as much as it is a compute problem. The companies that solve the energy constraint fastest will have a structural advantage that compounds over time, regardless of which model architecture or hardware platform wins the benchmark wars.