The Truth About SpaceX's "Orbital Datacenters"
Chapters
Description
A full review of the technicals and economics of SpaceX's AI space datacenter concept.
Link to video spreadsheet/scripts: https://github.com/noiseinspacechannel/NIS-Orbital-Datacenters-Video
00:00 Intro
00:30 Space Datacenter Overview
03:40 Power Cost Analysis
07:35 Cooling and Thermal Models
10:36 Total Cost of Ownership
13:00 XAI/SpaceX vs. Frontier Labs
16:05 SpaceX vs. Hyperscaler Clouds + Nvidia
18:53 Terafab
19:48 Scalability w/ Starship
21:35 "Surveillance Sat" and Tesla Car Datacenters
23:25 Conclusion
Transcript
Auto-generated transcript (5,556 words)
Elon Musk recently announced the AISAT Mini, which he claimed would revolutionize our AI compute capacity within two to three years. However, after doing the analysis for myself, I can tell you that while these satellites will be cool, they're going to be dead on arrival in the competitive AI inference market. As always, my visualizations are all posted on my GitHub page, and this time I also included the financial spreadsheet that I used. If you disagree with any of my assumptions, you can replace them, do your own objective analysis, and give us the results in the comment section.

Let's start with the basic premise of space data centers. Out in space, the irradiance from the sun, which is the power per unit of incident surface area, is 1,361 watts per square meter. If your orbital plane is perpendicular to the direction of the sun, or if you are at a sufficient altitude, then you can harness this full solar power all the time. However, for typical low Earth orbit altitudes and inclinations, you'll spend nearly half of your time in eclipse behind the Earth. If you choose a polar orbit at 90 degrees inclination flying above the dawn-dusk terminator, the relative position of the sun will shift throughout the year and you will fall into eclipse once again. The best solution is a dawn-dusk sun-synchronous orbit, which uses inclinations between 97 and 105 degrees so that perturbations produce a slow nodal precession that maintains the ideal geometry all year round. At 500 kilometers altitude and minimal deviation, you have near-persistent sunlight throughout the year, except near the summer solstice, where your satellite will be in eclipse for up to 22% of the day. At 1,500 km altitude, you can achieve full sun for the entire year, and this is why SpaceX lists altitudes up to 2,000 km in their FCC filing. One caveat is that the inner Van Allen radiation belt begins at 1,000 km, and 2,000 km orbits have 1,000 times higher radiation than 500 km orbits, so higher altitudes may not necessarily be the best trade-off.

When we compare ground-based solar to the sun-synchronous orbit, the ground site has a significant average power disadvantage from eclipsing during local night, the atmosphere, and the incidence angle. Solar panels can be configured to track the sun east to west throughout the day, and they can also be at a fixed tilt or tracked north to south. The overwhelming majority of modern solar farms use single-axis east-west tracking. Power generation of the terrestrial panels differs based on the season. During summer, the ground panels can produce over half the power generation of the space panels. However, this is also monsoon season, which can have periods of rainy weather. We will stick with the winter numbers as a worst case, and this gives us around five times more power produced by the sun-sync satellite than the flat single-axis tracking ground panels. In this specific case, you would actually be better off with fixed panels tilted for winter or fall. You could sell the excess capacity during the other seasons, but we will ignore that for our analysis. We will also assume we need enough battery storage for constant output throughout the day, and in winter this requires covering over 14 hours when the panels generate no electricity. You could sell the excess battery storage in summer, but we will also ignore that for our analysis. The seasonal problem would be partially solved with equatorial solar farms.
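To put rough numbers on that roughly five-times resource claim, here is a minimal sketch. The 1,361 W/m² irradiance, the 10% weather-and-sand derate, and the 14-hour winter night come from the analysis above; the winter plane-of-array insolation for a single-axis tracker in southern Arizona is my own placeholder, so swap in the spreadsheet's values to reproduce the exact ratio.

```python
# Rough orbital-vs-ground solar resource comparison (placeholder inputs, not the
# video's spreadsheet). Energy figures are per square meter of panel per day.

SOLAR_CONSTANT = 1361.0  # W/m^2, irradiance above the atmosphere

# Dawn-dusk sun-synchronous orbit: assume continuous sunlight
# (worst case near the solstice at 500 km would be ~22% eclipse).
space_kwh_per_m2_day = SOLAR_CONSTANT * 24 / 1000  # ~32.7 kWh/m^2/day

# Ground: single-axis east-west tracking, southern Arizona, winter.
# Placeholder plane-of-array value; a real analysis would use TMY/PVWatts-style data.
ground_kwh_per_m2_day = 6.5
weather_soiling_derate = 0.90  # the analysis assumes a further 10% loss for weather and sand

ratio = space_kwh_per_m2_day / (ground_kwh_per_m2_day * weather_soiling_derate)
print(f"space / ground daily energy per m^2: {ratio:.1f}x")  # ~5-6x

# Ground battery needed for constant output through a winter night
load_kw = 100.0           # placeholder constant load
winter_night_hours = 14.0
print(f"ground battery energy for a {load_kw:.0f} kW load: "
      f"{load_kw * winter_night_hours:.0f} kWh")  # 1,400 kWh
```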
However, the wet equatorial weather would be unacceptable, because solar cells produce virtually no power when there are rain clouds overhead or when they are covered with snow. So, I will use southern Arizona at 32 degrees latitude for my analysis. A second advantage of this orbit is that you do not need large batteries as you would on Earth or on a lower-inclination low Earth orbit satellite, because you have nearly constant power and limited power cycling. Even in the summer solstice case, the eclipse will only last 20 minutes instead of 12 hours. So for a best-case scenario, we will just assume no cost for the satellite batteries. At the ground site during winter, however, the local night will last over 14 hours, requiring significant battery storage.

The benefits for space solar power end there, however, and we need to include these other factors in the cost equation as well. We're going to go beyond my usual napkin math and do an actual cost analysis in this video, so I'll take you through each of the key assumptions for current, 10-year-out, and 25-year-out scenarios. First, we have our five-times solar resource assumption, and I assume a further 10% reduction due to weather and sand. We consider the battery storage needed based on how long night lasts in winter. For the cost of the panels themselves, traditional satellite panels will often be hundreds or thousands of times more expensive because of the higher performance and protection against all of the factors during launch and on orbit. But we can assume SpaceX has lower costs since they mass-produce Starlink panels. I still cannot imagine the cost being cheaper than a terrestrial solar farm's panels and installation, so I'm setting the assumption at 50% higher than solar farm construction costs per unit of panel area.

Next is a very important assumption, which is the useful life of the space solar panels. This is not because the satellite will de-orbit, since at these higher altitudes the satellites could operate for decades. Instead, this is considering that the AISAT Mini will be connected to GPUs and other processors that will depreciate and generate nearly zero revenue within 5 years. As an example of this, we can compare the stats between the H100 and GB300, which came out just 3 years apart. When we look at the base stats, they seem pretty comparable. However, there are all kinds of improvements to the architecture, memory, networking, and the system as a whole that make the old hardware obsolete for running the newest, largest models. When we look at the inference benchmarks from SemiAnalysis for DeepSeek R1, which is small compared to the newest frontier models, Blackwell Ultra totally destroys Hopper. The tokens per GPU at the same speed can be over 30 times as high. So to solve this problem, they need to launch new GPUs into space every few years to replace the old ones and utilize the power at full revenue. On Earth, you can just swap out the GPUs and servers with the same power and cooling system. Finally, we will assume the battery storage required for the sun-synchronous satellite will be negligible in terms of mass and cost. For the rest of these assumptions, I had an AI research and source projections from across the internet, so take these with a grain of salt and feel free to adjust them as needed. For our results, the firm storage-backed solar power is relatively expensive versus other types of power, and when we ignore launch cost, the space solar power is by far the cheapest.
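Here is a minimal sketch of how a levelized-cost-of-energy comparison like this works. Every dollar and mass figure below is a placeholder I made up to illustrate the structure; only the 50% panel-cost premium, the 5-year useful life tied to GPU depreciation, and the $1,400/kg Falcon 9 launch cost come from the discussion above. With plausible placeholders you get the same qualitative ordering: space solar looks very cheap until the launch bill shows up.

```python
# Minimal undiscounted LCOE sketch: lifetime cost / lifetime energy.
# All dollar and mass inputs are illustrative placeholders, not the video's values.

def lcoe(capex_usd, kwh_per_year, life_years):
    return capex_usd / (kwh_per_year * life_years)

FIRM_MW_KWH_PER_YEAR = 8760 * 1000  # 1 MW of firm, around-the-clock output

# Ground: single-axis tracking plus enough batteries for a 14-hour winter night.
ground_capex = 10_000_000  # placeholder $ per MW of firm output, 25-year life
ground = lcoe(ground_capex, FIRM_MW_KWH_PER_YEAR, life_years=25)

# Space: dawn-dusk sun-sync panels, 5-year useful life tied to GPU depreciation.
panel_area_m2 = 3000      # placeholder area for ~1 MW in full sun
panel_cost = 1.5 * 200 * panel_area_m2   # 50% premium over an assumed $200/m^2 ground install
panel_mass_kg = 3.0 * panel_area_m2      # assumed areal density, kg/m^2
launch_per_kg = 1400.0                   # Falcon 9 to SSO; try 200 for the Starship goal

space_no_launch = lcoe(panel_cost, FIRM_MW_KWH_PER_YEAR, life_years=5)
space_launch = lcoe(panel_cost + panel_mass_kg * launch_per_kg,
                    FIRM_MW_KWH_PER_YEAR, life_years=5)

print(f"ground firm solar   : {ground:.3f} $/kWh")
print(f"space, no launch    : {space_no_launch:.3f} $/kWh")
print(f"space, with launch  : {space_launch:.3f} $/kWh")
```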
However, with current launch costs, it is at the bottom of the list. Falcon 9 currently costs SpaceX around $1,400 per kilogram to sun-synchronous orbit. The goal of Starship is to get the cost down to $200 per kilogram, and this will require making Starship as inexpensive as Falcon 9 per launch. I am a big fan of Starship, and I think it will get there over a longer time frame. But for the next 5 to 10 years, the Starship upper stages are going to require lots of refurbishment, and there are many billions of dollars and many years of development still required. The next result is the break-even launch cost for various comparisons. This represents the launch cost required to have an equal levelized cost of energy, and this is excluding the launch costs for the rest of the satellite. If we were only considering energy costs in the 10-year modular scenario, this would break even compared to firm storage-backed solar with Falcon 9 launch costs. At $200 per kilogram launch costs, the satellites would have a significant electrical power cost advantage.

However, electricity is only a tiny fraction of data center total cost of ownership. Here I show some charts from a book published in 2013, when CPU servers were dominant, and you can see that even in the worst-case scenarios, the power is only 10% of the TCO. In 2026, AI data centers use much more expensive GPUs, custom accelerators, networking, and cooling systems, and the capital expenditures are so high that the power is well under 5% of the data center total cost of ownership. You will also notice that there is no water bill on these charts, and the reality is that AI data centers do not use as much water as most people think.

Now, let's compare the rest of the data center system and total TCO between ground and space data centers. First, we have cooling. This has been the most controversial subject with these space data centers, but it is also probably the least important. You can transfer heat through radiation, convection, and conduction. In radiation, photons transport energy through free space. The amount of power radiated by objects in space is proportional to the surface area, the emittance, and the absolute temperature to the power of 4. The power absorbed equals the incident area times the irradiance times the absorptance. The emittance and absorptance are generally the same value, but they do depend on the wavelength of light. In conduction, heat transfers within a material or between surfaces that are in contact with one another. The heat transfer is proportional to the difference in temperature across the materials and inversely proportional to the length the heat must travel. In data centers, heat conducts from the chip and package into larger heat sinks. For convection, heat is exchanged between a solid surface and a surrounding fluid, and this is how data centers cool off the heat sinks with air cooling and liquid cooling. The heat transfer in convection is proportional to the difference in temperature between the fluid and the solid surface. You may have seen some analyses like this, where they calculate how large the radiator should be based on the temperature of the chip. This model is a good approximation, but in reality the conduction and convection paths have finite thermal resistance, so the chips can be hotter than the radiator at equilibrium. So, it is better to do thermal modeling.
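For reference, this is what the "napkin" radiator calculation being criticized looks like: size the radiator from the radiated power relation P = emissivity × σ × A × T^4, as if the chips sat at the radiator temperature. The 100 kW heat load matches the model described in the next section; the radiator temperature, emissivity, and environmental sink flux are my own assumptions.

```python
# Idealized radiator sizing from P = emissivity * sigma * A * T^4.
# This assumes the chips are at the radiator temperature; a real thermal model
# (finite conduction/convection resistance) puts the chips well above this.

SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/m^2/K^4

waste_heat_w  = 100_000.0   # 100 kW of chip waste heat, as in the AISAT Mini model
radiator_k    = 60 + 273.15 # assume the radiator face is held at 60 C
emissivity    = 0.90        # assumed white-paint / OSR coating
sink_w_per_m2 = 80.0        # assumed absorbed Earth IR + albedo per m^2 of radiator face
sides         = 2           # a flat panel radiates from both faces

net_flux = sides * (emissivity * SIGMA * radiator_k**4 - sink_w_per_m2)  # W per m^2 of panel
print(f"ideal radiator panel area: {waste_heat_w / net_flux:.0f} m^2")   # roughly 90 m^2
```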
Based on the AISAT Mini mockup image, I created a 3D model of it and modeled the cooling system as a copper heat sink with 100 one-kilowatt chips on it, connected to a large solid copper radiator. Based on my model, the solar panels would generate 260 kW, but I'm just showing 100 kW of waste heat from the chips here. The satellite also receives significant thermal energy through the solar panels, but they can be thermally isolated, since the back plate of the panels can radiate roughly the same amount of thermal power as is received from the sun. With a passive radiator, you can see that the chips run too hot at over 100 degrees Celsius. So, active cooling is a better option. This can be done with liquid cooling that will pump coolant between the heat sink and the radiator. An even better option is active heat pumps, which would refrigerate the chips and heat sink by heating up the radiator. The takeaway here is that the cooling would be practical, but the complexity and costs would be comparable to terrestrial data center cooling. With launch costs included, it's questionable if this would save you any money. Just for fun, I made a model of Starlink version 3 without the DTC antenna, and you can see why they do not need an external radiator. The waste heat from the antennas is much lower than from the chips, the satellite is in eclipse for half the time, and while the bus is tracking the Earth, the solar incidence angle will be offset from the ideal angle. Moving back to the AISAT Mini design, you'll notice that I simply spread the chips over the surface of the bus. In traditional data center racks, the trays are stacked on top of one another, but if you laid them flat, their surface area would cover just under the area of one face of the AISAT Mini bus. A terrestrial rack is 1.5 tons in mass, roughly equally divided between the structure and cooling on one hand and the essential IT hardware on the other. Since in space you need a large radiator, and the structure would still need to be somewhat sturdy to survive launch, the total mass would be surprisingly comparable.

Now we will calculate estimates for the total cost of ownership, and I will be assuming a 100-kilowatt space equivalent of the NVL72 rack on a satellite, compared to actual NVL72 racks in a hyperscaler cloud data center. These have very similar cost structures to SpaceX's Colossus data center, and we will be assuming the equivalent of these racks over time. First we have our mass assumptions, which will affect launch costs. I also assume the bus and cooling will be permanently attached to the IT hardware, which will be true unless we have some crazy AGI space manufacturing robots in 25 years. This also assumes the pace of hardware innovation continues such that the depreciation is still 5 years. Next up is the most important assumption, which is the cost of the IT hardware. I used an estimate of 30% higher price per watt. That would include all of the extra development to compensate for the vibration during launch and the radiation, thermals, and vacuum of the space environment. I chose to steadily increase the price of the AI hardware over time, which contrasts with all of the other costs. This is a developing trend in the silicon industry: whereas Moore's law has slowed down, the capability per watt has still been steadily improving, and the cost per watt has also been increasing for the cutting-edge process nodes. The reason I expect this trend to continue is that the revenue per watt to the users has been increasing even faster than those costs.
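To see why the IT hardware assumption swamps everything else, here is a back-of-the-envelope 5-year TCO-per-watt sketch. Every number in it is a placeholder of mine, not the spreadsheet's (the bus, networking, and launch assumptions are covered next); the only inputs carried over from the analysis are the 30% space-hardening premium, the roughly 1.5 tons per 100 kW rack-equivalent, and the $1,400/kg versus $200/kg launch costs.

```python
# Back-of-the-envelope 5-year TCO per watt, ground vs. space. Placeholder numbers only;
# the point is that IT capex dominates, so "free" orbital power barely moves the total.

YEARS = 5
HOURS = 8760 * YEARS

def tco_per_watt(it_capex, facility_capex, elec_per_kwh, launch_per_kg=0.0, kg_per_kw=0.0):
    power_cost  = elec_per_kwh * HOURS / 1000   # $ per W of IT load over 5 years
    launch_cost = launch_per_kg * kg_per_kw / 1000
    return it_capex + facility_capex + power_cost + launch_cost

ground = tco_per_watt(it_capex=28.0,        # assumed $/W for an NVL72-class rack
                      facility_capex=12.0,  # assumed building, cooling, power distribution
                      elec_per_kwh=0.08)

space = tco_per_watt(it_capex=28.0 * 1.3,   # the 30% space-hardening premium
                     facility_capex=6.0,    # assumed bus, radiator, mass-produced subsystems
                     elec_per_kwh=0.0,      # orbital solar treated as free
                     launch_per_kg=1400.0,  # Falcon 9 to SSO; try 200 for the Starship goal
                     kg_per_kw=15.0)        # ~1.5 t per 100 kW rack-equivalent

print(f"ground: ~${ground:.0f}/W over {YEARS} years")  # IT capex is ~2/3 of the total
print(f"space : ~${space:.0f}/W over {YEARS} years")   # still higher even at $200/kg
```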
Next, we have the costs of the data center and the bus components. Each AI satellite will need subsystems for power, cooling, communications, attitude control, and propulsion, and these will be amortized over the useful life of the IT hardware. These are usually very expensive, but SpaceX has highly optimized mass production lines for Starlink that will make these costs a drop in the bucket compared to the cutting-edge AI accelerator hardware. For the networking, this system will take up some of the backhaul bandwidth from Starlink. However, if we assume we are outputting text tokens with our AI models, then this is somewhat negligible. If you assume the system will be scaled up, or if you think it will do some AGI video-type use case, then you can adjust this. For launch costs, I assume a steady decrease towards $200 per kilogram over 25 years. Based on my assumptions, the terrestrial data center had lower TCO per watt in every scenario. You can try to change the assumptions around to give the AI satellite a slight advantage, but whatever you change, the cost of the IT hardware will always dominate TCO at lower launch cost assumptions, and the price of power becomes irrelevant.

So the real question we need to answer is whether SpaceX can compete with the frontier labs, the hyperscaler clouds, Nvidia, and even TSMC. And this will come down to the real meat and potatoes of the revenue and profits. The lore of the frontier labs goes back to 2015, when Elon Musk co-founded OpenAI as a nonprofit to compete with Google DeepMind. He later went on to leave the company to compete in AI with Tesla. And after the release of ChatGPT, OpenAI converted into a massively successful for-profit company. This is the basis of the ongoing lawsuit between Elon and OpenAI. In March 2023, the same month GPT-4 was released, he founded his own AI lab, xAI, and they have still not managed to catch up to the frontier labs in coding, image and video generation, consumer and enterprise chatbots, or really any major revenue source other than searching and explaining posts on X. As I record this in May 2026, OpenAI, Anthropic, and Google have large advantages in performance, users, and revenue. xAI is even trailing behind open-weights Chinese models in most benchmarks. Because of this, xAI was unable to raise the tens of billions of dollars per year that it was burning to try to train competitive models, and in February 2026, the company was bailed out by SpaceX. This also coincided with the first official plans for space-based data centers by SpaceX. Shortly afterwards, SpaceX filed for an IPO to raise more money, likely for compute hardware and the unprofitable AI research at xAI. Interestingly, Elon previously stated he would not IPO SpaceX until the original goal of settling Mars was reached, and after the merger and IPO decisions, he announced the company was shifting away from the original goal of settling Mars. He also used xAI to secure a $1 trillion pay package from Tesla, claiming that he was uncomfortable growing Tesla as a leader in AI without owning more shares. This was likely a reaction to Sam Altman being temporarily ousted as CEO at OpenAI. Despite his Tesla pay package being approved, he chose to purchase xAI with SpaceX instead of Tesla, which will guarantee him full control. Anyways, in Elon's own words, xAI has some catching up to do, and it will take a miraculous breakthrough to achieve a pricing advantage over the frontier labs.
Assuming xAI loses this race: if you have inferior models, you cannot simply charge slightly less and get the same revenue as the other labs. For example, Claude Opus is only slightly better than Claude Sonnet, but Anthropic can price Opus at double the price of Sonnet, and the vast majority of their revenue will still come from Opus. There are also other factors like brand reputation and product stickiness that will give the frontier labs an advantage over Grok. If SpaceX and xAI make an unlikely breakthrough and catch up to and surpass the other AI labs, they can then charge more per watt and make a profit even with the inferior space data center economics, compared to cloud providers that OpenAI and Anthropic have to pay a margin to. Even if SpaceX achieves an advantage, they also need to deal with the fact that OpenAI and Anthropic are burning tens of billions of dollars per year and are willing to operate at a major loss to grow faster. So the growth-related losses and risks of competing with the frontier labs will need to be justified in the post-IPO SpaceX organization. Also, this section and the following section do not apply if you consider Google DeepMind the main competitor, since they are their own cloud provider and they train and inference Gemini with their own custom-silicon TPUs.

I see it as unlikely that xAI can outright win like this against the labs, so let's move on to the next option, which would be providing inference compute to the frontier labs as a cloud service. xAI has already started to use their idle terrestrial Nvidia GPU capacity for this purpose with the Cursor deal, and they just announced a deal to rent out their entire 300 megawatt Colossus 1 data center to Anthropic. But as I have shown, creating space data centers with modified versions of the same hardware as on Earth is not economically viable. Consequently, they have no plans to collaborate with Nvidia on space-optimized chips. The hyperscalers also have tremendous existing power and facility capacity that they can install new generations of chips into much more efficiently than SpaceX, which needs to start with new builds on every project. To compete with the hyperscalers and Nvidia, they will need to produce significantly cheaper and better chips than Nvidia, and it seems like the plan is to attempt this with Tesla's embedded AI chips. This is no easy task, however, and there is a reason why Nvidia made $17 billion in earnings last year. When we look back to our SemiAnalysis benchmark, you can see that even AMD's current highest-end AI chip gets destroyed by Nvidia's. The current Tesla automotive chips would likely not even be able to run frontier AI models and would not even show up on this chart. Google and Amazon do not publish public benchmarks for their custom silicon, but since they are more tailored for specific workloads and they do not have to pay a margin to Nvidia, it is more likely that they are competitive. Nvidia has a net margin of just over 50% and gross margins around 75%, which basically means you need at least a quarter to half the price-to-performance of their chips, or else your custom silicon will lose you money. This is why Tesla canceled their internal Dojo 2 chip, which underperformed Nvidia even on their full self-driving training workload. For the AI satellite chips to be profitable, even without paying the margins for clouds or Nvidia, SpaceX would need nearly equal price-to-performance versus Nvidia, assuming all other cloud hosting costs were equal to the hyperscalers'.
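The "quarter to half the price-to-performance" threshold falls straight out of those margins, and a two-line calculation makes it explicit. The 75% gross and roughly 50% net margins are from the discussion above; everything else is just normalization.

```python
# Why custom silicon needs ~1/4 to ~1/2 of Nvidia's price-to-performance to break even.
# Normalize so that $1 buys 1 unit of performance from Nvidia.

nvidia_price_per_perf = 1.00
gross_margin = 0.75   # figure quoted above
net_margin   = 0.50   # figure quoted above

nvidia_bom_cost  = nvidia_price_per_perf * (1 - gross_margin)  # 0.25: silicon/BOM only
nvidia_full_cost = nvidia_price_per_perf * (1 - net_margin)    # 0.50: including R&D and opex

# An in-house chip only saves money if its all-in cost per unit of performance is
# below Nvidia's selling price. Its cost floor is somewhere between the BOM-only and
# full-cost lines, so the break-even efficiency sits between these two ratios:
print(f"break-even: {nvidia_bom_cost / nvidia_price_per_perf:.0%} to "
      f"{nvidia_full_cost / nvidia_price_per_perf:.0%} of Nvidia's price-to-performance")
# Below that range, paying Nvidia's margin is cheaper than building your own chip.
```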
So to make this plan work within 3 to 4 years, as Elon claims, they will need to make the Tesla AI5 chip, which will come out in 2027, competitive with Nvidia's Rubin generation; they need to design it to operate efficiently in space; and they need to come out with competitive chips every year at the same pace of innovation as Nvidia. I'm sure these Tesla chips are the best robot and automotive embedded chips in the world. But if he is only claiming AI5 is Hopper-class and is coming out late 2027, I just don't understand how these chips could be remotely competitive for frontier AI, especially if they are not designed for rack-level scale-up and they are on a satellite out in the Van Allen belts. When I compare what we have seen of that chip to Nvidia's extreme disaggregated, networked systems built to run ginormous AI models, the Tesla chips are seriously lacking. When I compare to Google's TPUs, I reach basically the same conclusion. It's hard to say when Tesla and SpaceX will catch up, but they need to solve this hardware problem long before they can build profitable AI satellites.

If they cannot quite compete with Nvidia, then the final option is to vertically integrate every piece of the stack with the Terafab. Logic and memory manufacturers like TSMC and Micron that I show here can also have significant margins. So if SpaceX can create competitive process nodes, they will have some extra margin for the chip performance and the AI data center concept as a whole. The Terafab concept is really interesting, but I don't understand how they will produce more chips than TSMC or Intel's 18A and 14A process nodes. Nearly every other chip company has moved to a fabless model, including Intel's own products, which use TSMC tiles on several of their CPUs and GPUs. So, there are some strong economic incentives for SpaceX and Tesla to stay fabless. They are in the very early stages, so we will see where this project goes, and I would like to do a deeper dive in another video. Anyways, assuming they could get to par on cost with TSMC and the memory manufacturers and the clouds, they would still need competitive chips compared to Nvidia.

Now, let's take a look at scalability. xAI already has around 1 gigawatt installed at the Colossus data centers, which is just a tiny fraction of the power capacity added in the US every year. By looking at this chart, you can also see how the claim that we are running out of power or are power-constrained is total nonsense, especially since the IT hardware spend to use that thin sliver of power is tens of billions of dollars per year. There are terrestrial solar and battery storage projects of this size in the southwest United States, and a total of 40 gigawatts was installed last year across the country. China is adding even more, at 300 gigawatts of solar every year. Interestingly, Tesla has a solar business, but they installed only a tiny fraction of the US capacity last year with their high-end residential solar business. If energy were more of a constraint than IT hardware, Elon would be scaling this business to gigawatt-class utility solar instead of building the Terafab. For the SpaceX AISAT, I assume 60 kW per ton, which is below Elon's estimate of 100 kW per ton. And to launch a single gigawatt of space power, this would require 166 Starship launches or 1,500 Falcon 9 launches. This amounts to double the payload of all Falcon 9 launches in history. So that would be a significant undertaking that is likely 10 years out, when Starship rapid reusability is fully solved.
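Those launch counts are simple arithmetic from the 60 kW/ton assumption. The payload-to-orbit figures in the sketch below are my own rough assumptions (about 100 tons for Starship and about 11 tons for a reusable Falcon 9 to sun-synchronous orbit), chosen to reproduce the numbers quoted here.

```python
# Launch-count arithmetic for scaling orbital compute power.
kw_per_ton = 60.0          # power density assumption used above
starship_t = 100.0         # assumed Starship payload to SSO, tons
falcon9_t  = 11.0          # assumed reusable Falcon 9 payload to SSO, tons

def launches(gigawatts, payload_tons):
    total_tons = gigawatts * 1e6 / kw_per_ton
    return total_tons / payload_tons

print(f"1 GW: {launches(1, starship_t):.0f} Starship or "
      f"{launches(1, falcon9_t):.0f} Falcon 9 launches")   # ~167 and ~1,500

# Matching China's ~300 GW/yr of ground solar, credited with the ~5x capacity-factor
# advantage of the dawn-dusk orbit (so ~60 GW of orbital capacity per year):
print(f"China-scale: {launches(300 / 5, starship_t):.0f} Starship launches per year")  # ~10,000
```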
Just to match the solar capacity added by China this year, that would amount to 10,000 Starship launches. 10,000 Starship launches in a year would mean 27 per day and would have a tremendous environmental impact. This would be more than enough capacity to build a nice little city on Mars and to start colonizing the rest of the solar system. So the question here is: should we spend more of that launch capacity on exploration, or on inefficiently generating AI slop videos?

Okay, at this point in the video I think you get the point that these satellites will not be viable for AI. So now let's talk about what would be viable. The simplest way to make money is to simply compete in a different market than the extremely competitive AI token market. In my last video, we discussed using Starlink as a persistent radar surveillance and signals intelligence system. The Starlink backhaul capacity is not high enough to send real-time multi-channel raw radar data down to the ground. However, upgraded laser links can be used to transport this data from Starlink, commercial satellites, or military satellites for image processing and AI analysis. This compute and the resulting products could be sold at an enormous markup compared to compute used for generating AI tokens at market rates. In fact, the space AI compute startup Starcloud has already pivoted to this as their primary business model, and based on this image, it appears they want to use Starlink's backhaul capacity anyway. Nvidia CEO Jensen Huang also gave an interview where he framed this concept as basically the only reason to create AI satellites. As I show here, a markup of just two times the frontier AI compute revenue on a per-watt basis would transform the so-called AISATs from unacceptably unprofitable to generating billions in profit every year. The same argument applies to Starlink, which would be more profitable per kilogram than the pure AI satellites, creating an opportunity cost for using Starship capacity to launch AI satellites instead. There is also the opportunity cost of not launching customer payloads for well over $200 per kilogram. So this is the simplest explanation for why the AISATs will be built, in addition to justifying the SpaceX-xAI merger and the eventual SpaceX-Tesla merger. Another really interesting option is to use the embedded AI chips within Tesla's cars and Optimus robots to run smaller models in a cloud service. In this case, the only costs would be electricity and networking. So, this could subsidize their entire fleet of robots and make the AI satellites redundant for small models, since they would be using the same chips. This idea has its own set of problems, but I think they could make it work.

Okay, so I'm going to end it there. I could go on and on about all the drama surrounding this, but I think I've really made my point. And my final takeaway is that these are not going to be viable for AI anytime soon; this is most likely actually a project for space surveillance, and possibly also just a big marketing stunt for the IPO. But anyways, you guys let me know what you think in the comments, and let me know if I overlooked anything or if you agree or disagree. Also, make sure to like and subscribe, check out my other videos, and share with your friends. And with that being said, have a great day.