
The Intel-Tesla Terafab Deal Explained — $25B AI Chip Factory, Why Intel Said Yes?

The Tesla Breakdown · Published Apr 15, 2026 · 10:24


Description

🔴 On April 7, 2026, Intel announced it is joining Elon Musk's Terafab as the primary foundry partner — the deal confirmed after a weekend meeting between Musk and Intel CEO Lip-Bu Tan. Intel shares jumped 3% on the news. Terafab is a $25 billion joint venture between Tesla, SpaceX, and xAI targeting one terawatt of AI compute per year from two Texas facilities. In this video, Tesla Car World explains exactly why Intel said yes, what the partnership actually confirms, and why Tesla's chip demand alone could keep Intel's fabs running at full capacity for decades.

What is covered in this video:

Intel confirmed April 7, 2026 — primary foundry partner, Lip-Bu Tan + Musk weekend meeting, shares up 3%

Intel's foundry crisis — $7B operating losses, 85% utilization break-even, $15-20M/day burn below threshold

Tesla chip demand math — 1,600-3,000 chips per vehicle, 20M vehicle target, 3-5 AI chips per Optimus robot

What Intel brings — 50 years fab experience, Foveros 3D packaging, chiplet yield
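The chip-demand bullet above multiplies out to tens of billions of chips. A minimal back-of-the-envelope sketch using only the figures quoted in the video (the per-vehicle range and the 20M vehicle target are the video's claims, not independently verified):

```python
# Back-of-the-envelope vehicle chip demand, using the video's quoted figures.
chips_per_vehicle_low, chips_per_vehicle_high = 1_600, 3_000  # video's claim
vehicle_target = 20_000_000                                   # video's claim

demand_low = chips_per_vehicle_low * vehicle_target
demand_high = chips_per_vehicle_high * vehicle_target
print(f"Vehicle chip demand: {demand_low/1e9:.0f}B to {demand_high/1e9:.0f}B chips")
```

At 32 to 60 billion chips for the vehicle fleet alone (before Optimus units are counted), the scale of the claimed internal demand is clear, whatever one makes of the inputs.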

Transcript

Auto-generated transcript (1,549 words)

On March 14th, 2026, Elon Musk posted on X: "Terafab project launches in seven days." That post set off a wave of analysis, alarm, and competitive soul-searching that reached all the way into Nvidia's Santa Clara headquarters. Jensen Huang said publicly he had never seen anything built so fast in his life. The thing he was referring to was not a car factory. It was a vertically integrated semiconductor fabrication complex, announced formally on Tesla's January 28th earnings call and officially launched on March 21st, 2026. The question is not whether Tesla is serious. The $20 billion budget, the Samsung supply agreement, the Intel partnership discussions, and a recruitment campaign specifically targeting the world's best chip engineers all confirm it is. The question is what this actually means for Nvidia and for the global AI chip landscape.

Terafab is located on the north campus of Gigafactory Texas in Austin. At full ambition, Musk confirmed, roughly 100 million square feet is the right order of magnitude for the facility, approximately 10 times the footprint of the main Giga Texas building. The facility is designed to consolidate logic chip design, EUV lithography fabrication, memory production, advanced packaging, and testing under a single roof. At full production capacity, Tesla is targeting 100,000 wafer starts per month, scaling to 1 million at full buildout, a figure representing approximately 70% of TSMC's current total global output from a single US-based facility. The projected annual chip output is 100 to 200 billion AI and custom memory chips per year. Musk has described the target computing power output as 1 terawatt annually. For context, the entire United States generates electricity at an average rate of approximately 0.5 terawatts. The first product coming out of Terafab is the AI5 chip, and its specifications are where the Nvidia threat becomes concrete rather than theoretical.
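The wafer and chip-count figures in the passage above can be sanity-checked against each other. A quick consistency sketch, where every input is the video's own claim (the implied TSMC figure is derived from the video's "70%" statement, not from any external source):

```python
# Consistency check on the transcript's wafer and chip-count claims.
wafer_starts_per_month = 1_000_000   # Terafab full-buildout target (video's claim)
tsmc_share = 0.70                    # "~70% of TSMC's current total output" (video)
implied_tsmc_monthly = wafer_starts_per_month / tsmc_share

chips_per_year_low, chips_per_year_high = 100e9, 200e9  # video's claim
wafers_per_year = wafer_starts_per_month * 12
chips_per_wafer_low = chips_per_year_low / wafers_per_year
chips_per_wafer_high = chips_per_year_high / wafers_per_year

print(f"Implied TSMC output: ~{implied_tsmc_monthly/1e6:.2f}M wafers/month")
print(f"Implied dies per wafer: ~{chips_per_wafer_low:,.0f} to {chips_per_wafer_high:,.0f}")
```

The claims are internally consistent only if each wafer yields roughly 8,000 to 17,000 chips, which is plausible for small memory or edge dies but not for large H100-class accelerators, so the "100 to 200 billion chips" figure presumably mixes chip types.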
AI5 delivers 40 to 50 times more compute performance than Tesla's current AI4 chip, with 9 to 10 times more memory capacity and bandwidth. It runs at approximately 250 watts, compared to the Nvidia H100 at 700 watts. A single AI5 chip achieves Hopper-class H100 performance; two AI5 chips working together approach Blackwell-class performance. If the claim that AI5 costs a fraction of an H100 holds under third-party validation (and the roughly $35,000 price point of the H100 gives enormous room for a purpose-built alternative to undercut it), then every company currently running massive Nvidia GPU clusters for AI training has a reason to pay close attention to what is being built in Austin.

The chip roadmap extends well beyond AI5. Musk announced a 9-month chip development cycle, compared to the industry standard of 12 to 18 months, starting with AI6, which is scheduled to reach tape-out in December 2026 and enter mass production via Samsung's 2-nanometer process in the second half of 2027. Tesla confirmed a $16.5 billion supply agreement with Samsung in July 2025 to support this timeline. Samsung's Texas fab began testing extreme ultraviolet lithography equipment in March 2026 specifically for AI5 production. AI5 targets edge inference, running AI locally in vehicles and robots. AI6 extends the architecture to large-scale AI training and data center workloads. AI7, paired with the D3 (Dojo 3) chip, is mapped to space-based computing infrastructure. This is not a chip roadmap. It is a computing civilization roadmap.

The D3 (Dojo 3) chip represents the most radical part of the Terafab announcement. Tesla originally built Dojo to create a world-class video training cluster for FSD on Earth. That program stalled when Tesla pivoted to using Nvidia infrastructure for xAI Grok training at scale. But at the Terafab launch event, Dojo 3 reappeared as a space-based computing platform. The logic is clear: Earth's power grid is running out of capacity to sustain exponential AI compute growth.
Current global annual additions to compute capacity hover between 100 and 200 gigawatts, constrained by grid capacity, cooling requirements, and physical real estate. D3 resolves those constraints by operating in orbit, where solar energy is abundant, space itself serves as a thermal sink, and there is no local power grid to overwhelm. SpaceX's heavy-lift launch capabilities provide the delivery mechanism. The result is a computing architecture that can scale to terawatt levels impossible to achieve on Earth.

Nvidia's structural vulnerability is precisely what Tesla has identified. Nvidia does not own any chip fabrication plants; its headquarters spans approximately 750,000 square feet. It designs chips extraordinarily well and relies entirely on TSMC and Samsung to manufacture them. This means Nvidia must queue for manufacturing capacity alongside every other fabless chip company on the planet. When Musk says Tesla needs to build Terafab because current suppliers cannot meet future demand even in the best-case scenario, he is describing a structural constraint, not a temporary one. Building a chip fab eliminates that constraint, eliminates the 20 to 25% gross margins Nvidia layers on top of manufacturing cost, and eliminates the dependency on a supplier located in Taiwan at a moment when geopolitical risk around TSMC has never been higher.

The CUDA ecosystem is Nvidia's most durable moat. CUDA is the software platform most AI developers globally depend on, effectively the native language of AI engineering. Transitioning to a new chip architecture requires rewriting codebases, reoptimizing training pipelines, and accepting significant productivity risk. This software inertia is historically far stickier than hardware performance advantages. Tesla's counterargument is that it does not need to win the general AI market in the near term.
It only needs to meet the compute demands of its own internal ecosystem, which includes FSD for tens of millions of vehicles, the Cybercab fleet, and potentially billions of Optimus units. At that volume, Tesla is its own biggest GPU customer. Serving that demand in-house, rather than paying Nvidia margins on every chip, represents hundreds of billions of dollars in long-term cost differential.

The honest skeptic's case is worth stating clearly. Building a 2-nanometer chip fab requires atomic-level engineering precision, ASML EUV lithography machines so scarce that even Intel and Samsung must wait years for delivery, and clean-room environments where a single particle of dust ruins a wafer run. Tesla's 4680 battery cell program, after 5 years and repeated promises, still has not fully achieved its cost and yield targets. If chemical battery manufacturing has proven this difficult, semiconductor fabrication at the 2-nanometer node is multiple orders of magnitude more demanding. Morgan Stanley estimates Terafab's operating costs could reach $45 billion. And until Terafab reaches meaningful production, Tesla will continue spending billions per year purchasing Nvidia chips to keep FSD and Optimus on schedule.

The one-sentence version: over the next 1 to 3 years, Nvidia's position is secure, because Tesla cannot yet replace what Nvidia provides. In the post-2027 era, if Terafab achieves even a fraction of its stated ambitions, Tesla will have built something the semiconductor industry has never seen: a single-company, closed-loop chip ecosystem serving vehicles, robots, and orbital compute infrastructure simultaneously. Jensen Huang's comment that he had never seen anything built so fast is genuine respect. Whether it is also the beginning of a very expensive competitive problem for Nvidia depends entirely on execution. And execution, as Tesla's history shows, is always the variable nobody can predict from the outside.
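The "hundreds of billions of dollars" cost-differential claim can be roughed out from figures the video itself cites. A sketch under stated assumptions: the $35,000 H100 price and the 20-25% margin are the video's claims; the margin is read as a markup layered on manufacturing cost, per the transcript's wording; and the 30 million chip count is a hypothetical illustration, not a figure from the video:

```python
# Rough cost-differential sketch using the video's own figures.
# Assumption: the 20-25% figure is a markup on manufacturing cost,
# per the transcript's "layers on top of manufacturing cost" wording.
h100_price = 35_000                   # video's quoted H100 price
markup_low, markup_high = 0.20, 0.25  # video's claimed Nvidia margin layer

saving_low = h100_price - h100_price / (1 + markup_low)
saving_high = h100_price - h100_price / (1 + markup_high)

chips_needed = 30_000_000  # hypothetical internal chip count, illustration only
print(f"Per-chip saving: ${saving_low:,.0f} to ${saving_high:,.0f}")
print(f"At {chips_needed/1e6:.0f}M chips: "
      f"${saving_low*chips_needed/1e9:.0f}B to ${saving_high*chips_needed/1e9:.0f}B")
```

Even under these loose assumptions, roughly $5,800 to $7,000 saved per chip across tens of millions of chips lands in the hundreds of billions, which is the order of magnitude the video asserts.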
