
Tesla Terafab 2026: $25B AI Chip Moonshot Explained for Investors

Elite Automation · Published Apr 6, 2026 · 5:05 · 96 views


Description

Tesla just unveiled Terafab, a $20–25 billion joint venture with SpaceX and xAI to produce 1 terawatt of AI compute annually (logic, memory, and packaging), with 80% going to orbital data centers via Starlink/Starship and 20% powering vehicles, Optimus, and robotaxis on Earth. Malachi Greb from Evansville, IN, breaks down this 2026 moonshot: two specialized fabs plus a rapid-iteration fab near Giga Texas, starting at 100,000 wafer starts/month and scaling to 1 million (roughly 70% of TSMC's current global output from one site). Amid a $320B foundry market and severe bottlenecks (TSMC booked through 2028, HBM growing 80% YoY), Terafab simplifies process flows to enable aggressive redesigns. As Elite Automation's CEO, grinding factory hustles and heavy in TSLA, I see vertical integration ending supply limits and unlocking trillion-dollar growth, but execution risks are huge. Investors: this is existential for the Optimus and autonomy roadmaps.

Elite Automation SI & Services:

https://www.youtube.com/watch?v=xrNyjnTm6VM


Transcript

Auto-generated transcript (945 words)

Hey everyone, welcome back to the channel where we break down the biggest Tesla and AI developments with real data and straight talk. Today, we're going deep on one of the most ambitious projects announced in 2026: Tesla's Terafab. On March 21st and 22nd, 2026, Elon Musk formally unveiled Terafab, a massive joint venture between Tesla, SpaceX, and xAI. Musk called it the most epic chip-building exercise in history by far and the final missing piece of the puzzle. The goal: produce 1 terawatt of AI compute per year, counting logic, memory, and packaging combined. About 80% will eventually go to space-based data centers, with 20% staying on Earth to power Tesla vehicles, Optimus robots, robotaxis, and more. This isn't just another factory. It's Tesla's bold move to break free from supply chain limits and compete head-on with the giants of semiconductor manufacturing. Let's dig into the data on the current AI chip fab world, the numbers involved, and the revenue streams.

First, the big picture on AI chip manufacturing in 2026. The global foundry market exploded to about $320 billion in 2025, up 16% year-over-year, almost entirely driven by AI demand. Projections for 2026 point to another 17% to 25% growth. But here's the problem: severe bottlenecks everywhere. Leading 2-nanometer fabs cost $25 to $35 billion each to build. TSMC, with 38% market share, is fully booked on 2-nanometer through 2028, and its 2026 CapEx is $52 to $56 billion. High-bandwidth memory is growing 80% year-over-year, and it's still the number one limiter. Samsung's Taylor, Texas 2-nanometer fab is 93.6% complete and ramping mid-2026. Intel's 18A is in early stages, but delayed. The entire industry grows output by roughly 20% per year. Musk has been blunt: even if TSMC, Samsung, and everyone else maxes out, that meets only 2% to 3% of his companies' needs, which he puts at 100 to 200 billion custom AI and memory chips per year.
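[Editor's note] The "2% to 3%" figure implies a striking multiple of today's capacity. A minimal back-of-envelope sketch, using only the percentages quoted in the transcript (not any Tesla or TSMC data):

```python
# Sanity check on the "2% to 3%" supply-gap claim: if maxed-out industry
# output covers only 2-3% of the stated need, the need is 33x to 50x
# that output.
for coverage in (0.02, 0.03):
    multiple = 1 / coverage
    print(f"coverage {coverage:.0%} -> demand is ~{multiple:.0f}x maxed-out supply")
```

In other words, even the most generous reading of the claim puts demand at over thirty times what the entire maxed-out industry could supply.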
US electricity generation is only about 0.5 terawatts total. AI chips are power hogs, and scaling to Musk's vision requires gigawatts that simply don't exist on Earth without huge grid upgrades.

Now enter Terafab. Announced as a $20 to $25 billion initial investment, the project is sited in the Austin, Texas area near Giga Texas, but Musk made clear it will be far bigger than everything else combined: it needs thousands of acres and over 10 gigawatts of power at full scale. Musk described the structure precisely. Terafab will technically be two fabs, each making only one chip design. This simplifies the process flow massively, allows linear movement of wafers, and lets them redesign any rate-limiting machine on the fly. There's also a small advanced-technology fab for rapid iteration. The chips in focus: AI5 for edge inference in vehicles, Cybercab, and early Optimus, described as Hopper- or Blackwell-class performance; AI6 for higher-volume Optimus; and D3 chips optimized for space, which run hotter to cut radiator mass and will power massive orbital AI clusters via Starlink and Starship.

Production targets start ambitious: an initial 100,000 wafer starts per month, scaling toward 1 million at full capacity. That's roughly 70% of TSMC's entire current global output from one US site. Annual output: 100 to 200 billion custom AI and memory chips, delivering 100 to 200 gigawatts of terrestrial compute plus a full terawatt in space. Timeline: small-batch AI5 in late 2026. Cost details are eye-opening. Bernstein analysts estimate the full 1-terawatt ambition could require 105 to 126 modern 2-nanometer fabs, pushing total cost toward $5 trillion or more. Musk's own words drive it home: "We either build the Terafab or we don't have the chips, and we need the chips, so we build the Terafab." He added that Terafab can delete, simplify, or speed up steps aggressively. How does this tie into Tesla's bigger revenue picture? Perfectly.
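[Editor's note] The Bernstein figure can be cross-checked against the per-fab build cost quoted earlier in the transcript ($25–35 billion per leading-edge fab). A hedged back-of-envelope sketch, using only numbers from this video:

```python
# Cross-check: 105-126 fabs at $25-35B each vs. the "$5 trillion or more" headline.
fabs_low, fabs_high = 105, 126        # Bernstein's fab-count range for 1 TW
cost_low_bn, cost_high_bn = 25, 35    # quoted 2 nm fab build cost, $ billions

total_low_tn = fabs_low * cost_low_bn / 1000     # convert $B to $T
total_high_tn = fabs_high * cost_high_bn / 1000
print(f"Implied construction cost: ${total_low_tn:.1f}T to ${total_high_tn:.1f}T")
# Roughly $2.6T-$4.4T on fab construction alone; land, power, and equipment
# would push the all-in figure toward the quoted "$5 trillion or more".
```

The arithmetic is consistent: construction alone lands in the low trillions, so the $5 trillion headline assumes meaningful costs beyond the fab shells themselves.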
Terafab powers the multi-trillion-dollar streams: Optimus, AI chips as high-margin hardware, robotaxi networks, and energy storage. On X, reaction was explosive. Bulls called it game over for supply bottlenecks. Skeptics, including Tom's Hardware analysis, point to Herculean risks: execution, enormous CapEx, and power hurdles. Bottom line: Terafab is Tesla doubling down on vertical integration to dominate specialized AI inference at extreme volume and low cost, and it's existential for hitting the Optimus and autonomy roadmaps. What do you think? Is Terafab genius or too ambitious? Drop your thoughts in the comments, smash subscribe, and hit the bell for more deep dives. Thanks for watching, and let's keep understanding the universe together.
