← All videos

EXCLUSIVE: Elon Musk Announces Tesla’s BIGGEST Product Yet | Phil Beisel

Curious Pejjy Published Mar 22, 2026 1:13:10 6K views

Description

Phil Beisel X: https://x.com/pbeisel

Patreon: https://www.patreon.com/Curiouspejjy

"I Keep Buying Tesla Stock" Shirt - https://curious-pejjy.creator-spring.com/listing/i-keep-buying-tsla?product=369

Cybertruck Shirt - https://curious-pejjy.creator-spring.com/listing/i-ordered-ct?product=369

"I Bought The Dip" Shirt - https://curious-pejjy.creator-spring.com/listing/i-bought-the-dip-2023?product=369

Twitter (X): https://twitter.com/CuriousPejjy

"GO ALL IN" Merch: https://curious-pejjy.creator-spring.com/

Become a channel member: https://www.youtube.com/channel/UC-qg1WyA2FK45-fGcrZsYjg/join

Get your notes/popcorn ready because this is about to get interesting!

NOTE: This is only a prediction and NOT facts.

Give a LIKE if you enjoyed and don't forget to subscribe!

00:00 Trailer - What's To Come

1:06 Intro

1:52 Tesla Terafab - What This Means

7:38 How Fabs Are Built

8:54 Wafers

14:30 Why "Tera"fab (How BIG This Fab Could Be)

19:28 How Long Will Building A Terafab Take? (& How

Transcript

Auto-generated transcript (~10,000 words)

Elon Musk just confirmed that on March 21st Tesla's going to build a Terafab. >> The pinch point comes around 2029-2030, where there aren't enough chips to satisfy Tesla plus SpaceX products, or actually just Tesla products alone. Optimus will be the real driver of most of the chip consumption before the decade is out. So the only choice he has is to go build a fab. This will be a joint venture between today's Tesla and today's SpaceX. We're talking about a facility, at least if you think about it as a singular facility, to do 100 million chips. You're talking about something around 50 to 60 million square feet. So 5x, 6x the sizing. >> This is Elon's biggest challenge yet. >> This undertaking of doing Terafab is really bigger than almost all the things he's done already combined. >> Won't this take at least a decade? I don't know how many billions of dollars this is going to take, but is this going to take a long time? >> Yeah, I would suggest that this is a minimum of... >> Welcome back, everybody, to another episode of this podcast with the one and only Phil Beisel. For those of you who don't know who Phil Beisel is by now, I don't know what you guys are doing. He's ex-Rivian; he built Rivian's software, so he knows a thing or two about these things, about software, about this AI stuff, the tech, and reportedly Rivian is now trying to enter the FSD space with the software that you built. I recommend everybody to follow Phil because he has some very interesting insights. His X handle is on the screen, as you can see, and linked in the description. But Phil, it happened. We predicted it. We said it three months ago. And here we are: Elon Musk just confirmed that on March 21st Tesla's going to build a Terafab. >> Yeah. >> I mean, it's happening. And the crazy part is it's happening.
He didn't say next year or six months. He said in seven days, in a week. What does that mean? What's going on here? This is, I think, the last major bottleneck to Tesla scaling massively. >> Yeah. Well, I actually don't know what seven days means really, but let's at least talk about Terafab. We saw this coming. It was surfaced most publicly at the 2025 shareholders meeting, where Elon stood up and said, look, we really have no choice, we have to build a fab. It's an interesting thing. In some ways it's incredibly obvious, because look at the production of these larger-scale chips. There are many types of chips, but when we're talking about chips of this class, we're talking about the big stuff, like Nvidia chips and Apple SoCs, the parts that really take up a lot of the silicon, and of course the AI-class chips that Tesla produces are of that variant. I think in 2026 Taiwan Semi does about 90% of this class of chips in the world, the remaining 10% is Samsung downstream, and I believe their total output for '26 will be about 60 million of these types of chips. All right. So Elon's looking at this and he's thinking, 60 million is a very small number for me in the big scheme of things, right? He now has four consuming entities for these chips. He has FSD, which of course means two chips per car manufactured.
A good way of thinking about it: for every car that Tesla manufactures, there are two AI4 chips in that car today, and tomorrow, meaning a year or so out, two AI5 chips. So two per car. And there are 8 million Teslas on the road today, and with robotaxi production, meaning Cybercab production, that number is going to go up appreciably. So you've got that problem consuming chips, and that's the smallest of the bunch. You've got Optimus, and I claim there are two chips per Optimus as well. Some could debate that, but why do we have two chips in every Tesla? Fault tolerance. We want the system that is driving through intersections and avoiding other vehicles and pedestrians to survive in case there's some problem on chip one; it can fail over to chip two. And Optimus would really be no different, because Optimus will eventually be doing mission-critical tasks, like surgery. It is possible that they could build a one-chip variant, but ultimately I think there are two chips for the same reason. Then we have the entrant of space-based data centers, which is going to be a big deal, consuming millions upon millions of AI-class chips. I don't know particularly whether those would be AI6 or AI7; he's talked about that a little bit, but let's leave that out because it's a bit further out. It's certainly one of the AI-class chips; I think he's talked about AI7. And then we have this new entrant, digital Optimus. The type of inference required there is in the compute class of these AI4/AI5 chips. So not only would you use your dormant Teslas, the Teslas sitting in your garage, as distributed compute, but Elon claims that we'll be using Superchargers as distributed inference centers.
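The consumption math Phil walks through can be put in a back-of-envelope sketch. Every figure below is either the transcript's rough number (two chips per vehicle and per bot, TSMC's ~60M/yr of this chip class) or an illustrative guess for the late-decade scenario, not official Tesla data:

```python
# Rough chip-demand arithmetic from the discussion above.
# All inputs are the speakers' ballpark figures or illustrative guesses.

CHIPS_PER_VEHICLE = 2   # dual AI4/AI5 chips per car, for fault tolerance
CHIPS_PER_OPTIMUS = 2   # assumed same redundancy argument for the bot

def annual_chip_demand(vehicles_per_year, optimus_per_year,
                       datacenter_chips, supercharger_chips):
    """Sum the four consumers listed: cars, Optimus, space data centers,
    and distributed (Supercharger / dormant-fleet) inference."""
    return (vehicles_per_year * CHIPS_PER_VEHICLE
            + optimus_per_year * CHIPS_PER_OPTIMUS
            + datacenter_chips
            + supercharger_chips)

# Hypothetical late-decade scenario: 5M vehicles, 20M bots,
# 10M chips for space data centers, 5M for distributed inference.
demand = annual_chip_demand(5_000_000, 20_000_000, 10_000_000, 5_000_000)
print(demand)                # 65000000
print(demand > 60_000_000)   # True -> exceeds the ~60M/yr quoted for TSMC
```

Even with modest made-up inputs, the total clears the quoted external supply, which is the pinch-point argument in miniature.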
So now there's another pull on these chips, right? We've got four customers, you could say: two on the Tesla side and two on the SpaceX side, or maybe three on the Tesla side. I mean, ultimately it's all going to be on the SpaceX side, because this is leaning in towards a big merger of these companies. But >> yeah. >> So the problem here is that if you look at what Taiwan Semi and Samsung are doing and how fast they're growing, they're not growing fast enough. If Elon wants 100 million chips, and that might be conservative in his total run here, he can't get 100 million chips out of Taiwan Semi in a year. It's impossible. So, I did a post back when, and Elon made a comment on it, and I said the pinch point comes around 2029-2030, where there aren't enough chips to satisfy Tesla plus SpaceX products, or actually just Tesla products alone, because of the volume of Optimus expected at that time. Optimus will be the real driver of most of the chip consumption before the decade is out. So this is extraordinary. And building a fab, I think we all know, is literally one of the hardest things you can do in manufacturing. >> Yeah. >> The way fabs are built: they house these incredibly large lithography machines that etch circuitry onto silicon wafers. You start with silicon wafers, and these wafers are etched with chips; depending on the size of the chip, that's how many you get. Well, the size of the chip and then the yield; we can talk about that in a minute. And these wafers must be processed in a clean-room environment. Basically, the air inside a fab is about a million times cleaner than in a surgical center in a hospital.
>> So now you've got this problem, which is that the reason these fabs cost so much is that they have to be environmentally isolated. Not only do they need the clean-room approach, but they also need to be earthquake-safe, or ground-movement-safe. And there's a bunch of things that affect the ability to get the yield. Ultimately it is all about throughput and yield. Actually, maybe we should talk a little bit about that. Let's talk about the wafer idea. You've got these large wafers, and depending on the size of the chip, the lithography machine will etch out a number of them. A full imaging of ultraviolet light that creates that etching is called a reticle; it's like a unit of imaging. And what you want to do for your chip is fit it within one reticle. Think about this: an Nvidia-class chip, and these are monster chips, like the Blackwell, is an 800-square-millimeter chip, and the reticle is about 858. So it's literally fitting very closely within the dimensionality: the reticle is about 26 millimeters by 33 millimeters, which is 858 square millimeters, and the chip is about 800, right? And when you have a Blackwell environment, you actually have two chips that interconnect, because that's how it operates, but each is imaged separately, within one reticle each. In other words, you can't bleed over that area. You can't build a 900-square-millimeter chip without a bigger lithography machine, which I guess doesn't exist. At this point, that bounding limit is about 858 square millimeters. Ultimately, what you want to do is... so now you etch these chips.
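The reticle numbers quoted above are easy to sanity-check. 26 mm × 33 mm is the standard exposure-field size Phil cites; the ~800 mm² Blackwell figure is also his:

```python
# Reticle-limit check: a die must fit within one exposure field.
RETICLE_W_MM, RETICLE_H_MM = 26, 33
reticle_area = RETICLE_W_MM * RETICLE_H_MM   # exposure field in mm^2

def fits_in_one_reticle(chip_area_mm2):
    """A chip larger than the field cannot be imaged in one exposure."""
    return chip_area_mm2 <= reticle_area

print(reticle_area)               # 858
print(fits_in_one_reticle(800))   # True  -> a Blackwell-class die fits, barely
print(fits_in_one_reticle(900))   # False -> would need a bigger lithography field
```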
You go through the entire layering process; it's sort of like layers of cities that are ultimately laid down here, because they're dense, they've obviously got some height to them, so to speak, a multi-layer highway of circuitry. When you get this thing out, you cut the dies out of the larger wafer and you test them; some fail. They just fail and have to get tossed. And what they normally do is level them out. If you go back to the CPU days and think about how CPUs were sold, like, this is my 3.2 GHz CPU, this is my 3.0, this is my 2.6; that's all about yielding. Basically what you do is you test the chips at the highest gigahertz rating you expect them to run at, and only a small number pass that, and you hold those aside and say, okay, those are my 3.2 or whatever. Then you retest the batch at a lower gigahertz rating and get another yield, and eventually you bucket them. From a marketing perspective you want maybe three classes: 3.2, 3.0, 2.8, whatever. And then you'll have some chips that just don't work, and they're thrown away. That quantifies yield. So, what Elon wants to do with, let's take AI5 for example: AI5 is what's called a half-reticle design. What that means is that when the lithography machine images it, you get two chips per image. Okay? So hopefully he gets a yield of two; he's producing two chips per imaging cycle onto that wafer. Think of it as grinding them out.
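The speed-binning process he describes can be sketched as a small loop: test at descending clock targets, drop each die into the highest bin it passes, scrap the rest. The frequencies and test results here are made up for illustration:

```python
# Frequency binning: each die lands in the highest speed bin it passes;
# dies that fail even the lowest bin are scrapped. Illustrative data only.
def bin_dies(dies, bins):
    """dies: {die_id: max_stable_ghz}; bins: GHz targets sorted high to low."""
    result = {ghz: [] for ghz in bins}
    scrapped = []
    for die, max_ghz in dies.items():
        for ghz in bins:
            if max_ghz >= ghz:
                result[ghz].append(die)  # highest bin the die passes
                break
        else:
            scrapped.append(die)         # fails every bin -> thrown away
    return result, scrapped

tested = {"d1": 3.3, "d2": 3.0, "d3": 2.9, "d4": 2.5}
binned, scrap = bin_dies(tested, [3.2, 3.0, 2.8])
print(binned)   # {3.2: ['d1'], 3.0: ['d2'], 2.8: ['d3']}
print(scrap)    # ['d4']
```

Yield, in this picture, is simply the fraction of dies that land in any sellable bin.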
And it's important to fit: you don't want to be three-quarters of a reticle, because you basically have two choices; you're either producing two chips or one chip per image with these more complex, higher-transistor-count chips. So, in the case of AI5, he obviously wants to get yield up as high as possible, because you go through the process on one wafer with multiple images. If you can get two per image, and you get good yield too, then you get a lot of chips, right? If it were a one-reticle design, he could only get one per image, and so the output would be lower just by dimensions. So the goal is, I remember when he talked about AI5, he said, with AI5 we made a better-class inference chip than AI4. You can quantify what that means, but they also did things like this: it had a core GPU section, it had a core image signal processor, and they tossed those overboard. They got rid of those things because they wanted to produce the densest-class inference chip in a half-reticle design. Because think about it that way: if he went slightly over that half reticle, he would end up with literally about half the number of chips. And that hurts. So now, if we zoom out, he's stuck with this problem, and the problem is: even if he were the exclusive customer at Taiwan Semi, and we think about the 2026 output of 60 million-plus chips, >> I still want more. I still need appreciably more, not just a little bit. So the only choice he has is to go build a fab. >> And I also did some other interesting math. This is fun. >> So why does he call it Terafab? Well, he's labeled things giga for a while, right?
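The half-reticle trade-off is just multiplication: exposures per wafer, times dies per exposure, times yield. The exposure count and yield rate below are illustrative placeholders, not real process data:

```python
# Why a half-reticle design matters: good dies out per wafer.
# 80 exposures/wafer and 70% yield are illustrative numbers only.
def good_dies_per_wafer(exposures_per_wafer, dies_per_exposure, yield_rate):
    return exposures_per_wafer * dies_per_exposure * yield_rate

full_reticle = good_dies_per_wafer(80, 1, 0.70)  # one die per exposure
half_reticle = good_dies_per_wafer(80, 2, 0.70)  # two dies per exposure
print(full_reticle, half_reticle)  # 56.0 112.0 -> halving the die doubles output
```

This is the arithmetic behind "slightly over half a reticle costs you half your chips": crossing the half-reticle boundary drops `dies_per_exposure` from 2 back to 1.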
Gigafactory, where giga is the prefix for billion, and tera means trillion. >> Trillion. Yeah. >> So it's a thousand times bigger than the giga variant, and you can infer why he calls it Terafab. But irrespective of that, I did some math on sizing. Let's talk about the sizing of these fabs today. Let's see if I got these numbers right. I believe for Elon to get a yield of 100 million, and this is sort of his low end, but if he turned on Terafab and got 100 million chips in a year, I think he'd be a pretty happy >> happy guy. >> That would require somewhere between 14 million and 17 million square feet of clean-room space. >> Yeah, it's a big space. >> Now, when we say clean room, we need to describe that a little bit. In a fab, a portion of the space is clean room, because we need absolutely clean air for these wafers and the entire processing of them. The clean room is known as the ballroom space; it's like the exclusive portion of the total space, and it is usually, give or take, a quarter of the space. So if Elon wanted a typical fab, and we're going to talk about that in a second, he would need roughly 15 million square feet of clean-room space to run 100 million chips in a year. And based on total sizing, a fab usually has a clean room of about one quarter of the total space, so that would be a 60 million square foot facility. >> So now you know why we're starting to call this Terafab, because if you look at the Gigafactory in Texas, the single-building structure I think is coming up on 11-plus million square feet.
So we're talking about a facility, at least if you think about it as a singular facility, to do 100 million chips; you're talking about something around 50 to 60 million square feet. So 5x, 6x the sizing. Now, being reasonable about this, I think Elon wants to look at this and take a fresh perspective, and he's mentioned this. He said, we can't possibly build a fab with that sizing of clean rooms. We need to be in a place where, his words, you can smoke cigars and eat cheeseburgers on the fab floor. And what he meant was we need something that goes beyond the typical clean-room design. And that is what is known as encapsulation. What that means is that when you take the silicon wafers in, you shuttle them in a system of what are essentially little vacuum-sealed, clean-air capsules. Those capsules enter the lithography machine and the rest of the process, and inside the lithography machine itself it's effectively a clean room. There's a name for this; I actually wrote it down because it's not familiar to me. These are called FOUPs; I don't know the pronunciation of the acronym, but it stands for front opening unified pods. So again, you're taking the wafers, sticking them in these little pods, and shuttling the pods around. The pods are effectively clean inside, and then they enter units, for example the lithography machine, that are effectively clean themselves, and therefore the wafers can theoretically shuttle around the factory in a non-clean-room environment. So I think his approach is going to have to be a bit more first-principles. That's how he thinks anyway, right?
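The fab-sizing arithmetic above reduces to two ratios. The clean-room fraction, the ~15M sq ft clean-room estimate, and the ~11M sq ft Giga Texas figure are all the speakers' rough numbers:

```python
# Fab-sizing arithmetic from the discussion; all inputs are the
# speakers' estimates, not measured facility data.
CLEANROOM_FRACTION = 0.25        # clean room ~1/4 of total fab floor
GIGA_TEXAS_SQFT = 11_000_000     # ~11M sq ft single-building structure

cleanroom_sqft = 15_000_000      # ~15M sq ft for ~100M chips/yr (quoted range 14-17M)
total_sqft = cleanroom_sqft / CLEANROOM_FRACTION

print(int(total_sqft))                         # 60000000 sq ft total
print(round(total_sqft / GIGA_TEXAS_SQFT, 1))  # 5.5 -> roughly 5-6x Giga Texas
```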
And say, how do we redesign a fab in its general principle to produce something that's higher yield for the effective square footage of the facility? Because it's impossible to make 100-million-square-foot facilities by snapping your fingers, right? >> No, it's crazy, because when you think about it, it's so much space. I'm pretty sure this is going to be a very expensive investment. The question is how soon they can do it. I mean, Elon was talking about March 21st, which from the recording of this video is about four or five days out. What does that mean? Does that mean it's already been under development? Are they going to start building it out? Because in a recent interview Elon was talking about starting out with a small fab first. >> Yeah, they're going to start with that, see how it goes, and then go for a big one. >> But won't this take at least a decade? And I don't know how many billions of dollars this is going to take, but is this going to take a long time? >> Yeah, I would suggest that, at a sizing yielding 100 million chips a year, this is a minimum of a $50 billion investment. >> Like, I couldn't figure out a way to make it smaller than that.
If you think about the sizing of this thing and the timelines: $50 billion on up, tens of millions of square feet, and I would say 3 to 5 years till you get yield out of this thing. And when I say that, I think that's aggressive, but Elon-style aggressive. In other words, there's no doubt he's sitting down with his design team and thinking, how do we speedrun this? He can't possibly be thinking in a decade model, because he's going to hit this pinch point at the end of this decade. So he's only got four-plus years of runway through the other fabrication vendors. Look, these are realities here. He's not deciding to go do a Terafab because it's fun and exciting; he's doing it because it's necessary. >> It's a massive undertaking. In the world of big things he's done, this is probably the biggest, at least on the Tesla side for sure. And yes, he did that interview, I forget which, I think it was the cheeky interview, where he said, how do you build a big fab? You start by building a small fab. >> I think what this seven-day thing means is it's a bit tongue-in-cheek. I don't think seven days has any real relevance, but I think what he's saying is: I am formally kicking this off; we have come up with a structure. I claim, by the way, and this is my claim, a lot of people think this way, but we have no confirmation, that this will be a joint venture between today's Tesla and today's SpaceX. >> Yeah. >> Yeah.
I kind of think, in a sense, that because SpaceX is a large consumer of these chips, and because SpaceX will IPO and collect 50-plus billion dollars in the cheapest capital model possible, the IPO, that's probably where a decent amount of the capex investment is going to come from. Because Tesla has done its capex budget for '26, and we know, I don't know what the number is, but it's like $20 billion or so, and this project is not included in that; they've said it out loud. >> So I think what they have agreed on, or he has directed, is that he's got a formally budgeted project, and now he has to start the process of staffing it with the right set of fab-manufacturing folks who can do that side of it. I mean, chip design is off and running; that's a separate track, because chip designers send things out to the fab. Obviously there will be some collapsing of efforts between chip design and fabrication; what we know about Elon is that, if you look at Cybercab, it's the ultimate design-for-manufacturing vehicle, right? >> Yeah. >> So I guarantee that chip design and fabrication of chips under Elon's governance will be a very tightly knit thing. But it's clear that he needs to go off and hire a boatload of people who know chip fabrication, from the science all the way down to the manufacturing buildout of the facilities. So I think what we've got is a formal kickoff of this. And what we will hear next, maybe informally, possibly formally, but usually informally through some X post, is that they're building some mini facility, because that's what I think is going to happen first. They're going to build. >> Yeah.
>> They're going to agree on an approach, which I think will be this whole non-clean-room clean-room approach, and stuff I don't even know about, honestly. You know, I'm a software guy, so everyone listening, please understand: when Phil's talking about chips, he barely knows what he's talking about. But he will decide on an approach and then build a prototype of that fab, because there's no way he will know if his approach operates at scale or meets his speed requirements without building a prototype. And that prototype I would expect to be no less than a 100,000-to-200,000-square-foot facility. Just a guess, but it has to be kind of a fully encased prototype; it must be capable of a silicon-in, chip-out kind of model. >> And it makes sense for them to make that small facility and, as he said himself, make small mistakes here and there, figure it out. Once they know how it works, they can go ahead and make the big one. >> This is going to be extremely... I mean, look, he's done some big things, but the big things he's dipped his toe into, like the 4680 cells >> 4680 >> and lithium refining, that whole vertical integration, took way longer than I think he expected. It's a very difficult thing. If you want to duplicate the current model of how a fab operates, think of it as a straight-line approach, that's a long timeline alone. If you want to innovate to create scale, the thing you may give up is time, and what he doesn't have is time. >> Time. >> So he's got to innovate on time and scale at the same time, right?
I mean, so it's going to require some clean thinking, so to speak, or first-principles thinking, because he can't reinvent how chip fabrication works, say "I got it," and show up with something ten years later. His problem hits him head-on by the end of this decade. As far as I know, other than having low demand for your products, which is not the goal here, there is no way around this logjam. >> This is Elon's biggest challenge yet. Let's see what the timeline of this is going to be. But if you had gone back to 2020, 2021 and said that Tesla was going to build a Terafab and get into the chip business, no one would believe you. No one would believe this at all. They would go, "Oh, no, they're going to focus on the cars, and then full self-driving, and then that stuff." Then came Optimus. Now it's, "Okay, with this Optimus stuff, we need as many chips as possible, because if we can't get chips, we can't scale." And so now we're here with this Terafab thing that's going to start in a couple of days, a few days, or maybe tomorrow, depending on when this podcast comes out. But yeah, this is Elon's biggest challenge yet. And I'm just curious to see how this is all going to play out, because Elon is not going to allow this project to take 10 years to build. He's going to >> for sure find a way to get this done within the next five, maximum six, years, to get this thing up and running. So let's see what he has up his sleeve. >> You know, I wanted to bring up two things Elon has spoken about: one is vertical integration, and two is the limiting factor. >> Yes.
Elon's talked about vertical integration more than anyone; he vertically integrates from lithium refining all the way up >> and software. >> If you hear him speak about it, he often says: why do you vertically integrate? Well, you get control, you get the best pricing, your BOM cost of product is diminished, but in some cases you also just have to do it out of pure necessity. And Elon has spoken about that several times: I just wish there was a supply chain that produced the types of products I wanted, so I could just buy them and be done with it and give up a few margin points. But in many cases he finds that in order to meet his scale requirements, vertical integration is the only way to go. The other insight we got: someone asked him in one of these interviews, and boy, there are so many interviews that I lose track of where this was said, it might have been the cheeky interview as well, how does he spend his time? And he said, I focus on the limiting factor. What he means by that is this: in enterprises the size of Tesla or SpaceX, there are going to be teams that fail for one reason or another. Either the project has the wrong leadership, the project's too hard, it's understaffed; there's a multitude of things that can lead to failure. I think we have some examples of that in Tesla's recent past, like the Dojo chip program that was trying to do the data-center-model chips, and that was a fail. And then Elon reworked that and focused mostly on the inference-side chips and the future that way, instead of trying to worry about the entire architectural housing. Sometimes it's the team.
But what I think Elon's saying is that the limiting factor is the thing he needs to put his time on, and that is the very thing that is going to be the pinch point in some product cycle. He's going to look at something and say: either the yield of this thing, or the time it's going to take to come to fruition, is going to pinch me, and I'm therefore going to put my energies there. So what we saw this past fall was Elon spending an enormous amount of time on this; you could tell by his post velocity. >> He talked about the AI5 chip a hundred times. >> And he was saying, I'm spending all my Saturdays on it, >> you know, and it'd be right up the street from me here, where those Saturdays were spent. It's always exciting for me to think about that, right? The brain of Elon >> is right here, >> not too far away, when it comes to software and hardware. Anyway, long story short, that was the limiting factor. That was the thing where he was saying: I need to get this AI5 thing cleaned up and taped out. That's the word. And if that word isn't familiar to folks: when you design a piece of silicon or a PCB, you have this process that people call taping out, which goes back to when you literally put the design on tape and sent it off to the fab. Obviously that stuff is transferred electronically today, so it's kind of an old, somewhat meaningless term, but he needed to get to this tape-out point, which effectively means: it's out of my hands, it's off to fabrication. And AI5 was not getting the love, because I think a lot of the chip design was split between Dojo and AI5 at the time, and so he kind of came in and said, "Okay, there is no more Dojo."
We're going to exclusively build out the family of inference-style chips: AI5, AI6, and so forth. And he got serious about that and dug in himself. I would guess that it's been taped out by now; we haven't heard, we thought it was going to be end of year or January, and I suspect it's off to fabrication now. And it takes a while from there until the chip is through the fabrication process. It's one thing to fab a chip. >> So you have a design, it gets finalized, it gets taped out, it gets sent to the fab. The fab will set up the masking on these lithography machines and build your first set of chips. They'll get some yield, they'll run some early tests, and then they'll send them back to the chip design team to basically bring up the chip, boot the chip, and run it through all the tests, because there could be reasons beyond yield that it could fail; it could just be designed incorrectly, and it might have to go back for a second turn. And every time you turn one of these chips, you're talking about three-to-six-month intervals, so you don't want to do that; you want to make sure you can do good testing and simulation before you tape out. You know who's really good at this? Apple, right? >> Apple. >> Apple starts a new A-series chip for iPhone, or potentially M-series for MacBook, practically every February and delivers in October of the next year, right? Because of the >> iPhone cycle. >> Yeah, they probably start the chip design a bit earlier, but let's call it January. And then they've got to have yield at quantity to meet consumer demand by mid-October at the latest. Right? >> Yeah.
>> And, I mean, Apple's silicon team is quite excellent. Obviously, we all know that silicon is almost the shining star at the company at this point, and they've got this down really, really well. So Elon needs to be the next shining star here. >> Yeah, I'm seeing some on X saying this could be a threat to TSMC, Nvidia, Samsung chips. I don't see it that way, because every single chip that Tesla or SpaceX is going to make, depending on when the merger happens, they're going to use themselves. They're not going to sell it to anybody else. >> Yeah, that's right. From a direct-threat standpoint, these chips are effectively Tesla property today, if we look at the organizational structure. So if they're selling to anybody, they're selling to SpaceX. >> Yeah. >> And only SpaceX. Internal consumption at Tesla, and if there's any external customer it's SpaceX, and it stops there. So are they competing with, let's take the obvious vendor, the largest vendor of inference chips, Nvidia? They're competing only insomuch as they're filling the slot that an Nvidia chip might otherwise take in a bot or a car. >> But Tesla's doing a vertically integrated package here, so in this regard they're simply not a customer of Nvidia. They are for training, Cortex and so forth, but there's no competition in that regard. They will not be selling chips externally. They might be selling compute. What I mean by that is, if they build out a massive data center in space as a SpaceX property, that's inference, and who consumes the inference? Maybe it's exclusively consumed by a digital Optimus, >> but maybe it's general inference, right?
Because at this point, the chips we're talking about are sort of general inference chips. They're really focused on that. They're high-end-class inference chips that rival the Nvidia-class chips here, so they compete for the same compute flows. In that sense they're competitive. >> We need to divide up training and inference, and we should at least >> explain that. Training, of course, is building the models; inference is the runtime. Everybody's use of AI is effectively inference. When you're in a car and it's driving you with FSD enabled, that's inference. When you have an Optimus bot doing its thing, that's inference. Inference is also used a lot within training today: if you take Cortex, the data center for training FSD and ultimately Optimus, there are a lot of Nvidia GPUs in there building the model, but there are also a lot of AI4 boards running reinforcement learning, basically trying to mimic the car's behavior as part of the training cycle. But generally speaking, we should divide it into training chips and inference chips. Tesla and xAI use a lot of Nvidia chips for the training domain, but inference itself is unbridled. In other words, if you build a good model, you build that model once, and then inference is every possible use of that model. That means training might be one-millionth of the total compute. I made that number up, but the point is that inference runs to infinity. >> Right. >> Unless you want to build a new model, which, by the way, you're always doing anyway. So, for example, when SpaceX launches a data center in space, that will be inference, not training, because >> I'm not saying it can't eventually get there, but training cycles require coherency.
What that means is that you're taking all these GPUs and building a supercomputer out of them, and they all have to run the training batches together as one unified compute process. Inference is: go off to this chip, do a task, come back, for every user out there. When you ask a question in Grok or ChatGPT, that's going off to a couple of GPUs and doing its thing. So that's an easier deployment if you're putting thousands to a million satellites in space, because you don't have to worry about building a coherent training cluster. You're just setting it up for, you know, earthling use of AI. >> Oh boy. Data centers in space. That's something my mind can't quite comprehend, but it's going to be a very interesting thing to see when it happens. When do you think that's going to happen? Quick question here: when do you think they're going to have data centers in space, specifically SpaceX, of course? >> Instead of giving you a number... I mean, I will give you a number. I'll say not before five years from now. >> Five years from now. >> Okay. A minimum timeline, maybe. Instead of picking a time point, I usually look at the limiting factors. Data centers in space are 100% reliant on Starship. Starship is the only economically capable launch vehicle to get that many satellites up. Remember, when we talk about a data center in space, and maybe this should be said because it wasn't: a data center in space is a collection of satellites, much like Starlink. >> It's a higher-orbit set of satellites. They're different from Starlink satellites, but they're the same in that they operate together, because Starlink satellites actually talk to each other through laser links. They transfer data.
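Stepping back to Phil's training-versus-inference economics: training is a one-time cost per model, while inference cost accumulates with every request. A toy sketch of that crossover, with entirely made-up compute figures (Phil explicitly made his ratio up too), assuming we just divide the two costs:

```python
# Toy model of the training-vs-inference point: training compute is paid
# once per model, inference compute scales with usage forever.
# All FLOP numbers here are illustrative placeholders, not real figures.
def queries_until_inference_dominates(train_flops: float,
                                      flops_per_query: float) -> float:
    """Number of inference requests at which cumulative inference
    compute equals the one-time training compute."""
    return train_flops / flops_per_query

# Example with round, assumed numbers: a model costing 1e6 units to
# train, served at 10 units per query, breaks even after 1e5 queries.
breakeven = queries_until_inference_dominates(1e6, 10)
print(breakeven)
```

Past that break-even point, every additional query pushes lifetime compute further toward inference, which is why a fleet of cars and bots running a model continuously dwarfs the cost of building it.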
So, for example, if you think about a circumstance in a place like Iran today, or over oceans, Starlink has to talk to ground stations. And in cases where those ground stations are hundreds of land-based miles away, the satellites have to transfer data among themselves: you're talking to a terminal on the ground, and that satellite has to talk to another satellite. They use these optical laser links, I think they're called ISLs, inter-satellite links, to transfer the data. The satellites that will be launched to build this grid that is the data center could number as few as a hundred or as many as a million. It doesn't matter what the unit is; think of them as one large grid. They will all be talking to each other and then down through Starlink to get data to Earth. Think of it as one set of satellites up here that has to communicate with another set of satellites that actually sends data back down to Earth using the other type of linkage. So Starlink is a required portion of this fabric: you can't do space-based data centers without being able to communicate with Earth, and that's Starlink. So there'll be hundreds or thousands or millions of these satellites, and the only way to do this where the numbers start to make sense, where you can run the math and say, "Oh, space-based data centers make sense," is through Starship. And Starship's current limiting factor is reusability. So Elon has to check off the boxes, right? You have to check off the box of getting Starship to reusability status, much like Falcon. Once it gets to that kind of launch cadence and effective reusability, all of a sudden it becomes economically viable and volume-feasible to put a data center in space. So that has to happen.
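The relaying Phil describes, hopping data across laser links until a satellite with a ground-station (or Starlink downlink) connection is reached, is at heart a shortest-path problem over the link graph. A minimal sketch with a hypothetical four-satellite topology; real ISL routing is far more dynamic than this:

```python
from collections import deque

# Minimal sketch of multi-hop relay over inter-satellite laser links
# (ISLs): breadth-first search finds the fewest laser hops from a source
# satellite to any satellite that currently has a downlink. The topology
# and satellite names below are hypothetical.
def hops_to_ground(links: dict[str, list[str]], start: str,
                   has_downlink: set[str]) -> int:
    """Return the minimum number of laser hops from `start` to any
    satellite in `has_downlink`, or -1 if no path exists."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        sat, hops = queue.popleft()
        if sat in has_downlink:
            return hops
        for neighbor in links.get(sat, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return -1  # isolated from every downlink-capable satellite

# A chain of four satellites; only "D" currently sees a ground station.
links = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
print(hops_to_ground(links, "A", {"D"}))  # 3 hops: A -> B -> C -> D
```

The same structure applies whether the grid has a hundred nodes or a million; only the graph gets bigger.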
Secondly, a whole design has to occur for this next-generation satellite, from the design of the satellite itself to how it houses the chip, how it collects energy (solar, obviously), effectively a replica of how a Starlink satellite operates today. And then you've got to build a chip that operates effectively in space, and I think Elon has effectively said that's AI7. So look at right now: AI5, call that taped out. Think about a one-to-one-and-a-half-year iterative cycle between generations. You've got AI5 coming online in '27, AI6 around '28 and a half, and you'd be somewhere in 2030 before AI7 got to a class where you had available yield to put it on a satellite and into space. So I can't see anything less than five years. >> Seeing those happen within five years is going to be very interesting. But I mean, 2026 to 2031... I think the 2030s are going to be the roaring '30s for Tesla, or SpaceX by that time, >> when everything comes together and they've cleared pretty much all the bottlenecks, if there isn't a World War III. Let's knock on wood on that. If there isn't, I think everything is going to be syncing very nicely with all the vertical integration that Tesla has done, with Elon and SpaceX. It's going to be a very fun, very interesting, very fruitful decade. I can say that. >> I had a really good talk the other day with CERN about the buildup. And what we're seeing is this incredibly massive buildup across Elon's properties, putting these large, large pieces in place. If you think of it as a puzzle, he's moving a lot of puzzle pieces, and these are big, big things. In some cases, think of his chip business being effectively bigger than Nvidia's.
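Phil's timeline is a simple linear extrapolation: AI5 online around 2027, one to one and a half years per generation, so AI7 lands around 2029 to 2030. A sketch of exactly that arithmetic, using only the numbers stated in the conversation:

```python
# Phil's back-of-envelope chip cadence: AI5 online ~2027, each new
# generation takes roughly 1 to 1.5 years. The generation numbers and
# base year are taken from the conversation; nothing here is official.
def generation_year(base_gen: int, base_year: float, target_gen: int,
                    years_per_gen: float) -> float:
    """Linear extrapolation of when a future chip generation arrives."""
    return base_year + (target_gen - base_gen) * years_per_gen

fast = generation_year(5, 2027.0, 7, 1.0)  # optimistic 1-year cadence
slow = generation_year(5, 2027.0, 7, 1.5)  # slower 1.5-year cadence
print(fast, slow)  # 2029.0 2030.0
```

That spread is why "not before five years from now" falls out of the cadence assumption rather than any specific roadmap.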
I mean, even if it's internally consumed, right? Think about his manufacturing footprint for Optimus being bigger than his current manufacturing footprint for vehicles. Think about what has taken place to get this unboxed manufacturing process to cycle the vehicle known as Cybercab at the production rate he's looking for, whatever it is, ten seconds, multiple lines. The point is, he's lining up these incredibly large pieces that, once they all come together, give you what you suggest, that roaring-'30s effect, where the things he's been talking about start to get dumped out into the market. And it's funny, because we're also in this apprehensive phase among Tesla investors. People are very concerned: where is Cybercab? Where is robotaxi? Is it operating at scale? What's happening? I said this before: 2026 is the first big building year. I never expected 2026 to have massive robotaxi deployment, but I think we are lining up the big pieces for robotaxi to hit that scale button. And when I say that, we're very close, right on the bubble of getting there, but those are big pieces. You've got to get this autonomy AI absolutely squared away until it's 10x safer than a human driver, which I think we're extremely close to. And you've got to get the robotaxi service platform in place. People don't even think that's a thing, but it is. Robotaxi isn't a vehicle and FSD. Robotaxi is a vehicle, FSD, and an entire service platform.
That entire platform has to exist to source rides, manage demand, and do all those other pieces. And that includes the consumer app, the robotaxi app for iPhone and Android and so forth. Those things are considered the easy parts now, but they're not; they took years for Uber to build. If you think about it, what is Uber? Uber is just the platform. So Elon's building the platform, plus the physical vehicles that operate on that platform, plus the operational software that runs the vehicles. Those are three very, very big things to approach a product, or a service in this case, that meets the scale requirements and eventually pushes the price per mile down toward zero. That takes time, and not everything will be perfect. When you go into the engineering discipline, it is not as if you can plan these outcomes with precision. A lot of it is how you build a rocket: you build one, you launch it, it blows up, you figure out what went wrong, you fix it, you try again. It blows up again, but now for a different and hopefully smaller reason, and you iterate until you get to some point of market perfection. That's where we are right now with robotaxi: we're close, but we're not exactly there. And yet, in the scheme of things, that's a smaller part than the parts we just talked about. We started the conversation about Terafab, and this undertaking of doing Terafab is really bigger than almost all the things he's done already, combined. >> That's absolutely right. Talking about the Terafab again.
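Phil's "three very big things" decompose the robotaxi product into layers. A small structural sketch of that decomposition; the layer and subsystem names are illustrative labels for what he describes, not Tesla's actual architecture:

```python
from dataclasses import dataclass, field

# Sketch of the three layers Phil describes for a robotaxi service:
# the physical vehicle, the autonomy stack, and the service platform.
# Subsystem names are illustrative, not an actual Tesla design.
@dataclass
class RobotaxiStack:
    vehicle: str            # the physical car, e.g. "Cybercab"
    autonomy: str           # the unsupervised FSD build running it
    service_platform: list[str] = field(default_factory=lambda: [
        "rider app (iOS/Android)",    # consumer-facing interface
        "ride sourcing / matching",   # pair riders with vehicles
        "demand management",          # pricing and fleet positioning
        "fleet operations software",  # charging, cleaning, incidents
    ])

stack = RobotaxiStack(vehicle="Cybercab", autonomy="FSD (unsupervised)")
print(len(stack.service_platform))  # 4 subsystems beyond car + autonomy
```

The point of the decomposition is that the third layer is the one Uber spent years building, and it has to exist before the other two can be sold as a service.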
So, when Elon made this announcement that it's going to start in seven days, many long-term Tesla investors were like, "Yeah, we knew this was going to happen. We knew it's necessary for scaling, for having that roaring 2030s." But there were some saying, "My god, the stock's been flat for six years and now we have another project," and that reads as a negative, because Elon did say there's going to be $20 billion of capex this year, >> and that's excluding the Terafab. >> Now, the Terafab, as we were talking about, the investment could be as high as $50 billion. Of course, this is most likely going to be a joint venture among SpaceX, xAI, and Tesla, so the burden won't be all on Tesla. But in the short term, some of these naysayers are going, "Another couple years of flatness." >> Yeah, I don't think that's... >> What do you think about that? >> Well, I don't think it's accurate. I think robotaxi is a six-month problem. >> That's right. Yeah. >> Maybe full scale starting in '27, but we're going to see some big scale happen this year for sure, and things are lined up. The AI4 chip is sufficient for it; that's been proven. Cybercab is pretty much at start of production. I think we've seen a hundred cars come off the line, and that is a very, very good sign. The big milestone for any vehicle program is SOP, start of production, and I believe that will hit in April, on timeline, because we have a lot of evidence that it's very close.
Regulatory is certainly an issue, and we see some really good stuff happening in the US at the federal level, but irrespective of that, we know there are plenty of markets that will accept autonomous vehicles, in the US at least, up to the scale point Tesla could possibly hit, and those are all markets Waymo is in. So if Waymo is there, Tesla can be there too. And then, lastly, we have the AI, FSD itself. Effectively, robotaxi AI is unsupervised FSD, right? So what's the holdup there? A lot of people are asking. My sense is that I wouldn't be at all surprised if I walked inside Tesla and found a list of 25 to 30 problems that are quasi- or exclusively FSD issues they're trying to nail down before they release this thing into the wild. They've got, call it one to eight, fully unsupervised robotaxis operating in Austin, Texas today, but we know FSD has a long tail of problems, and I think a lot of the problems they have right now are not about safety but about service delivery. What I mean by that is pickup and drop-off, and getting that right, because that's a very complex and strange problem; it isn't just a perfect-AI problem, it's a lot of other stuff too. So I believe there are probably 25 to 30 items being knocked off this list, probably at two or three a week. It's slow, but that's why instead of seeing a January or February launch, we're sitting here in March. But remember, it's only March. It's not eons from February; it's really only a month-plus out from when people thought it was going to start to scale, in Austin particularly. So, could that happen in April with Cybercab? Yeah.
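Phil's estimate here is simple burn-down arithmetic, and worth making explicit: both the issue count (25 to 30) and the fix rate (2 to 3 per week) are his speculation, so the sketch just divides one by the other:

```python
# Back-of-envelope on Phil's guess: a list of 25-30 open FSD/service
# issues, burned down at 2-3 fixes per week. Both numbers are his
# speculation, not anything confirmed by Tesla.
def weeks_to_clear(open_issues: int, fixes_per_week: float) -> float:
    """Weeks to empty the issue list at a constant fix rate."""
    return open_issues / fixes_per_week

best = weeks_to_clear(25, 3)   # fewest issues, fastest rate
worst = weeks_to_clear(30, 2)  # most issues, slowest rate
print(round(best, 1), worst)
```

Under those assumptions the list clears in roughly two to four months, which is consistent with "robotaxi is a six-month problem" and with a slip from January to an April-ish scale-up.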
And my guess is that they're holding back, fixing these issues within FSD, and really targeting a Cybercab-style deployment for the initial robotaxi rollout. In other words, dial back a bit on the Model Ys and go in with Cybercab, because Cybercab is happening, as we know. >> And I would say my biggest criticism of Tesla at large is that it's probably understaffed. If I were to leave one thing on the floor as my most negative opinion, I'd say that. What I mean is, I think Elon tends to think of most of the problems he's approaching as engineering problems, and therefore they should have engineers on the problem, and when you have a very high-end
