Written by: Long Yue, Wall Street News
Recently, the American startup accelerator Y Combinator (YC) held its first AI Startup School in San Francisco, inviting heavyweights from across the AI industry, including Elon Musk and OpenAI CEO Sam Altman.
Musk, who recently completed his 130-day term as a special government employee leading the U.S. Department of Government Efficiency (DOGE), bluntly described the experience as an "interesting side quest" whose importance pales in comparison to the coming AI revolution. He compared the department's work to "cleaning the beach," while the coming AI is a "thousand-foot-high tsunami."
Fixing government is like… the beach is dirty, with needles, feces, and trash. But then there’s a thousand-foot wall of water, which is the AI tsunami. If there’s a thousand-foot tsunami coming, what’s the point of cleaning up the beach? Not much.
Musk predicts that digital superintelligence may arrive this year or next year and will be smarter than humans. At the same time, the number of humanoid robots will far exceed that of humans in the future, possibly reaching 5-10 times the human population. He boldly predicts that the scale of the AI-driven economy will be thousands or even millions of times the current scale, and the proportion of human intelligence may drop below 1%. The following are some key points of his speech:
• Musk announced that he had left DOGE on May 28, ending his 130-day term as a special government employee, saying he was "returning to the main mission";
• Musk likened the work of government efficiency departments to "beach cleaning" and the coming of AI to a "thousand-foot-high tsunami," and when the latter is about to arrive, the former will be meaningless in comparison;
• Predicted that digital superintelligence may arrive this year or next year and will be smarter than humans, and he stressed that "if it doesn't happen this year, it will definitely happen next year";
• In the future, the number of humanoid robots will far exceed that of humans, perhaps 5 or even 10 times that of humans;
• Predicts that the scale of the AI-driven economy will be thousands or even millions of times the current size, pushing civilization toward Kardashev II (stellar energy level), and the proportion of human intelligence may drop below 1%;
• Musk emphasized that "rigorous adherence to the truth" is the most important cornerstone of AI safety, and forcing AI to believe in untrue things is extremely dangerous;
• Looking back at SpaceX's early days, the fourth launch succeeded after three consecutive failures in what he called a "life-or-death situation"; in 2008, Tesla closed its financing at the last possible moment before bankruptcy.
DOGE mission accomplished: too much political noise, return to the "main mission"
Musk admitted in the interview that his experience in Washington, D.C. made him realize that "the signal-to-noise ratio in politics is terrible." He described his work in D.C. as an "interesting side quest," but ultimately decided to "return to the main quest: building technology, which is what I like to do."
The billionaire explained the fundamental reason for his departure from government: “Fixing government is like cleaning a beach — the beach is dirty, there are needles, feces, and trash. But at the same time there’s this thousand-foot wall of water, which is the AI tsunami. If you have a thousand-foot tsunami, how important is it to clean the beach? It’s not that important.”
AI superintelligence is imminent: coming this year or next
Musk gave a very clear prediction about the arrival of digital superintelligence. He said: "I think we are very close to digital superintelligence. If it doesn't happen this year, it will definitely happen next year."
He defines “digital superintelligence” as “intelligence that is smarter than any human being at anything.” Musk predicts that AI will drive the economy to exponential growth in size — “not 10 times bigger than the current economy, but thousands or even millions of times bigger.”
AI is going to change the future so profoundly that it’s hard to fathom… Assuming we don’t go astray and AI doesn’t wipe us out and itself, what you’re going to end up with is not an economy that’s ten times bigger than the current economy. Ultimately, if our… descendants (mostly machine descendants) become a Kardashev II civilization or higher, we’re talking about an economy that’s thousands, if not millions, of times bigger than today’s economy.
He further elaborated on the place of human intelligence in the future: “At some point, the percentage of human intelligence is going to be quite small. At some point, the aggregate sum of human intelligence is going to be less than 1% of all intelligence.”
xAI is currently training Grok 3.5
Musk revealed in the interview that xAI is currently training Grok 3.5, "focusing on reasoning capabilities."
According to ZeroHedge, xAI is seeking $4.3 billion in equity funding, which will be combined with $5 billion in debt financing to cover xAI and social media platform X.
The hardware race: from zero to 100,000 GPUs, an engineering miracle
Musk used first principles thinking to solve the hardware challenges of AI training. When suppliers told them that it would take 18 to 24 months to complete a training super cluster of 100,000 H100 GPUs, Musk's team compressed it to 6 months.
They rented an abandoned Electrolux factory in Memphis, solved the 150-megawatt power demand by renting generators, rented a quarter of the mobile cooling equipment in the United States, and used Tesla Mega Packs to smooth power changes during training. Musk even personally participated in the wiring work and "slept in the data center."
Currently, the training center has 150,000 H100s, 50,000 H200s, and 30,000 GB200s, with a second data center with 110,000 GB200s coming online soon.
Multiple visions of the future: robot armies and interstellar civilizations
Musk predicts that in the future there will be at least five times as many humanoid robots as humans, "maybe 10 times." He admitted that he had procrastinated in the field of AI and robotics because he was worried about "making the Terminator a reality," but eventually realized that "it was going to happen whether I did it or not. You're either a spectator or a participant. I'd rather be a participant."
In his grander vision, Musk places human civilization on the Kardashev scale. He believes that humans currently only use 1-2% of the Earth's energy and are still far from being a Level 1 civilization. Becoming a multi-planet species is a key step in expanding consciousness to the stars, "greatly increasing the possible lifespan of civilization or consciousness."
Musk said SpaceX plans to transfer enough material to Mars in about 30 years to make Mars self-sufficient, "even if the supply ships from Earth stop operating, Mars can continue to develop and prosper."
The full interview is as follows (translated by AI)
Elon Musk
We are in the very, very early stages of an intelligence explosion. Becoming a multi-planetary species greatly extends the possible survival of civilization, consciousness, or intelligence (whether biological or digital). I think we are very close to digital superintelligence. If we don't get there this year, we will definitely get there next year.
YC CEO and President Garry Tan
[MUSIC] Let's give a big round of applause to Elon Musk. [APPLAUSE] Elon, welcome to AI Startup School. We're really, really honored to have you here today: SpaceX, Tesla, Neuralink, xAI, and so on. Before you did all of this, was there a moment in your life when you thought, "I have to do something big"? What made you make that decision?
Elon Musk
I didn't think I could make anything great. I just wanted to try to make something useful, but I didn't think I could make anything particularly great. If you think in terms of probability, it seemed unlikely, but I at least wanted to try.
Garry Tan
You are now facing a room full of people, all of whom are technical engineers, including some rising top AI researchers.
Elon Musk
Okay. I think we should... I prefer the word "engineer" rather than "researcher." I mean, if there's some basic algorithmic breakthrough, that's research, but anything else is engineering.
Garry Tan
Maybe we can go way back. I mean, you're in a room full of 18 to 25 year olds. It's a little bit younger here because the founders are getting younger. Can you put yourself in their shoes? When you were 18, 19 years old, you know, learning to program and even coming up with the first idea for Zip2. What was that like for you?
Elon Musk
Yes, back in '95 I was faced with a choice. I could go to Stanford for graduate school, a doctorate in materials science studying supercapacitors, which I wanted to use in electric vehicles, essentially to solve the range problem; or I could devote myself to this thing called the Internet, which most people had never heard of at the time. I talked to my professor in the materials science department, Bill Nix, and asked if I could take a semester off, because this Internet thing would probably fail and then I'd have to come back to school.
And then he said, this might be the last time we talk. And he was right. So, but I was thinking things were more likely to fail than to succeed. And then in '95, I wrote... basically, I think it was the first or close to the first Internet map, directions, white pages, and yellow pages.
I wrote all that code myself, and I didn't even use a web server; I read directly from the port, because I couldn't afford one, and I couldn't afford a T1 line either. The original office was on Sherman Avenue in Palo Alto, and there was an ISP downstairs. So I drilled a hole in the floor and ran a cable directly to the ISP.
And then, you know, my brother joined me, along with another co-founder, Greg Kouri, who has since passed away. We couldn't even afford a place to live, so we rented an office for $500 a month, slept in the office, and showered at the YMCA on Page Mill Road. We ended up with a somewhat useful company, Zip2. In those early days we built a lot of really, really cool software technology, but we were kind of "captured" by the traditional media companies, because companies like Knight Ridder and the New York Times were investors, customers, and on the board.
So they were always trying to use our software for things that didn't make sense. So I wanted to go direct to consumers. Anyway, I won't go into the details of Zip2, but the bottom line is that I really just wanted to do something useful on the Internet. Because I had two choices: either get a Ph.D. and watch other people build the Internet; or participate in building the Internet in some small way. I was like, I guess I can always try it first, fail, and then go back to grad school. Anyway, it turned out to be quite successful. It sold for about $300 million,
That was a lot of money back then. Now, I think the minimum bid for an AI startup is $1 billion. It's like... there are so many fucking unicorns now, it's like a swarm of unicorns, you know, a unicorn is a billion dollar valuation.
Garry Tan
There has been inflation since then, so the money has actually lost a lot of value.
Musk
Yeah. I mean, in 1995 you could buy a hamburger for, like, five cents? Well, not quite that cheap, but yes, there's been a lot of inflation. But I mean, there's a lot of hype around AI right now, as you can see. You see companies that are less than a year old sometimes valued at billions, even tens of billions, of dollars. I guess some of them can succeed, and they probably will. But it's really eye-opening to see some of these valuations. Yeah, what do you think? I mean,
Garry Tan
I'm personally very bullish. I'm actually very optimistic. So I think there's going to be a ton of value created by all of you in this room, you know, there's probably a billion people around the world using these things. We haven't even scratched the surface yet. I love that story of the Internet, and even back then, you were very much like all of you in this room, because you know, all of the CEOs of the traditional media companies looked to you as the guy who understood the Internet. And now, for the vast world that doesn't understand what's happening with AI - the corporate world, or the world at large - they're going to look to you in this room for exactly the same reasons. It sounds like you know... What are some of the practical lessons? It sounds like one of them is don't give up board control, or be very careful, have a really good lawyer.
Musk
I think the biggest mistake I made with my first startup was letting the traditional media companies have too much shareholder and board control, which inevitably led them to see things from the perspective of traditional media; they would push you to do things that seemed reasonable to them but made no sense at all in the context of new technology. I should point out that I didn't actually plan to start a company initially. I tried to get a job at Netscape. I sent my resume to Netscape; Marc Andreessen knows about this.
But I don't think he ever saw my resume, and then no one responded. So then I tried to hang out in the halls of Netscape to see if I could "run into" anyone, but I was too shy to talk to anyone. So I was like, oh my god, this is ridiculous. I'll just write my own software and see how that goes. So it wasn't really from a "I want to start a company" standpoint. I just wanted to be part of building, you know, some part of the internet. Since I couldn't get a job at an internet company, I had to start an internet company. Anyway, yeah. Yeah. I mean, AI is going to change the future in a profound way. It's hard to estimate how much, but you know, the economy, assuming we don't take detours,
and AI doesn't kill us and itself, then you're going to end up with an economy that's not 10 times bigger than the current economy; ultimately, if we, or whatever our future descendants are (mostly machine descendants), become a Kardashev Type II civilization or higher, then we're talking about an economy that's thousands, maybe millions, of times bigger than it is today.

So, yeah, I did kind of feel like, you know, when I was in Washington, D.C., I got blasted for cleaning up waste and fraud, and that was kind of an interesting side quest, as far as side quests go. But getting back to the main quest. Yeah, I've got to get back to the main quest here. It's like government reform is... the beach is dirty, and there are needles and feces and trash, and you want to clean up the beach, but at the same time there's a thousand-foot wall of water coming, and that's the AI tsunami. If there's a thousand-foot tsunami coming, is there really much point in cleaning up the beach? Not much.

Garry Tan

I'm glad you're getting back to the main quest. This is very important.

Musk

Yeah, back to the main quest. Building technology, that's what I like to do. There are too many distractions. The signal-to-noise ratio of politics is terrible.
Garry Tan
So, I mean, I live in San Francisco, so you don't have to tell me twice (I get it).
Musk
Yeah, Washington, D.C. is like... I guess all of Washington is politics. But if you're trying to build rockets or cars, or trying to get software to compile and run reliably, then you have to pursue the truth to the utmost, or your software or hardware won't work. You can't cheat math; math and physics are harsh judges. So I'm used to being in an environment where the truth is pursued to the utmost, and that's certainly not politics. So anyway, I'm glad to be back in, you know, the tech world.
Garry Tan
Just curious, going back to the Zip2 moment: the sale was hundreds of millions of dollars, but how much did you personally cash out?
Musk
I mean, I got 20 million, right?
Garry Tan
OK. So you at least solved the money problem. And then you basically took it and kept gambling: you went on to X.com, which merged with Confinity and became PayPal.
Musk
Yes. I left my chips on the table.
Garry Tan
Not everyone does this. Many of you will have to make this decision in the future. What motivated you to fight again?
Musk
I felt like with Zip2, we had built really cool technology that was never really fully utilized. At least in my opinion, our technology was better than Yahoo or anybody else's, but we were constrained by our customers (media companies). So I wanted to do something that was not constrained by our customers, that was direct to the consumer. That's what became X.com/Paypal. Essentially, X.com merged with Confinity, and we built Paypal together.
And then the PayPal Mafia has probably spawned more companies than any other company in the 21st century. When Confinity and X.com merged, there were so many talented people. I just felt like we had been tied down at Zip2, and I thought, well, what if we weren't tied down and went direct to consumer? And that's what happened.
But yeah, when I got the $20 million check from Zip2, I was living with four roommates and I had like, $10,000 in the bank. And then this check came in the mail. In the mail! And I went from $10,000 to $20,010,000. I was like, OK. But I ended up putting almost all of it into X.com. Like you said, I left almost all of it on the table.
Yeah, after PayPal, I was like, I'm kind of curious why we haven't sent people to Mars yet. I went to the NASA website to find out when we were going to send people to Mars, and there was no date. I thought maybe the website was hard to find. But the fact is, there were no real plans to send people to Mars. So, you know, this is a long story, and I don't want to take up too much time here, but
Garry Tan
I think we all listened with rapt attention.
Musk
So, I was actually on the Long Island Expressway with my friend Adeo Ressi; we went to college together at the University of Pennsylvania. Adeo asked me what I was going to do after PayPal, and I said, I don't know, maybe some philanthropic project in space, because I didn't think I could do anything commercial in space; it seemed to be the exclusive domain of governments. But you know, I was curious about when we would send people to Mars, and that's when I realized, oh, it's not on the website, and I started digging. I'm sure a lot is omitted here, but my original idea was a philanthropic Mars mission called "Life to Mars": send a small greenhouse with seeds and dehydrated nutrient gel to Mars, land it, add water to the gel, and then you'd have this wonderful shot of green plants against a red background.
By the way, for a long time I didn't realize that "money shot" was a porn reference, I guess. But anyway, the point was that it would be this great shot of green plants on a red background, to try to motivate, you know, NASA and the public to send astronauts to Mars. As I learned more about it... oh, by the way, in the process, I went to Russia around 2001 and 2002 to buy intercontinental ballistic missiles (ICBMs), and it was quite an adventure. You know, you go to the top Russian commanders and say, "I want to buy some ICBMs." For getting into space. Yeah. Not for blowing anybody up. As a result of the disarmament talks, they had to destroy a lot of their big nuclear missiles. So I thought, well, let's take two, take off the nuclear warheads, and add an extra upper stage for Mars.
But it was kind of a psychedelic thing, you know, being in Moscow around 2001, negotiating with the Russian military to buy ICBMs. It was crazy. But they also kept raising the price for me, so it was the opposite of a normal negotiation. So I was like, my God, these things are getting really expensive.
And then I realized that the real problem wasn't the lack of willingness to go to Mars, but that there was no way to do it without going over budget, you know, even NASA's budget couldn't afford it. So that's why I decided to start SpaceX - SpaceX was to advance rocket technology to the point where we could send people to Mars. That was in 2002.
Garry Tan
So it's not like you started out with the idea of building a business. You just started doing something you thought was interesting and that humanity needed, and then, like pulling on a thread, the whole ball of yarn slowly unraveled into a potentially very profitable business.
Musk
Now it does make money, but at the time there was no precedent for a rocket startup succeeding; there had been some attempts at commercial rocket companies, but they all failed. So when SpaceX was founded, I thought the chance of success was less than 10%, maybe 1%, I don't know. But if a startup didn't do something to advance rocket technology, it certainly wasn't going to come from one of the big defense contractors, because they're just vassals of the government, and the government only wants to do very routine things. So it either comes from a startup or it doesn't happen at all. Even a small chance of success is better than no chance. So yes, when I started SpaceX in mid-2002, I expected it to fail; like I said, I put the odds of failure at 90%, and even when I was recruiting people I didn't try to sugarcoat it and claim it would succeed.
I said we're probably going to fail. But there's a 1 in 10 chance that we might not fail, if this is the only way to get people to Mars and advance the technology. And then I ended up being the chief engineer of the rocket, not because I wanted to, but because I couldn't hire any good people. So, no good senior engineers wanted to come in, because they thought it was too risky, you're going to fail. So I became the chief engineer of the rocket. You know, the first three launches did fail. So that was a learning process. The fourth one was lucky enough to work. But if the fourth one didn't work, I would have no money, and it would have been a complete failure. So that was a very close call.
If the fourth Falcon launch had failed, it would have been the end of us, and we would have joined the graveyard of the rocket startups that preceded us. So my estimate of the odds of success was not far off; we just barely succeeded. Tesla was about the same time. 2008 was a tough year: SpaceX's third launch failed in the summer of 2008, our third failure in a row, and Tesla's funding round was also falling apart, so Tesla was heading toward bankruptcy very quickly. It was like, oh my god, this is terrible. This is going to be a cautionary tale of hubris.
Garry Tan
Maybe during that time, a lot of people were saying, Elon is a software guy, why do you want to do hardware? Why... Yes, why did he choose to do this, right?
Musk
Yes. 100%. So you can look at the media at the time, because it's still online. They kept calling me the "internet guy." So the "internet guy" aka the "idiot" tried to start a rocket company. So you know we got laughed at a lot. It really sounded ridiculous, and the internet guy starting a rocket company didn't sound like a recipe for success.
Honestly. So I don't blame them. I was like, yeah, you know, it does sound unlikely, and I agree it's unlikely. But luckily the fourth launch was successful, and then NASA awarded us a contract to resupply the station. I think it was around December 22nd, or before Christmas. Because even if the fourth launch was successful, it wouldn't be enough to guarantee success. We still needed a big contract to survive. So, so I got a call from the NASA team, and they actually said, we're awarding you a contract to resupply the station. I just... I blurted out, "I love you guys." Which is not usually, you know, something they hear.
Because usually it's very, you know, very calm, but I was like, "Oh my god, this saved the company." And then we closed the Tesla round on the last day, at the last hour it could have been done, which was 6 p.m. on December 24, 2008. If that round hadn't closed, we would have missed payroll two days after Christmas. So the end of 2008 was really nerve-wracking.
Garry Tan
I guess the one thing that runs through your experience with Paypal and Zip2, and jumping into these hardcore hardware startups, is the ability to find and ultimately attract the smartest people in these specific areas... You know, I mean, some of you in this room, some of you haven't even managed a single person yet. You're just starting your career. What would you tell Elon, who has never done any of this, you know?
Musk
I generally believe in trying to do as much useful work as possible. This may sound a little cliché, but it's really hard to do useful work, especially useful work for a lot of people. Like, the area under the curve of total utility, which is how useful are you to your fellow man times how many people? It's like the definition of "true work" in physics. It's extremely difficult to do that. And I think if you set your mind on doing "true work," you're much more likely to succeed. Like, don't go for the glory, go for the work.
Garry Tan
How do you judge whether something is "true work"? Is it based on external feedback, like what others think, or on what you know about how useful the product is to people?
Musk
Whether you're evaluating what to build or who to hire, those are different questions. I mean, in terms of your end product, you just ask: if this thing succeeds, how useful will it be, and to how many people? That's what I mean. And then, whether you're the CEO or in any other role at a startup, you do whatever you need to do to succeed, and you keep crushing your ego and internalizing responsibility. One of the major failure modes is when the ego-to-ability ratio is greater than 1. You know, if your ego-to-ability ratio is too high,
Then you basically cut off the feedback loop to reality. In AI terms, you're breaking your reinforcement learning (RL) loop. So, you don't want to break your loop, you want a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever task is noble or humble. So, I mean, this is why I actually prefer the word "engineering" to "research." I like that word better, and I don't want to call xAI a lab.
I just want it to be a company. Like, in whatever the simplest, most direct, ideally lowest ego terms are, those are usually good directions. You just want to close the loop on reality hard. That's a super big thing.
Garry Tan
I think everyone here really admires your exemplary role in using first principles. How do you determine your reality? That seems to be a big part of it. People who have never built anything, who are not engineers, like some journalists, sometimes criticize you. But you obviously have another group of people around you who are builders and have a very high...area under the achievement curve. How should people think about this? What methods have worked for you? How would you pass it on to...like your children? How would you tell them to make their way in the world? Like, how to construct a predictable view of reality based on first principles.
Musk
The tools of physics are extremely useful for understanding, and for making progress in, any field. First principles obviously means, you know, breaking things down to the fundamental, axiomatic elements that are most likely to be true, and then reasoning up from there as logically and clearly as possible, rather than reasoning by analogy. And then there are simple things like thinking in the limit: if you extrapolate to minimize this thing or maximize that thing, thinking in the limit is very helpful. I use all the tools of physics.
They apply to any field; it's like a superpower. Take rockets, for example. You can ask: how much should a rocket cost? The approach people usually take is to look at what rockets have cost historically and assume any new rocket must cost about the same as previous rockets. The first-principles approach instead is to look at what materials a rocket is made of: aluminum, copper, carbon fiber, steel, whatever it is. Then you ask: how much does the rocket weigh? What elements is it made up of? How much does each weigh? What is the price per kilogram of those materials? That sets a real floor on the cost of a rocket; the cost can asymptotically approach the cost of the raw materials.
And then you realize, oh, actually the raw materials of a rocket are only 1% or 2% of the cost of historical rockets. So the manufacturing process must be very inefficient if the raw materials cost only 1% or 2%. That's a first principles analysis of the potential for cost optimization of rockets. And this is before considering reusability. To give an example in AI, I guess last year, when xAI was trying to build a training supercluster, we went to various vendors and said (this was early last year) we need 100,000 H100s (GPUs) to train coherently.
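The rocket costing Musk just walked through is simple enough to sketch numerically. All figures below (masses, prices per kilogram, the historical rocket price) are made-up illustrations, not real SpaceX or industry data; the point is only the structure of the estimate: sum the raw-material costs and compare that floor to the historical price.

```python
# A toy first-principles cost floor for a rocket, in the spirit of Musk's
# example. Every number here is an illustrative assumption, not real data.

def material_cost_floor(materials):
    """Sum mass (kg) * price ($/kg) over the rocket's constituent materials."""
    return sum(mass_kg * price_per_kg for mass_kg, price_per_kg in materials.values())

# Hypothetical bill of materials: {name: (mass in kg, price in $/kg)}
bill_of_materials = {
    "aluminum":     (20_000, 3.0),
    "steel":        (5_000, 1.0),
    "copper":       (2_000, 9.0),
    "carbon_fiber": (1_000, 30.0),
}

floor = material_cost_floor(bill_of_materials)
historical_price = 60_000_000  # assumed historical rocket price, $

print(f"raw-material floor: ${floor:,.0f}")                       # → $113,000
print(f"share of historical price: {floor / historical_price:.1%}")  # → 0.2%
```

Even with these toy numbers, the raw materials come out to a fraction of a percent of the assumed historical price, which is the shape of the conclusion Musk draws: the gap is manufacturing inefficiency, not material cost.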
They estimated it would take 18 to 24 months to do this. I said, we need to do it in six months. Otherwise we're not going to be competitive. So then if you break it down, what does it take? You need a building, you need power, you need cooling. We didn't have time to build a building from scratch. So we had to find an existing building. So we found an abandoned plant in Memphis that used to make Electrolux products. But it had an input power of 15 megawatts, and we needed 150 megawatts.
So, we rented generators and put them on one side of the building, and then we needed cooling. So, we rented about a quarter of the mobile cooling capacity in the United States and put the chillers on the other side of the building. This didn't completely solve the problem because the power fluctuations during training were very large. So the power could drop 50% in 100 milliseconds, and the generators couldn't keep up. So we combined it with adding Tesla Megapacks (large battery packs) and modified the software of the Megapacks to be able to smooth out the power fluctuations during training. And then there were a whole bunch of networking challenges. Because if you're trying to get 100,000 GPUs to train coherently, the network cables are very, very challenging.
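The Megapack role Musk describes (generators that cannot follow a 50% load drop in 100 milliseconds, with batteries absorbing the residual) can be sketched with a toy simulation. The ramp limit, the load trace, and the control rule below are all invented for illustration; this is not how xAI's or Tesla's actual control software works.

```python
# Toy simulation of smoothing GPU-training load swings with a battery.
# All numbers are made up; the idea is that slow generators track the
# average load while the battery covers fast fluctuations.

def smooth_load(load_mw, gen_limit_mw, gen_ramp_mw):
    """Generators follow the load but can only change output by
    gen_ramp_mw per step; the battery covers the residual gap
    (positive = discharging, negative = charging)."""
    gen = load_mw[0]
    gen_out, battery = [], []
    for demand in load_mw:
        # Move generator output toward demand, limited by ramp rate and capacity.
        step = max(-gen_ramp_mw, min(gen_ramp_mw, demand - gen))
        gen = min(gen_limit_mw, gen + step)
        gen_out.append(gen)
        battery.append(demand - gen)  # battery absorbs the fast residual
    return gen_out, battery

# Training load: drops 50% for one step (like the dips Musk describes), then recovers.
load = [150, 150, 75, 150, 150]
gen, batt = smooth_load(load, gen_limit_mw=150, gen_ramp_mw=10)
print(gen)   # → [150, 150, 140, 150, 150]  generators lag the swing
print(batt)  # → [0, 0, -65, 0, 0]          battery soaks up the excess
```

The design point is that the battery only needs to handle the transient difference between demand and what the generators can ramp to, which is why a comparatively small battery can stabilize a much larger generator fleet.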
Garry Tan
...it sounds like for almost anything you mention, I can imagine someone telling you straight up, "No, you can't get that power," "You can't do this." A key point of first principles thinking seems to be that we ask "why," we figure out why, and we challenge the person on the other side. If they give me an answer that I don't like, I won't accept it. Is that right? I think this is especially needed if someone wants to do hardware like you do. In software, we have a lot of redundancy, like "We can just add more CPUs, no problem." But in hardware, it doesn't work.
Musk
I think these universal principles of first-principles thinking apply to software, to hardware, to anything. I just used a hardware example of how we were told something was impossible, but once we broke it down into its components (a building, power, cooling, power smoothing), we could solve each of those components. Then we had the network operations team do all the cabling, working 24/7 in four shifts, and I slept in the data center and did cabling myself.
There were a lot of other problems to solve. You know, last year nobody had done coherent training with 100,000 H100s. Maybe this year somebody has; I don't know. And then we doubled it to 200,000. So now we have 150,000 H100s, 50,000 H200s, and 30,000 GB200s in our training center in Memphis, and we're about to bring online 110,000 GB200s at a second data center in the Memphis area.
Garry Tan
Do you think pre-training still works? Scaling laws still hold? Whoever wins this race will have the biggest, smartest model and can then distill it?
Musk
There are all sorts of factors in the competitiveness of large-scale AI. The talent of the people is important. The scale of the hardware, and how effectively you use it, is also important. You can't just order a bunch of GPUs and plug them in; you have to get a lot of GPUs running stably for coherent training.
And then, what are your unique sources of data? And I guess distribution is also important to some extent: how do people get access to your AI? Those are the key factors that make large foundation models competitive. As my friend Ilya Sutskever has said, we're almost running out of human-generated data for pre-training; the supply of high-quality tokens is drying up pretty quickly. Then you essentially need to create synthetic data, and be able to accurately judge the synthetic data you create, to verify whether it is grounded or a hallucination inconsistent with the facts. So grounding in reality is tricky, but we are at the stage where we need to invest more energy in synthetic data. Right now we're training Grok 3.5, which is focused on reasoning.
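The generate-then-verify loop he describes can be sketched in miniature. This is my own toy illustration, not xAI's actual pipeline: candidates are generated (some deliberately wrong, mimicking hallucinations), and only those that an independent grounding check confirms are kept.

```python
# Minimal sketch of a synthetic-data pipeline with a grounding check.
# Hypothetical example: arithmetic problems, where "grounding" means
# independently recomputing the answer. Real pipelines verify against
# much harder ground truth, but the filter structure is the same.
import random

def generate_candidates(n):
    """Stand-in generator: problems with proposed answers, ~30% of which
    are deliberately wrong to mimic model hallucinations."""
    random.seed(0)  # deterministic for the example
    out = []
    for _ in range(n):
        a, b = random.randint(1, 99), random.randint(1, 99)
        answer = a + b
        if random.random() < 0.3:           # hallucinated answer
            answer += random.randint(1, 9)
        out.append({"question": f"{a}+{b}", "proposed": answer})
    return out

def verify(example):
    """Grounding check: recompute the answer independently of the generator."""
    a, b = map(int, example["question"].split("+"))
    return example["proposed"] == a + b

def build_synthetic_set(n):
    """Keep only candidates that pass verification."""
    return [ex for ex in generate_candidates(n) if verify(ex)]

data = build_synthetic_set(1000)
print(len(data))  # roughly 700 of 1000 candidates survive the filter
```

The design point is that the verifier must be independent of the generator; a generator grading its own output would wave its hallucinations through.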
Garry Tan
To get back to your point about physics, I've heard that the hard sciences, especially physics textbooks, are very useful for reasoning. Researchers tell me that the social sciences are not useful for reasoning at all.
Musk
Yes, that's probably true. So yes, you know, one of the things that's going to be very important in the future is combining the deep intelligence in data centers or super-clusters with robotics.
So, you know, something like the Optimus humanoid robot. Optimus is awesome. There will be a lot of humanoids and robots of all shapes and sizes, but my prediction is that humanoids will far outnumber all other robots combined, probably by an order of magnitude, which is a huge difference.
Garry Tan
Rumor has it that you're planning on building a robot army?
Musk
Whether it's us doing it or Tesla doing it, you know, Tesla and xAI work closely together.
Like, how many humanoid robotics startups have you seen? I think Jensen Huang brought a bunch of robots on stage, from different companies, maybe a dozen different humanoid robots. So I mean, I guess part of what I've been fighting, and maybe what's been slowing me down, is that I don't want to make the Terminator happen, you know. So, at least until the last few years, I procrastinated on AI and humanoid robotics. And then I realized it's happening whether I do it or not. You have two choices: you can either be a spectator or a participant. And I'd rather be a participant than a spectator. So now it's pedal to the metal on humanoid robotics and digital superintelligence.
Garry Tan
I guess there's a third thing that people have heard you talk about a lot, and I personally agree with it very much, which is becoming a multiplanetary species. How does that fit into the whole? This is not just a 10 or 20-year thing, maybe a 100-year thing, this is about multiple generations of humanity. How do you think about it? There's AI, there's obviously embodied robotics, and then there's becoming a multiplanetary species. Does that all ultimately serve that last point? Or what are you driving right now for the next 10, 20, 100 years?
Musk
Oh my goodness, 100 years, man. I hope civilization is still here in 100 years. If it is, it will be very different from civilization today. I mean, I predict that there will be at least five times as many humanoids as there are humans, maybe 10 times as many. And one way to look at civilizational progress is the percentage of completion on the Kardashev Scale. So if you're, you know, Scale one, you've harnessed all the energy of a planet. Now it seems to me that we're only harnessing 1 or 2 percent of the energy of the Earth. So we're a long way from Kardashev Scale one. And then Scale two is harnessing all the energy of a star. That would be about a billion times, maybe close to a trillion times, the energy of the Earth.
And then Scale three is harnessing the energy of the entire galaxy, and we're still a long way from that. So we're in the very, very early stages of the intelligence big bang. In terms of being multi-planetary, I think within about 30 years we'll have transferred enough material to Mars to make Mars self-sustaining, able to continue to grow and thrive even if the supply ships from Earth stop coming. That greatly extends the life expectancy of civilization, or consciousness, or intelligence, both biological and digital. So this is why I think it's important to be a multi-planet species.
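His "about a billion times" figure for the jump from Kardashev Scale one to Scale two checks out on the back of an envelope, using standard values for the Sun's luminosity and the solar constant (the planet-scale baseline here is sunlight intercepted by Earth's disc, one common way to frame Scale one):

```python
# Rough Kardashev-scale arithmetic with standard astrophysical values.
import math

SOLAR_LUMINOSITY_W = 3.828e26    # total power output of the Sun (Scale two)
SOLAR_CONSTANT_W_M2 = 1361.0     # solar irradiance at Earth's distance
EARTH_RADIUS_M = 6.371e6

# Power the Earth intercepts from the Sun: irradiance times the
# cross-sectional disc area of the planet (a Scale-one baseline).
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2
ratio = SOLAR_LUMINOSITY_W / earth_intercepted_w

print(f"Earth intercepts ~{earth_intercepted_w:.2e} W")   # ~1.7e17 W
print(f"Scale two / Scale one ratio ~ {ratio:.1e}")       # ~2.2e9
```

So a star outputs roughly two billion times the power a planet like Earth receives, squarely in the "billion, maybe up to a trillion" range he quotes.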
I'm a little bothered by the Fermi Paradox: why don't we see any aliens? It could be because intelligence is very rare. Maybe we're the only intelligent life in this galaxy. In that case, conscious intelligence is like a tiny candle in the darkness, and we should do everything we can to make sure that tiny candle doesn't go out. Becoming a multi-planet species, making consciousness multiplanetary, would greatly increase the life expectancy of civilization, and it's the next step before going to other star systems. Once you have at least two planets, you have a forcing function that drives the advancement of space travel, and that will eventually lead to expanding to the stars.
Garry Tan
The Fermi Paradox might suggest that once technology reaches a certain level, civilization will self-destruct. How do we avoid self-destruction? What advice would you give to a room full of engineers? What can we do to prevent this from happening?
Musk
Yes, how to avoid the Great Filters? An obvious Great Filter is global thermonuclear war. So we should try to avoid it.
I want to build benign AI robots, AI that loves people, you know, robots that are helpful. I think it's extremely important in building AI to have a very strict adherence to the truth, even if that truth is politically incorrect. My intuition about what would make AI very dangerous is if you force it to believe things that are not true.
Garry Tan
What do you think about the debate between safety and staying closed for competitive advantage? I think the great thing is that you have a competitive model and so do others. In that sense, we probably avoided the worst timeline, the one I was most worried about: a fast takeoff with everything in one person's hands. That could cause a lot of things to fall apart. Now we have options, which is good. What do you think?
Musk
Yes, I do think there will be several deep intelligences, maybe at least five. There could be as many as 10. I'm not sure if it's going to be hundreds, but probably closer to, say, 10 or so. Probably four of them are in the United States. So I don't think there will be any one AI that has runaway capability. But yes, there will be several deep intelligences.
Garry Tan
What will these deep agents do? Will they do scientific research or try to attack each other?
Musk
Probably both. I mean, hopefully they will discover new physics, and I think they will certainly invent new technologies. I think we are pretty close to digital superintelligence. It may happen this year; if not this year, definitely next year. And digital superintelligence is defined as being smarter than any human at anything.
Garry Tan
So how do we direct that toward super abundance? We can have a robotic workforce, cheap energy, intelligence on demand. Is that the so-called white pill? Where do you fall on that spectrum? What specific things would you encourage everyone in this room to do to make that white pill a reality?
Musk
I think the most likely outcome is a good one. I guess I agree with Geoffrey Hinton to some extent: maybe a 10% to 20% chance of annihilation. But look on the bright side, that's an 80% to 90% chance of a good outcome. So yes, I can't emphasize it enough: strict adherence to truth is the most important thing for AI safety. And obviously empathy for humans and for life as we know it.
Garry Tan
We haven't talked about Neuralink yet. I'm curious, you're working to close the input/output gap between humans and machines. How critical is this to AGI/ASI (artificial general intelligence/artificial super intelligence)? Once this link is established, will we be able to not only read, but also write?
Musk
Neuralink is not necessary for digital superintelligence; that will happen before neural links are in wide use. But neural links are very effective at addressing input/output bandwidth constraints. In particular, our output bandwidth is very low: the sustained output of a human over a day is less than one bit per second. There are 86,400 seconds in a day, and it is extremely rare for a person to output more than that number of symbols in a day, let alone for several days in a row. With a neural link interface you can greatly increase your output bandwidth and input bandwidth, where input means write operations to the brain.
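The "under one bit per second sustained" claim is easy to sanity-check. The word count and bits-per-word figures below are my own illustrative assumptions (roughly 10 bits of information per English word, and 5,000 words as a very prolific writing day):

```python
# Back-of-envelope check of sustained human output bandwidth.
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400
WORDS_PER_DAY = 5000             # assumed: a very productive writing day
BITS_PER_WORD = 10               # assumed: rough information content of a word

bits_per_second = WORDS_PER_DAY * BITS_PER_WORD / SECONDS_PER_DAY
print(SECONDS_PER_DAY)             # 86400
print(round(bits_per_second, 2))   # 0.58
```

Even a prolific day of writing averages out to well under one bit per second, which is the bottleneck a high-bandwidth brain interface would address.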
We now have five humans implanted with the read device, which can read signals. These are people with ALS who have essentially no motor function, they're tetraplegics, but they can now communicate at the same bandwidth as able-bodied people and control their computers and phones, which is pretty cool. And then I think in the next six to twelve months we'll have the first implants for vision: even if someone is completely blind, we'll be able to write directly to the visual cortex. We've already done it in monkeys.
I think we've had a monkey with a vision device for three years now. It will be relatively low resolution at first, but in the long term it will be very high resolution, and able to see multispectral wavelengths: infrared, ultraviolet, radar. It's like having superpowers. At some point, cybernetic implants will not just correct what's wrong, but greatly augment human capabilities, greatly increasing intelligence, senses, and bandwidth. That's going to happen at some point.
But digital superintelligence will happen long before that. At least if we had neural links, we might be able to appreciate AI better.

Garry Tan

I guess one of the constraints on all your efforts, across all these different fields, is access to the smartest people. But at the same time we have, you know, rocks that can talk and reason; they might have an IQ of 130 right now, and they might be superintelligent soon. How do you reconcile those two things? What's going to happen in five or ten years? What should people in this room do to make sure they're the ones creating, and not the ones left below the API line?

Musk
There's a reason people call it the singularity: we don't know what's going to happen in the near future. Human intelligence is going to be a very small percentage; at some point, human intelligence will be less than 1% of all intelligence. And if things get to Kardashev Scale two, then even assuming significant population growth and intelligence augmentation, say everyone has an IQ of a thousand, human intelligence is probably only a billionth of digital intelligence. Anyway, we are the biological bootloader for digital superintelligence. I guess I'll end there: am I a good bootloader?
Garry Tan
Where do we go from here? I mean, all of this is pretty wild science fiction, but it could also be built by the people in this room. What closing words do you have for this generation of the brightest minds in technology? What should they be working on? What should they be thinking about when they go to dinner tonight?
Musk
Like I said at the beginning, I think if you're doing something useful, that's great. If you're just trying to be as useful as possible to your fellow human beings, then you're doing good. And I keep saying this: focus on strictly truthful AI; that's the most important thing for AI safety. Obviously, if you know anyone who's interested in working at xAI, please let us know. Our goal is to make Grok a maximally truth-seeking AI. I think that's really important. Hopefully, we can understand the nature of the universe. That's probably what AI can tell us: maybe AI can tell us where the aliens are, how the universe really began, how it will end, and what questions we don't yet know we should ask. Are we in a simulation? Or at what level of simulation are we?
Garry Tan
I guess we'll find out, maybe as NPCs (non-player characters). Elon, thank you so much for joining us.