We have been hearing about Tesla’s Robotaxi concept for several years, but it seems like we may finally be getting close to this vehicle becoming a reality. Here is everything we know about the Robotaxi.
Official Reveal
Yesterday, Musk officially announced on X that Tesla would unveil the Tesla Robotaxi on August 8th, 2024. Tesla last unveiled a new vehicle back in November 2019 when they showed off the Cybertruck for the first time. Before that, they unveiled the Roadster 2.0 and the Tesla Semi at the same event in 2017, so these are certainly special occasions that only come around once every few years.
While it's always possible that Tesla may have to move the Robotaxi's unveil date, it's exciting to think that Tesla may be just four months from unveiling this next-gen vehicle.
Robotaxi and Next-gen Vehicle
Another piece of information came out about the Robotaxi yesterday when Musk replied to a post by Sawyer Merritt. Sawyer posted that Tesla's upcoming "$25k" vehicle and the Robotaxi would not only be based on the same platform, but that the Robotaxi would essentially be the same vehicle without a steering wheel. Musk replied to the post with a simple "looking" emoji.
While it's not surprising that two of Tesla's smaller upcoming vehicles are going to be built on the same platform, it's a little more interesting that Musk chose to reply with that emoji when the post talks about the Robotaxi being the "Model 2" without a steering wheel. This raises the possibility of Tesla not only showing off the Robotaxi at the August 8th event, but also its upcoming next-gen car.
Production Date
Back during Tesla's Q1 2022 earnings call, Musk talked a little about the timeline for Tesla's Robotaxi, stating that they plan to announce the vehicle in 2023 and begin mass production in 2024.
Given that Tesla was originally aiming for a 2023 unveil, the 2024 production target no longer appears realistic. Moreover, now that the Robotaxi and the next-gen vehicle appear to share so much in common, the Robotaxi's production date is likely to be similar to the next-gen vehicle's, which is currently slated to begin in "late 2025".
The difficulty in releasing an autonomous taxi, as the Robotaxi is meant to be, is the self-driving aspect. While Tesla has made great strides with FSD v12, the first version to come out of "beta," it's still a level-2 system that requires active driver supervision. A fully autonomous vehicle is still a big leap from where Tesla's FSD is right now, but as we saw with the jump from FSD v11 to v12, a lot can change in the next 18 to 24 months.
While we expect Tesla to remain focused on bringing its cheaper, next-gen vehicle to market ahead of potential competitors, the Robotaxi's production date will likely continue to shift in line with Tesla's progress on FSD.
The history of Tesla’s Robotaxi starts with CEO Elon Musk's Master Plan Part Deux, published in 2016.
At the time, the concept was touted as normal Teslas with full self-driving (FSD) capability.
Once Tesla achieved Full Self-Driving, they would create a “Tesla Network” taxi service that would make use of both Tesla-owned vehicles and customer cars that would be hired out when not in use.
Once we get to a world of "robotaxis," it makes sense to continue evolving the interior of the vehicle to suit customer needs such as adding face-to-face seating, big sliding doors providing easy access, 4-wheel steering, easier cleaning, etc.
Tesla could even create a variety of Robotaxis that help meet specific needs. For example, Tesla could offer a vehicle that is better suited for resting, which could let you sleep on the way to your destination.
Another vehicle could be similar to a home office, offering multiple monitors and accessories that let you begin working as soon as you step inside the vehicle. Features such as these could bring huge quality of life improvements for some; giving people an hour or more back in their day.
The variety of Robotaxis doesn't need to end there. There could be other vehicles that are made specifically for entertainment such as watching a movie, or others that allow you to relax and converse with friends, much like you'd expect in a limousine.
Lowest Cost Per Mile
During Tesla's Q1 2022 financial results call, Musk stated that Tesla's robotaxi would be focused on cost per mile and would be highly optimized for autonomy - essentially confirming that it will not include a steering wheel.
“There are a number of other innovations around it that I think are quite exciting, but it is fundamentally optimized to achieve the lowest fully considered cost per mile or km when counting everything,” he said.
During the call, Musk acknowledged that Tesla's vehicles are largely inaccessible to many people given their high cost, and he sees the introduction of Robotaxis as a way of providing customers with “by far the lowest cost-per-mile of transport that they’ve ever experienced.” The CEO believes the vehicle will result in a cost per mile cheaper than a subsidized bus ticket. If Tesla can achieve this, it could drastically change the entire automotive industry and redefine car ownership. Is Tesla's future still in selling vehicles, or in providing a robotaxi service?
FSD Sensor Suite
Tesla hasn't revealed anything about the sensor suite that they're considering for the robotaxi, but given all of their work in vision and progress in FSD, it's expected to be the same or similar to what is available today, potentially with additional cameras or faster processing.
However, back in 2022, Musk gave this warning: “With respect to full self-driving, of any technology development I’ve been involved in, I’ve never really seen more false dawns or where it seems like we’re going to break through, but we don’t, as I’ve seen in full self-driving,” said Musk. “And ultimately what it comes down to is that to sell full self-driving, you actually have to solve real-world artificial intelligence, which nobody has solved. The whole road system is made for biological neural nets and eyes. And so actually, when you think about it, in order to solve driving, we have to solve neural nets and cameras to a degree of capability that is on par with, or really exceeds humans. And I think we will achieve that this year.”
With the Robotaxi unveil now approaching, it may not be long before we find out more details about Tesla's plan for the future and its truly autonomous vehicles.
Tesla launched two FSD updates simultaneously on Saturday night, and what’s most interesting is that they arrived on the same software version. We’ll dig into that a little later, but for now, there’s good news for everyone. For Hardware 3 owners, FSD V12.6.1 is launching to all vehicles, including the Model 3 and Model Y. For AI4 owners, FSD V13.2.4 is launching, starting with the Cybertruck.
FSD V13.2.4
A new V13 build is now rolling out to the Cybertruck and is expected to arrive for the rest of the AI4 fleet soon. However, this build seems to be focused on bug fixes. There are no changes to the release notes for the Cybertruck with this release, and it’s unlikely to feature any changes when it arrives on other vehicles.
FSD V12.6.1 builds upon V12.6, which is the latest FSD version for HW3 vehicles. While FSD V12.6 was only released for the redesigned Model S and Model X with HW3, FSD V12.6.1 is adding support for the Model 3 and Model Y.
While this is only a bug-fix release for users coming from FSD V12.6, it includes massive improvements for anyone coming from an older FSD version. Two of the biggest changes are the new end-to-end highway stack that now utilizes FSD V12 for highway driving and a redesigned controller that allows FSD to drive with “V13”-level smoothness.
It also adds speed profiles, earlier lane changes, and more. You can read our in-depth look at all the changes in FSD V12.6.
Same Update, Multiple FSD Builds
What’s interesting about this software version is that it “includes” two FSD updates, V12.6.1 for HW3 and V13.2.4 for HW4 vehicles. While this is interesting, it’s less special when you understand what’s happening under the hood.
The vehicle’s firmware and Autopilot firmware are actually completely separate. While a vehicle downloading a firmware update may look like a singular process, it’s actually performing several functions during this period. First, it downloads the vehicle’s firmware. Upon unpacking the update, it’s instructed which Autopilot/FSD firmware should be downloaded.
While the FSD firmware is separate, the vehicle can’t download just any FSD update. The FSD version is hard-coded in the vehicle firmware that was just downloaded. This helps Tesla keep the infotainment and Autopilot firmware tightly coupled, leading to fewer issues.
What we’re seeing here is that HW3 vehicles are being told to download one FSD version, while HW4 vehicles are being told to download a different version.
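The two-stage flow described above can be sketched in a few lines of Python. Note that this is purely illustrative: the manifest structure, version strings, and function names are our own assumptions, not Tesla's actual internals.

```python
# Illustrative sketch of the two-stage update flow: the vehicle firmware,
# once unpacked, pins exactly one FSD build per Autopilot hardware platform.
# All names and version strings here are hypothetical, not Tesla's internals.

FIRMWARE_MANIFEST = {
    "HW3": "FSD v12.6.1",
    "HW4": "FSD v13.2.4",
}

def run_update(autopilot_hw: str) -> str:
    # Step 1: the car downloads and unpacks the vehicle firmware
    # (simulated here by the manifest above).
    # Step 2: the unpacked firmware dictates which Autopilot/FSD build
    # this hardware platform must fetch -- the car cannot pick another.
    fsd_build = FIRMWARE_MANIFEST[autopilot_hw]
    # Step 3: download that pinned FSD firmware and nothing else.
    return fsd_build

print(run_update("HW3"))  # -> FSD v12.6.1
print(run_update("HW4"))  # -> FSD v13.2.4
```

The key point the sketch captures is that the FSD version is a value read out of the vehicle firmware, not an independent choice, which is why one software version can carry different FSD builds for HW3 and HW4.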
While this is the first time Tesla has had two FSD versions tied to the same vehicle software version, the process hasn’t actually changed, and what we’re seeing won’t lead to faster FSD updates or the ability to download FSD separately. What we’re seeing is the direct result of the divergence of HW3 and HW4.
While HW3/4 remained basically on the same FSD version until recently, it is now necessary to deploy different versions for the two platforms. We expect this to be the norm going forward, where HW3 will be on a much different version of FSD than HW4. While each update may not include two different FSD versions going forward, we may see it occasionally, depending on which features Autopilot is dependent on.
Thanks to Greentheonly for helping us understand what happened with this release and for the insight into Tesla’s processes.
At the 2025 Consumer Electronics Show, Nvidia showed off its new consumer graphics cards, home-scale compute machines, and commercial AI offerings. One of these offerings included the new Nvidia Cosmos training system.
Nvidia is a close partner of Tesla - in fact, they produce and supply the GPUs that Tesla uses to train FSD - the H100s and soon-to-be H200s, located at the new Cortex Supercomputing Cluster at Giga Texas. Nvidia will also challenge Tesla’s lead in developing and deploying synthetic training data for an autonomous driving system - something Tesla is already doing.
However, this is far more important for other manufacturers. We’re going to take a look at what Nvidia is offering and how it compares to what Tesla is already doing. We’ve done a few deep dives into how Tesla’s FSD works, how Tesla streamlines FSD, and, more recently, how they optimize FSD. If you want to get familiar with a bit of the lingo and the background knowledge, we recommend reading those articles before continuing, but we’ll do our best to explain how all this synthetic data works.
Nvidia Cosmos
Nvidia’s Cosmos is a generative AI model created to accelerate the development of physical AI systems, including robots and autonomous vehicles. Remember - Tesla’s FSD is also the same software that powers their humanoid robot, Optimus. Nvidia is aiming to tackle physical, real-world deployments of AI anywhere from your home to your street to your workplace, just like Tesla.
Cosmos is a physics-aware engine that learns from real-world video and builds simulated video inputs. It tokenizes data to help AI systems learn quicker, all based on the video that is input into the system. Sound familiar? That’s exactly how FSD learns as well.
Cosmos also has the capability to do sensor-fused simulations. That means it can take multiple input sources - video, LiDAR, audio, or whatever else the user intends, and fuse them together into a single-world simulation for your AI model to learn from. This helps train, test, and validate autonomous vehicle behavior in a safe, synthetic format while also providing a massive breadth of data.
Data Scaling
Of course, Cosmos itself still requires video input - the more video you feed it, the more simulations it can generate and run. Data scaling is a necessity for AI applications: the more data you feed the model, the broader the range of scenarios it can generate and train itself on.
Synthetic data also has a problem - is it real? Can it predict real-world situations? In early 2024, Elon Musk commented on this problem, noting that data can scale essentially without limit both in the real world and in simulation, but that real-world data remains the better way to gather testing data. After all, no AI can fully predict the real world just yet - accurately modeling physical reality is a problem that some of the brightest minds are still working on.
Yun-Ta Tsai, an engineer on Tesla’s AI team, also noted that hand-written code and generated scenarios can't cover everything the real world produces - situations that even the wildest AI hallucinations wouldn't come up with. There are plenty of optical phenomena and real-world situations that don't fit neatly into rigid, generated training sets, so real-world data is absolutely essential for training a useful real-world AI.
Tesla has billions of miles of real-world video that can be used for training, according to Tesla’s Social Media Team Lead Viv. This much data is essential because even today, FSD encounters “edge cases” that can confuse it, slow it down, or render it incapable of continuing, throwing up the dreaded red hands telling the user to take over.
Cosmos was trained on approximately 20 million hours of footage, including human activities like walking and manipulating objects. Tesla’s fleet, on the other hand, gathers approximately 2,380 hours of real-world video every minute. At that rate, every 140 hours - just shy of 6 days - Tesla’s fleet gathers 20 million hours of footage. That’s a bit of back-of-the-napkin math, calculated at an average speed of 60 mph.
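The napkin math above is easy to check directly. The 2,380 figure is the article's own fleet estimate; everything else follows from arithmetic:

```python
# Checking the back-of-the-napkin math above.
COSMOS_TRAINING_HOURS = 20_000_000   # footage Nvidia's Cosmos was trained on
FLEET_HOURS_PER_MINUTE = 2_380       # estimated video Tesla's fleet records per minute

# Hours of wall-clock time for the fleet to record 20 million hours of video.
hours_to_match = COSMOS_TRAINING_HOURS / (FLEET_HOURS_PER_MINUTE * 60)

print(round(hours_to_match, 1))       # -> 140.1 hours
print(round(hours_to_match / 24, 1))  # -> 5.8 days, "just shy of 6 days"
```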
Generative Worlds
Both Tesla’s FSD and Nvidia’s Cosmos can generate highly realistic, physics-based worlds. These worlds are life-like environments and simulate the movement of people and traffic and the real-life position of obstacles and objects, including curbs, fences, buildings, and other objects.
Tesla uses a combination of real-world data and synthetic data, but that combination is heavily weighted toward real-world data. Meanwhile, companies that use Cosmos will be weighting their data heavily toward synthetically created situations, drastically limiting the kinds of cases they may see in their training datasets.
As such, while generative worlds may be useful to validate an AI quickly, we would argue that these worlds aren’t as useful as real-world data to do the training of an AI.
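To make the weighting idea concrete, here is a toy sampler that draws training clips mostly from a real-world pool. The 90/10 split, the clip names, and the function are all made up for illustration - Tesla has not disclosed its actual mix:

```python
import random

# Toy example of weighting a training mix toward real-world data.
# The 90/10 ratio is purely illustrative, not Tesla's actual mix.
REAL_WEIGHT = 0.9

def sample_clip(real_clips, synthetic_clips, rng):
    """Draw one training clip, heavily favoring the real-world pool."""
    pool = real_clips if rng.random() < REAL_WEIGHT else synthetic_clips
    return rng.choice(pool)

rng = random.Random(0)  # fixed seed so the run is reproducible
real = ["dashcam_rain", "dashcam_night", "dashcam_construction"]
synthetic = ["sim_jaywalker", "sim_debris"]

batch = [sample_clip(real, synthetic, rng) for _ in range(10_000)]
synthetic_share = sum(clip.startswith("sim_") for clip in batch) / len(batch)
print(synthetic_share)  # close to 0.1 - synthetic clips stay a small minority
```

The design choice the sketch highlights is that the weight applies at sampling time, so rare synthetic scenarios still appear in training without ever dominating what the model sees.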
Overall, Cosmos is an exciting step - others are clearly following in Tesla’s footsteps, but they’re extremely far behind in real-world data. Tesla has built a massive first-mover advantage in AI and autonomy, and others are now playing catch-up.
We’re excited to see how Tesla’s future deployment of its Dojo Supercomputer for Data Labelling adds to its pre-existing lead, and how Cortex will be able to expand, as well as what competitors are going to be bringing to the table. After all, competition breeds innovation - and that’s how Tesla innovated in the EV space to begin with.