At the 2025 Consumer Electronics Show, Nvidia showed off its new consumer graphics cards, home-scale compute machines, and commercial AI offerings. One of those offerings was the new Nvidia Cosmos training system.
Nvidia is a close partner of Tesla - it produces and supplies the GPUs Tesla uses to train FSD, the H100s and soon-to-arrive H200s housed at the new Cortex Supercomputing Cluster at Giga Texas. With Cosmos, Nvidia will also challenge Tesla’s lead in developing and deploying synthetic training data for autonomous driving - something Tesla is already doing.
However, this is far more important for other manufacturers. We’re going to take a look at what Nvidia is offering and how it compares to what Tesla is already doing. We’ve done a few deep dives into how Tesla’s FSD works, how Tesla streamlines FSD, and, more recently, how they optimize FSD. If you want to get familiar with a bit of the lingo and the background knowledge, we recommend reading those articles before continuing, but we’ll do our best to explain how all this synthetic data works.
Nvidia Cosmos
Nvidia’s Cosmos is a generative AI model created to accelerate the development of physical AI systems, including robots and autonomous vehicles. Remember - Tesla’s FSD is the same underlying software that powers its humanoid robot, Optimus. Like Tesla, Nvidia is aiming to tackle physical, real-world deployments of AI everywhere from your home to your street to your workplace.
Cosmos is a physics-aware engine that learns from real-world video and builds simulated video inputs. It tokenizes the video fed into it to help AI systems learn more quickly. Sound familiar? That’s exactly how FSD learns as well.
Cosmos can also run sensor-fused simulations. That means it can take multiple input sources - video, LiDAR, audio, or whatever else the user supplies - and fuse them into a single world simulation for an AI model to learn from. This helps train, test, and validate autonomous vehicle behavior in a safe, synthetic format while also providing a massive breadth of data.
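To make the tokenize-and-fuse idea a little more concrete, here’s a minimal sketch in Python. Everything in it is an assumption for illustration only - the function names, patch size, and hash-based stand-in for a codebook are invented, and a real system like Cosmos uses a learned video tokenizer - but it shows the basic flow: each sensor frame becomes a sequence of discrete tokens, and the streams are merged into one sequence a world model can train on.

```python
# Conceptual sketch only: a toy illustration of "tokenize and fuse" for
# multi-sensor training data. Nothing here is Nvidia's or Tesla's actual API.
import numpy as np

def tokenize(frame: np.ndarray, codebook_size: int = 1024, patch: int = 16) -> np.ndarray:
    """Split a sensor frame into patches and map each patch to a discrete token ID."""
    h, w = frame.shape[:2]
    tokens = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            # A real tokenizer would use a learned codebook (e.g. a VQ-VAE);
            # here we simply hash the patch contents into a token ID.
            tokens.append(hash(frame[y:y + patch, x:x + patch].tobytes()) % codebook_size)
    return np.array(tokens, dtype=np.int64)

def fuse_step(camera: np.ndarray, lidar: np.ndarray) -> np.ndarray:
    """Fuse one timestep of camera and LiDAR data into a single token sequence."""
    return np.concatenate([tokenize(camera), tokenize(lidar)])

# One synthetic timestep: an 8-bit camera frame and a LiDAR range image.
camera_frame = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)
lidar_frame = np.random.rand(64, 128).astype(np.float32)
sequence = fuse_step(camera_frame, lidar_frame)
print(f"{sequence.size} tokens for this timestep")  # ready to feed a world model
```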
Data Scaling
Of course, Cosmos itself still requires video input - the more video you feed it, the more simulations it can generate and run. Data scaling is a necessity for AI applications: the more data you feed a model, the wider the range of scenarios it can train itself on.
Synthetic data also has a problem - is it realistic? Can it predict real-world situations? In early 2024, Elon Musk commented on this problem, noting that data scales infinitely in both the real world and in simulation. Even so, real-world data remains the better way to gather training and testing data. After all, no AI can truly predict the real world just yet - that remains the kind of problem the brightest minds, including those in quantum computing, are still working on.
Yun-Ta Tsai, an engineer on Tesla’s AI team, also mentioned that hand-writing code or generating scenarios can’t cover everything the real world throws at a vehicle - not even the wildest AI hallucinations would come up with some of it. There are plenty of optical phenomena and real-world situations that don’t fit neatly into the rigid training sets an AI would generate on its own, so real-world data is absolutely essential for training a useful real-world AI.
Tesla has billions of miles of real-world video that can be used for training, according to Tesla’s Social Media Team Lead Viv. This much data is essential because even today, FSD encounters “edge cases” that can confuse it, slow it down, or render it incapable of continuing, throwing up the dreaded red hands telling the user to take over.
Cosmos was trained on approximately 20 million hours of footage, including human activities like walking and manipulating objects. For comparison, Tesla’s fleet gathers roughly 2,380 hours of real-world video every minute. At that rate, the fleet collects 20 million hours of footage every 140 hours or so - just shy of 6 days. That’s back-of-the-napkin math, assuming an average speed of 60 mph.
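Here’s that napkin math spelled out, using the figures above. The 60 mph average speed is the rough assumption the estimate leans on; nothing here comes from Tesla directly.

```python
# Back-of-the-napkin check of the fleet data rate quoted above.
hours_of_video_per_minute = 2_380        # fleet-wide recording rate (estimate from the article)
cosmos_training_set_hours = 20_000_000   # footage Cosmos was reportedly trained on

minutes_needed = cosmos_training_set_hours / hours_of_video_per_minute
hours_needed = minutes_needed / 60
print(f"{hours_needed:.0f} hours, or about {hours_needed / 24:.1f} days")
# -> roughly 140 hours, just shy of 6 days

# At an assumed 60 mph average speed, that much footage corresponds to:
miles_covered = cosmos_training_set_hours * 60
print(f"{miles_covered / 1e9:.1f} billion miles of driving")
```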
Generative Worlds
Both Tesla’s FSD and Nvidia’s Cosmos can generate highly realistic, physics-based worlds. These lifelike environments simulate the movement of people and traffic, along with the real-world positions of obstacles such as curbs, fences, and buildings.
Tesla trains on a combination of real-world and synthetic data, weighted heavily toward real-world data. Meanwhile, companies that use Cosmos will weight their data heavily toward synthetically created situations, drastically limiting the kinds of cases that appear in their training datasets.
As such, while generative worlds may be useful for validating an AI quickly, we would argue they aren’t as useful as real-world data for actually training one.
Overall, Cosmos is an exciting step - others are clearly following in Tesla’s footsteps, but they’re extremely far behind in real-world data. Tesla has built a massive first-mover advantage in AI and autonomy, and others are now playing catch-up.
We’re excited to see how Tesla’s future deployment of its Dojo Supercomputer for Data Labelling adds to its pre-existing lead, and how Cortex will be able to expand, as well as what competitors are going to be bringing to the table. After all, competition breeds innovation - and that’s how Tesla innovated in the EV space to begin with.
Tesla has released software update 2025.2.6, and while minor updates typically focus on bug fixes, this one introduces a major new feature. With this update, Tesla has activated the in-cabin radar, a sensor that has been included in some vehicles for more than three years but remained unused until now.
Why Not Vision?
Unlike vision-based systems, radar can precisely measure object dimensions and even detect movement behind obstacles by bouncing radio waves off surrounding surfaces. This allows for more accurate and reliable measurements of objects that vision may not be able to see at all, such as those behind the front seats.
What Tesla Announced
Tesla recently highlighted the 4D radar in the new Model Y, explaining how it will improve passenger safety. Tesla executives stated that the radar would be used to properly classify passengers and improve the way airbags deploy.
Tesla went on to say that a future update will use the in-cabin radar to detect passengers left behind in the vehicle. Since radar can even pick up on heartbeat and breathing patterns, it can provide a much more accurate method of detecting children left in a vehicle. Tesla explained that the vehicle will send owners a notification via the Tesla app and enable the HVAC system if it detects a passenger inside, and it’ll even call emergency services if needed.
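Tesla hasn’t shared how this logic is implemented, but the described flow is simple enough to sketch. The snippet below is purely hypothetical - the function, thresholds, and action names are ours, not Tesla’s - and only mirrors the steps Tesla outlined: notify the owner, condition the cabin, and escalate to emergency services if needed.

```python
# Hypothetical sketch of the occupant-detection flow Tesla describes; none of
# these functions or thresholds are Tesla's, they just mirror the described steps.
from dataclasses import dataclass

@dataclass
class CabinReading:
    occupant_detected: bool   # e.g. inferred from breathing/heartbeat patterns
    cabin_temp_c: float
    minutes_unattended: int

def handle_parked_cabin_check(reading: CabinReading) -> list[str]:
    """Return the actions the vehicle would take for one radar reading."""
    actions = []
    if not reading.occupant_detected:
        return actions
    actions.append("send_app_notification")           # alert the owner first
    if reading.cabin_temp_c > 30 or reading.cabin_temp_c < 5:
        actions.append("enable_hvac")                  # keep the cabin at a safe temperature
    if reading.minutes_unattended > 15:
        actions.append("call_emergency_services")      # escalate if no one responds
    return actions

print(handle_parked_cabin_check(CabinReading(True, 38.0, 20)))
# -> ['send_app_notification', 'enable_hvac', 'call_emergency_services']
```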
New Feature in Update 2025.2.6
In update 2025.2.6, Tesla has officially named this feature the “First-Row Cabin Sensing Update,” which appears to align with the first portion of what Tesla discussed in the new Model Y video.
In the release notes, Tesla describes the update as:
“The first-row cabin sensing system has been updated to use cabin radar, which is now standard in all new 2025 Model Ys. Your Model Y was built pre-equipped with the necessary hardware, allowing Tesla to also bring this technology to your vehicle.”
For now, it appears that Tesla is using the radar to detect and classify passengers in the front seats. This could eventually replace traditional seat sensors, reducing the number of hardware components and lowering production costs.
Tesla plans to expand the feature later this year, bringing rear-seat passenger detection in Q3 2025. While Tesla talked about the feature for the new Model Y, we expect it to be available for all vehicles with the in-cabin radar.
Supported Models
Although Tesla is vague in their release notes, this feature is being added to all Model Ys that include a cabin radar. Tesla started including the cabin radar in 2022, but its availability may vary by region and model. The Model 3 didn’t receive the cabin radar until it was redesigned in 2024, while all Cybertrucks already include it.
The owner’s manual for the redesigned Model S and Model X doesn’t specifically mention the interior radar, although Greentheonly believes those vehicles include one as well, so we’ll have to wait to see whether they receive this new feature.
At this time, the feature appears to be going out only to Model Y vehicles, but we expect it to become available on other supported models soon.
We love to see these kinds of updates. Tesla is increasing the safety of existing and new vehicles through a software update while also making them more affordable to own.
Tesla has updated the Tesla app to version 4.42.0, and this time, it’s more than just bug fixes. The app includes a new service interface, introduces support for the new Model Y, and, for the first time, includes some code for the Robotaxi coming later this year.
This update was released for iOS and should be available on Android within a few days.
Refreshed Model Y 3D Model
First up in the update is the introduction of the 3D model for the refreshed Model Y. Interestingly, while we all know it as Juniper, the file code name inside the update lists the vehicle as “Bayberry.” The Bayberry name was introduced in Tesla app update 4.41.5. Tesla’s internal code names sometimes change as the vehicle evolves - and we’ll continue to refer to it as the refreshed or new Model Y for ease of understanding.
A rear-angle shot of the refreshed Model Y in the Tesla app (image: @olympusdev_ on X)
As usual with Tesla’s 3D models in the app, there’s a lot of detail, although it’s not easy to see unless you pinch and zoom the model in the app. The 3D models used in the app are actually the same models Tesla uses in the vehicle - sometimes with different lighting effects - and they’re all highly detailed.
Robotaxi API
Tesla has added a new endpoint in their app for Robotaxi - and it’s the very first Robotaxi or Cybercab-related item we’ve seen in the app. With the Robotaxi fleet launching in June, according to Tesla, it looks like they’re now adding support to the Tesla app.
What the Robotaxi interface is supposed to look like in the future (image: Not a Tesla App)
The new app API is called “rides_feedback_upload,” which seems fairly self-explanatory. Tesla will need to gather a lot of information on ride quality and all the little things in between - and what better way than getting feedback directly from users?
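We don’t know what this endpoint actually expects - only its name has shown up in the app. Purely as an illustration of the kind of payload a ride-feedback upload might carry, here’s a hypothetical sketch; the URL, fields, and auth header are all guesses, not Tesla’s API.

```python
# Hypothetical illustration only: the endpoint name comes from the app's code,
# but the URL, payload fields, and auth shown here are guesses, not Tesla's API.
import json
import urllib.request

feedback = {
    "ride_id": "example-ride-123",   # made-up identifier
    "rating": 4,                     # e.g. a 1-5 star rating
    "tags": ["smooth_pickup", "late_arrival"],
    "comment": "Dropoff spot was a little awkward.",
}

req = urllib.request.Request(
    "https://owner-api.example.com/rides_feedback_upload",  # placeholder host
    data=json.dumps(feedback).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(req)  # not executed here; the endpoint is hypothetical
```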
While Tesla previously released prototype images of what the Robotaxi app will look like, the introduction of this API into the Tesla app leads us to believe that Tesla will utilize the current app for Robotaxi use.
Updated Service Interface
The updated Service panel in the Tesla app (image: Not a Tesla App)
Tesla has released an updated UI for the Tesla Service panel, and we have a ton of details on these changes. This new pane displays appointment details more prominently. If you have a service appointment scheduled, you’ll now see a lot more details on the main service screen. The app will now display:
Your current service status
Appointment date and time, which you can now tap on to add the event to your calendar
Address and hours of the service center. You can now also tap on the address to open up the location in your maps app
There’s also a new appointment details screen (the right portion of the image). This screen displays additional details that were previously unavailable, such as your transport type. The app will display whether you’ll get a loaner vehicle, demo vehicle, or something else.
There are a ton of user experience (UX) improvements to the service sections in this update, including clearer language, UI refinements, fixes to images, and more.
Tesla has been making a lot of positive updates to the Service-related sections of the app lately, and we’re happy to see these coming rapid-fire. Tesla Service is now easier to use and understand. In the previous app update, Tesla also added the ability to pull down to update the service screens.