Camouflaged Tesla Robotaxi Prototype Spotted at Warner Bros. Studio

By Karan Singh
Not a Tesla App

The first Robotaxi prototype has been spotted, wearing a hefty layer of camouflage over its body, at Warner Bros. Studios, where the Robotaxi event is expected to take place. The news comes right after Tesla announced that a random drawing would be held for Robotaxi event tickets.

Tesla has already begun gathering data in the WB Studio area using its Robotaxi mules – but this is the first actual Robotaxi prototype we’ve seen. The mule vehicles have been Model 3s with odd-shaped camera mounts in the rear windows, which appear to roughly line up with the shape of the new vehicle rolling around the studio streets today.

Robotaxi Prototype

The prototype itself appears to be roughly the same size and shape as the Robotaxi concepts we’ve seen, along with a box on the back, possibly to make its shape harder to discern. Of course, the Tesla-like headlights, aero caps, and curves are fairly obvious to the seasoned observer, and the vehicle looks like a compact Model Y.

Most interestingly, it seems that Tesla has camouflaged this vehicle to try to prevent people from discerning what it is – but astute Tesla fans on Reddit have noted seeing this vehicle and several others wrapped in eye-catching bright yellow, along with fake body panels.

Image credit: @philroberts

It seems the prototypes have two doors and steering wheels, at least for now. In a now-deleted comment, a Reddit user also mentioned catching sight of a straight, solid rear light bar, similar to the Cybertruck’s – and perhaps to the leaked Model Y Juniper that was seen last month.

Unlike the Cybertruck unveiling, where the “unbreakable” windows shattered on stage, Tesla appears to be putting a lot of effort into preparing for this event and making sure everything goes right. It definitely feels like attendees will get to experience calling for a Robotaxi and getting a ride at the Warner Bros. studio, which will make this unlike any other Tesla event before it.

We’re looking forward to seeing all the cool things Tesla will reveal at the upcoming Robotaxi event on October 10th. If you haven’t already entered the random drawing for tickets, be sure to sign up soon, as entries close on Tuesday, September 17th!

Ordering a New Tesla?

Consider using our referral code (karan29050) to get up to $2,000 off your new Tesla and get 3 Months of FSD for free.

Nvidia’s Cosmos Offers Synthetic Training Data, Following Tesla’s Lead

By Karan Singh
Not a Tesla App

At the 2025 Consumer Electronics Show, Nvidia showed off its new consumer graphics cards, home-scale compute machines, and commercial AI offerings. Among them was the new Nvidia Cosmos training system.

Nvidia is a close partner of Tesla - in fact, it produces and supplies the GPUs that Tesla uses to train FSD: the H100s, and soon the H200s, housed at the new Cortex supercomputing cluster at Giga Texas. With Cosmos, Nvidia will also challenge Tesla’s lead in developing and deploying synthetic training data for autonomous driving systems - something Tesla is already doing.

However, this is far more important for other manufacturers. We’re going to take a look at what Nvidia is offering and how it compares to what Tesla is already doing. We’ve done a few deep dives into how Tesla’s FSD works, how Tesla streamlines FSD, and, more recently, how they optimize FSD. If you want to get familiar with a bit of the lingo and the background knowledge, we recommend reading those articles before continuing, but we’ll do our best to explain how all this synthetic data works.

Nvidia Cosmos

Nvidia’s Cosmos is a generative AI model created to accelerate the development of physical AI systems, including robots and autonomous vehicles. Remember - Tesla’s FSD is the same underlying software that powers its humanoid robot, Optimus. Like Tesla, Nvidia is aiming to tackle physical, real-world deployments of AI everywhere from your home to your street to your workplace.

Cosmos is a physics-aware engine that learns from real-world video and builds simulated video inputs. It tokenizes the video fed into the system so that AI models can learn from it more quickly. Sound familiar? That’s essentially how FSD learns as well.
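To make the idea of tokenization a bit more concrete, here’s a toy sketch that turns a video clip into a short sequence of discrete tokens using vector quantization. It’s purely illustrative - the patch size, codebook, and shapes are all made up, and Cosmos’ real tokenizer is far more sophisticated.

```python
# Toy video "tokenizer" using vector quantization - purely illustrative.
import numpy as np

rng = np.random.default_rng(0)

# A fake video clip: 8 frames of 32x32 grayscale pixels.
video = rng.random((8, 32, 32)).astype(np.float32)

# Split each frame into a 4x4 grid of 8x8 patches, one 64-dim vector per patch.
patches = (
    video.reshape(8, 4, 8, 4, 8)
         .transpose(0, 1, 3, 2, 4)
         .reshape(-1, 64)
)

# A learned codebook would come from training; here it is random.
codebook = rng.random((512, 64)).astype(np.float32)  # 512 discrete tokens

# Replace each patch with the index of its nearest codebook entry.
distances = ((patches[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
tokens = distances.argmin(axis=1)

print(tokens.shape)  # (128,) - the clip is now a short sequence of integers
print(tokens[:10])   # ready to be fed to a sequence model
```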

Cosmos also has the capability to run sensor-fused simulations. That means it can take multiple input sources - video, LiDAR, audio, or whatever else the user intends - and fuse them together into a single world simulation for your AI model to learn from. This helps train, test, and validate autonomous vehicle behavior in a safe, synthetic format while also providing a massive breadth of data.
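To give a rough idea of what sensor fusion looks like in practice, the sketch below pairs camera frames with the closest-in-time LiDAR sweep so the two can be consumed as one fused sample. The data structures, timestamps, and tolerance are invented for the example - this is not how Cosmos actually implements it.

```python
# Toy sensor fusion: pair each camera frame with the closest-in-time LiDAR
# sweep so a model can consume them as one sample. The data structures and
# the 50 ms tolerance are invented for this example.
from dataclasses import dataclass

@dataclass
class FusedSample:
    timestamp: float
    camera_frame: list  # stand-in for an image tensor
    lidar_points: list  # stand-in for a point cloud

def fuse(camera_stream, lidar_stream, max_skew=0.05):
    """Pair each (timestamp, frame) with the nearest (timestamp, sweep)."""
    fused = []
    for cam_ts, frame in camera_stream:
        lidar_ts, points = min(lidar_stream, key=lambda s: abs(s[0] - cam_ts))
        if abs(lidar_ts - cam_ts) <= max_skew:  # drop badly misaligned pairs
            fused.append(FusedSample(cam_ts, frame, points))
    return fused

camera = [(0.00, ["img0"]), (0.10, ["img1"])]
lidar = [(0.01, ["pts0"]), (0.09, ["pts1"])]
print(fuse(camera, lidar))
```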

Data Scaling

Of course, Cosmos itself still requires video input - the more video you feed it, the more simulations it can generate and run. Data scaling is a necessity for AI applications: the more data you feed the system, the more scenarios it can build and train itself on.

Synthetic data also has a problem - how real is it? Can it predict real-world situations? In early 2024, Elon Musk commented on this problem, noting that while data can be scaled up both in the real world and in simulation, real-world data remains the better way to gather testing data. After all, no AI can fully predict the real world just yet - in fact, that’s the kind of problem the brightest minds in quantum computing are working on.

Yun-Ta Tsai, an engineer on Tesla’s AI team, also mentioned that hand-written code or generated scenarios can’t cover what even the wildest AI hallucinations might come up with. There are plenty of optical phenomena and real-world situations that don’t fit neatly into the rigid training sets an AI would generate on its own, so real-world data is absolutely essential for training a useful real-world AI.

Tesla has billions of miles of real-world video that can be used for training, according to Tesla’s Social Media Team Lead Viv. This much data is essential because even today, FSD encounters “edge cases” that can confuse it, slow it down, or render it incapable of continuing, throwing up the dreaded red hands telling the user to take over.

Cosmos was trained on approximately 20 million hours of footage, including human activities like walking and manipulating objects. For comparison, Tesla’s fleet records approximately 2,380 hours of real-world video every minute. At that rate, the fleet gathers 20 million hours of footage every 140 hours - just shy of 6 days. That’s back-of-the-napkin math, calculated at an average speed of 60 mph.
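For anyone who wants to check the napkin, here’s the arithmetic behind those figures, using the approximate numbers quoted above:

```python
# Back-of-the-napkin check of the fleet data rate quoted above.
# Both figures are the article's approximations, not official numbers.
hours_of_video_per_minute = 2_380      # fleet-wide recording rate
cosmos_training_hours = 20_000_000     # roughly Cosmos' training corpus

minutes_needed = cosmos_training_hours / hours_of_video_per_minute
hours_needed = minutes_needed / 60
print(round(hours_needed))          # ~140 hours
print(round(hours_needed / 24, 1))  # ~5.8 days, just shy of 6
```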

Generative Worlds

Both Tesla’s FSD and Nvidia’s Cosmos can generate highly realistic, physics-based worlds. These life-like environments simulate the movement of people and traffic, as well as the real-world placement of obstacles such as curbs, fences, and buildings.

Tesla trains on a combination of real-world and synthetic data, but that mix is heavily weighted toward real-world footage. Meanwhile, companies that rely on Cosmos will be weighting their data heavily toward synthetically created situations, which drastically limits the variety of cases that show up in their training datasets.
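As a simple illustration of what that weighting means in practice, the sketch below draws training clips from a mixed pool of real and synthetic data. The 90/10 split and the clip names are made up for the example - Tesla hasn’t published its actual mix.

```python
# Toy example of weighting a training mix toward real-world data.
import random

real_clips = [f"real_{i}" for i in range(1000)]
synthetic_clips = [f"synth_{i}" for i in range(1000)]

def sample_batch(batch_size=8, real_weight=0.9):
    """Draw each clip from the real pool with probability real_weight."""
    batch = []
    for _ in range(batch_size):
        pool = real_clips if random.random() < real_weight else synthetic_clips
        batch.append(random.choice(pool))
    return batch

print(sample_batch())
```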

As such, while generative worlds may be useful for quickly validating an AI, we would argue that they aren’t as useful as real-world data when it comes to actually training one.

Overall, Cosmos is an exciting step - others are clearly following in Tesla’s footsteps, but they’re extremely far behind in real-world data. Tesla has built a massive first-mover advantage in AI and autonomy, and others are now playing catch-up.

We’re excited to see how Tesla’s future deployment of its Dojo supercomputer for data labelling adds to its pre-existing lead, how Cortex will be able to expand, and what competitors will bring to the table. After all, competition breeds innovation - and that’s how Tesla innovated in the EV space to begin with.

Tesla Releases FSD V12.6.1 for Model 3 & Model Y

By Karan Singh
Not a Tesla App

Last night, Tesla released software update 2024.45.25.15, which includes FSD V12.6.1. This update adds support for additional HW3 vehicles, including the Model 3 and Model Y. We’re excited to see the continued support for HW3 owners.

FSD V12.6.1

V12.6.1 is now going wide, according to Ashok Elluswamy, Tesla’s VP of AI. This update is going to the Model 3 and Model Y for the first time - as only the Model S and Model X were included in FSD V12.6. 

V12.6 is a big step forward for HW3 - it includes End-to-End on Highway, Improved City Streets Behavior, and Smoother and More Accurate Tracking, all contributing towards a better, smoother, and more comfortable build of FSD. You can read our comparison between FSD V12.6 and V13.2.2 here.

In short, FSD V12.6 performs considerably closer to V13 than V12.5.4.2 did - a massive improvement. It performs about as well as the Cybertruck version of FSD V13, which is still missing a few features compared to other HW4 vehicles, but it’s a great sign for HW3. Much of the improvement comes down to better lane selection and decision-making: the vehicle hesitates far less on V12.6, making the ride a lot smoother. Many early V12.6 testers mentioned that it felt more like a V13-mini than anything else.

Legacy Model S & X

We haven’t seen this update hit any legacy Model S and Model X vehicles just yet. We’re not sure whether Ashok’s statement about the release going wide applies here - but it should. If you do get the update, please let us know.

Legacy Model S and Model X vehicles are still on an older FSD build and may not see another FSD update for a little while longer. While they have the same FSD computer as other vehicles, there are enough other hardware differences that they require a build specific to these vehicles.

Release Date

Update 2024.45.25.15

FSD Supervised 12.6.1 & 13.2.4
Installed on 0.5% of fleet
11 Installs today
Last updated: Jan 13, 3:10 pm UTC

FSD V12.6.1 is going out now to the redesigned Model S and X with HW3 and all Model 3 and Model Y vehicles with HW3. The initial wave went out last night, and we expect to see more later today or tomorrow. If this release ends up going “wide,” we should see much larger waves go out next week.

