How Tesla’s FSD Works - Part 2

By Karan Singh
Not a Tesla App

We previously dove into how FSD works based on Tesla’s patents back in November, and Tesla has since filed additional patents covering how it trains FSD.

This particular patent is titled “Predicting Three-Dimensional Features for Autonomous Driving” - and it’s all about using Tesla Vision to establish a ground truth, which is what enables the rest of FSD to make decisions and navigate the environment.

This patent essentially explains how FSD can generate a model of the environment around it and then analyze that information to create predictions.

Time Series

Creating a sequence of data over time - a time series - is the basis for how FSD understands the environment. Tesla Vision, in combination with the internal vehicle sensors (speed, acceleration, position, etc.), establishes data points over time. These data points come together to create the time series.
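To make that concrete, here’s a minimal sketch in Python of what a single entry in such a time series might hold. To be clear, the structure and field names are our own illustration - nothing here comes from the patent itself:

```python
from dataclasses import dataclass

# Hypothetical structure for illustration only - the field names are ours,
# not Tesla's. Each frame pairs camera images with the vehicle state
# reported by internal sensors at the same instant.
@dataclass
class TimeSeriesFrame:
    timestamp: float        # seconds since the start of the clip
    camera_images: dict     # e.g. {"front_main": <pixels>, ...}
    speed_mps: float        # vehicle speed from internal sensors
    acceleration: tuple     # (ax, ay, az) in m/s^2
    odometry_pose: tuple    # estimated (x, y, z, heading) in the world frame
    detections: dict        # per-frame perception outputs, e.g. lane-line points

# A time series is simply these frames in chronological order.
time_series: list[TimeSeriesFrame] = []
```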

By analyzing that time series, the system establishes a “ground truth” - a highly accurate and precise representation of the road, its features, and what is around the vehicle. For example, FSD may observe a lane line from multiple angles and distances as the vehicle moves, allowing it to determine the line’s precise 3D shape in the world. This helps FSD maintain a coherent ground truth as it moves forward, and lets it establish the location of things in the space around it, even if they were initially hidden or unclear.
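As a rough illustration of how many noisy views collapse into one precise estimate, the sketch below fits a smooth 3D curve to lane-line points accumulated across frames. This is our own simplification - a basic polynomial fit - not the patent’s actual method:

```python
import numpy as np

def fit_lane_ground_truth(points_world: np.ndarray, degree: int = 3):
    """points_world: (N, 3) lane-line observations gathered across frames,
    already lifted into a shared world frame using vehicle odometry."""
    x = points_world[:, 0]                                 # distance along the road
    y_coeffs = np.polyfit(x, points_world[:, 1], degree)   # lateral position
    z_coeffs = np.polyfit(x, points_world[:, 2], degree)   # road elevation
    # The fitted curve is the "ground truth" shape of the line: many noisy
    # single-frame views collapse into one precise 3D estimate.
    return y_coeffs, z_coeffs
```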

Author’s Note

Interestingly, Tesla’s patent actually mentions sensors other than Tesla Vision - radar, LiDAR, and ultrasonic sensors. While Tesla no longer uses radar (despite HD radar units being present on the current Model S and Model X) or ultrasonic sensors, it does use LiDAR for training.

However, this LiDAR use is only for establishing accurate sensor data to train FSD - no Tesla vehicle actually ships with a LiDAR sensor. You can read about Tesla’s LiDAR training rigs here.

Associating the Ground Truth

Once the ground truth is established, it is linked to specific points in time within the time series - usually a single image or an amalgamation of a set of images. This association is critical - it allows the system to predict the complete 3D structure of the environment from just a single snapshot. These associations also serve as a learning tool, helping FSD understand the environment around it.

Imagine FSD has figured out the exact curve of a lane line using data from the time series. It then connects this knowledge to the particular image in the sequence where the lane line was visible. Finally, it applies what it has learned - the exact curve, plus the image sequence and its data - to predict the 3D shape of the line going forward, even if it can’t know for certain what the line will look like ahead.
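In training terms, this association turns one driven clip into many supervised examples: each individual image gets labeled with the 3D ground truth recovered from the whole series. Here’s a hedged sketch of what that pairing could look like, reusing the hypothetical structures from above:

```python
import numpy as np

def world_to_vehicle(points_world: np.ndarray, pose) -> np.ndarray:
    """Re-express world-frame points in the vehicle's local frame at
    `pose` = (x, y, z, heading_rad). Simplified to a yaw-only rotation."""
    x, y, z, heading = pose
    shifted = points_world - np.array([x, y, z])
    c, s = np.cos(-heading), np.sin(-heading)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return shifted @ rot.T

def build_training_pairs(time_series, lane_points_world):
    """Pair each camera image with the full 3D ground truth, expressed
    relative to where the vehicle was when that image was captured."""
    pairs = []
    for frame in time_series:
        label = world_to_vehicle(lane_points_world, frame.odometry_pose)
        pairs.append((frame.camera_images["front_main"], label))
    return pairs
```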

Author’s Note

This isn’t part of the patent, but when you combine that predictive knowledge with precise and effective map data, FSD can better understand the lay of the road and plan its maneuvers ahead of time. We do know that FSD takes mapping information into account. However, live road information from the ground truth takes priority - mapping is just context, after all.

That is why, when roads are incorrectly mapped - such as when a roundabout has been installed where a 4-way stop previously existed - FSD is still capable of traversing the intersection.

Three-Dimensional Features

Representing the features that the system picks up in 3D is essential, too. The lane lines, to continue our previous example, must be tracked as they move up and down, left and right, and through time. This 3D understanding is vital for accurate navigation and path planning, especially on roads with curves, hills, or otherwise varying terrain.
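Continuing our illustration, a full 3D representation means the line’s lateral offset and its elevation are both functions of distance ahead, so hills and dips are captured along with the top-down shape. A minimal sketch using the polynomial fit from earlier:

```python
import numpy as np

def sample_lane(y_coeffs, z_coeffs, max_dist_m=100.0, step_m=1.0):
    """Sample the fitted lane curve at regular distances ahead."""
    xs = np.arange(0.0, max_dist_m, step_m)
    ys = np.polyval(y_coeffs, xs)   # left/right curvature of the line
    zs = np.polyval(z_coeffs, xs)   # rise and fall of the road surface
    return np.stack([xs, ys, zs], axis=1)   # (N, 3) points along the line
```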

Automated Training Data Generation

One of the major advantages of this entire 3D system is that it generates training data automatically. As the vehicle drives, it collects sensor data and creates time series associated with ground truths.

Tesla does exactly this when it uploads data from your vehicle and analyzes it with its supercomputers. The machine learning model uses all the information it receives to improve its prediction capabilities. This is becoming an increasingly automated process, as Tesla moves away from manually labeling data and instead labels it automatically with AI.
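Pulling the earlier sketches together, the automated loop might look something like the following - our approximation of the concept, not Tesla’s code. The key point is that driving itself produces the labels, with no human annotator involved:

```python
import numpy as np

def collect_lane_observations(time_series) -> np.ndarray:
    # Pool every frame's lane-line detections (assumed already lifted into
    # a shared world frame via odometry) into one point cloud.
    return np.concatenate(
        [f.detections["lane_points_world"] for f in time_series])

def auto_label_clip(time_series):
    points = collect_lane_observations(time_series)
    y_c, z_c = fit_lane_ground_truth(points)    # fit from the earlier sketch
    curve = sample_lane(y_c, z_c)               # 3D curve from the earlier sketch
    return build_training_pairs(time_series, curve)   # image -> 3D label
```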

Semantic Labeling

The patent also discusses the use of semantic labeling - a topic covered in our AI Labelling Patent article. The short version: Tesla labels lane lines as “left lane” or “right lane,” depending on the 3D environment generated through the time series.

On top of that, vehicles and other objects can also be labeled - as “merging” or “cutting in,” for example. All of these automatically applied labels help FSD prioritize how it analyzes information and anticipate what the environment around it will do.
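As a toy illustration - the label names come from the article, but the assignment logic and thresholds are entirely our own invention - it’s the geometry of the 3D environment that decides which label applies:

```python
def label_lane_line(lateral_offset_m: float) -> str:
    # Negative offset = line sits to the vehicle's left, positive = right.
    return "left lane" if lateral_offset_m < 0 else "right lane"

def label_vehicle(lateral_speed_mps: float, toward_ego_lane: bool) -> str:
    # A vehicle drifting toward the ego lane gets flagged so the planner
    # prioritizes it; the 0.5 m/s threshold is invented for illustration.
    if toward_ego_lane and lateral_speed_mps > 0.5:
        return "cutting in"
    return "tracking lane"
```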

How and When Tesla Uploads Data

Tesla’s data upload isn’t simply everything its vehicles capture - even though it did draw an absolutely astounding 1.28 TB from the author’s Cybertruck once it received FSD V13.2.2. Instead, vehicles transmit selective sensor information in response to triggers. These triggers can include incorrect predictions, user interventions, or failures to correctly conduct path planning.

Tesla can also request all data from certain vehicles based on vehicle type and location - hence that absurd 1.28 TB request landing on one of the first Canadian Cybertrucks. This lets Tesla collect data from specific driving scenarios, which it needs to build models that adapt to more circumstances, while also keeping data collection focused and training efficient.
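Here’s a rough sketch of that trigger logic as we understand it from the description above; the event names and fields are hypothetical:

```python
TRIGGERS = {"incorrect_prediction", "user_intervention", "planning_failure"}

def should_upload(clip_events, vehicle=None, fleet_request=None) -> bool:
    """Queue a clip for upload if it tripped a trigger, or if it matches
    an active fleet-wide request for this vehicle type and region."""
    if any(event in TRIGGERS for event in clip_events):
        return True
    if fleet_request is not None and vehicle is not None:
        return (vehicle["model"] == fleet_request["model"]
                and vehicle["region"] == fleet_request["region"])
    return False
```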

How It Works

To wrap it all up: the model applies predictions to navigate the environment more effectively. Data collected over time is encapsulated in a 3D model of the surroundings, and from that 3D environment, Tesla’s FSD formulates predictions about what the road ahead will look like.

This process provides a good portion of the context that is needed for FSD to actually make decisions. But there are quite a few more layers to the onion that is FSD.

Adding in Other Layers

The rest of the decision-making process lies in understanding moving and static objects on the road, as well as identifying and reducing risk to vulnerable road users. Tesla’s 3D mapping also identifies and predicts the paths of other moving objects, which enables it to conduct its own path planning. While this isn’t part of this particular patent per se, it is still an essential element of the entire system.

If all that technical information is interesting to you, we recommend checking out the rest of our series on Tesla’s patents.

We’ll continue to dive deep into Tesla’s patents, as they provide a unique and interesting way to explain how FSD actually works behind the curtain. It’s an excellent chance to peek inside the silicon brains that make the decisions in your car, and to see how Tesla’s engineers actually structure FSD.

Tesla Robotaxi to Expand Service Area / Geofence This Weekend

By Karan Singh
Tesla's Robotaxi initial service area
Not a Tesla App

Last night on X, Elon Musk confirmed that Tesla will be expanding the service area for its Robotaxi Network pilot in Austin, Texas, this coming weekend. This is the first official confirmation we’ve had of a date for expansion, following news that Tesla is hiring more Vehicle Operators and plans to expand the Robotaxi supervisor ratio in the coming months.

This is a sign of Tesla’s confidence in the Robotaxi pilot program and its current FSD builds.

Confirmation of Validation

The announcement confirms recent sightings of Tesla’s engineering validation vehicles focusing on areas outside the initial geofence. That testing suggests Tesla was finalizing FSD builds and gathering the necessary safety data to push the service’s boundaries, and this weekend’s expansion will be the first direct result of that work.

Given the increase in service zone size, this expansion will also likely add more vehicles to the initial Robotaxi fleet of approximately 20. Based on Tesla’s previously expected rollout schedule, we expect the new total to be anywhere from 30 to 50 vehicles serving the original and new areas combined.

What to Expect

While the exact new boundaries haven’t been released, it is almost certain that the expansion will include the South Congress Bridge and Austin’s downtown core. Expanding into a dense urban zone means more complex intersections, heavy pedestrian traffic, and a unique road layout - a major vote of confidence in Robotaxi FSD’s capabilities.

The expansion will also help Tesla close the service-area gap with Waymo, its primary autonomous competitor in the city. This quick expansion is a sign of just how scalable Tesla’s vision-only approach is compared to Waymo’s arduous, drawn-out mapping process.

We also expect that, alongside this first service-zone expansion, Tesla will continue inviting more people to its Robotaxi Network in the coming weeks. Tesla has already sent out several rounds of invites, as it needs riders to keep exercising the system. If you’re waiting for an invite, it may be time to start getting excited about the next rollout.

Musk: Grok AI Arriving in Teslas Next Week

By Not a Tesla App Staff
Greentheonly / Not a Tesla App

We’ve been hearing about Grok, xAI’s AI assistant, coming to Teslas for almost two years now, and it’s finally about to come to fruition. xAI unveiled Grok 4 last night, and while the entire stream never mentioned Teslas, Musk later posted on X that Grok will arrive in Tesla vehicles “by next week.”

Between leaks and the Grok mobile app, there’s a lot we already know about Grok, but a few missing pieces will only be cleared up when it finally arrives.

Next Week, or Next Next Week?

Musk said that Grok would arrive by next week, meaning it could arrive before then. However, based on how Musk typically states Tesla timelines, there are a few things to consider that give us a better idea of what to expect.

First, whenever Musk posts a Tesla timeline on X, he typically means the release to employees, not the public release. Expect the same here.

Tesla releases software updates to employees first for a final round of testing before starting a gradual release to the public. Sometimes issues are found, especially with FSD updates, and the update needs some fixes before being released publicly. So expect employees to get it by next week, and not necessarily normal Tesla owners.

The second part is that Tesla always rolls out its updates gradually, so when Grok does finally arrive, it’ll only be available on a small percentage of vehicles. Tesla will monitor issues and logs, continuing the rollout as long as no major problems are found.

Which Software Update?

The entire Grok UI was already included in software update 2025.20, but it’s not exposed to users. Typically, a new feature like Grok requires a vehicle update to be added; however, this version may be different, as it’s locked behind a server-side configuration.

The Tesla app was recently updated to support logging in to Grok, so it appears that all or most of the necessary pieces are already in place.

Tesla likely has the ability to enable it for all supported vehicles with a simple switch. However, we feel more confident that it will roll out with Tesla’s next major update, likely 2025.24 or 2025.26. Rolling it out in a new update aligns with how Tesla has historically introduced features.
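For what it’s worth, a server-side flag combined with a staged, percentage-based rollout is a common industry pattern. Tesla’s actual mechanism isn’t public, so the sketch below is purely illustrative:

```python
import hashlib

def in_rollout(vehicle_id: str, rollout_percent: float) -> bool:
    # Hash the vehicle ID into a stable bucket from 0-99, so the same cars
    # stay enabled as the percentage ramps up.
    bucket = int(hashlib.sha256(vehicle_id.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

def grok_visible(vehicle_id: str, server_flags: dict) -> bool:
    # The UI ships in the software build but stays hidden until the backend
    # flag is on AND the vehicle falls inside the current rollout slice.
    return bool(server_flags.get("grok")) and in_rollout(
        vehicle_id, server_flags.get("grok_rollout_percent", 0.0))
```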

If Tesla turned it on for everyone at the same time, it would expose the whole fleet to potential new issues, rather than just a smaller segment of users. While Grok is now well-tested through X and the Grok app, several elements are new in Teslas - likely including the ability to control vehicle functions, such as opening the glove box and other things voice commands can currently do. The in-vehicle Grok interface is also entirely new and may have bugs that need to be addressed, especially if they impact other features.

What we can likely expect is that Tesla will make some tweaks or bug fixes to Grok that weren’t included in update 2025.20, ship them with the next major update, and begin rolling it out to employees and then customers.

Supported Vehicles

Thanks to the behind-the-scenes look at Grok, we have a good idea of which vehicles will be supported. Tesla uses the same code for most of its vehicles, but it’s compiled separately for each type of hardware, and only the code a given vehicle needs is included - some pieces are left out entirely. Unfortunately, the Grok code is not included in Intel-based software builds, meaning that only AMD Ryzen-based vehicles will receive Grok, at least initially.

We’ve seen Tesla go back and add Intel support after initially releasing a feature for AMD vehicles - the weather radar overlay and several other features come to mind. However, Tesla has lately been building features with web technologies. While this makes development easier, those features just don’t perform as well on the slower Intel hardware, causing it to be left out. We saw this with the new Dashcam Viewer, which is coded entirely in HTML, CSS, and JS: it was available on HW3 and HW4 vehicles, but only those with the Ryzen infotainment processor.

What to Expect

There’s a lot we’re expecting from Grok in Teslas. Some people will absolutely love it, because it’ll transform their drives from a solitary experience into feeling like they have a knowledgeable person sitting right next to them. Given the recent controversies surrounding Grok, others will strongly oppose it - hopefully, Tesla makes it easy for those users to turn Grok off.

The voice command system, activated through the steering wheel button, is expected to be replaced with Grok. This means you’ll be able to talk to your vehicle much more naturally, rather than having to remember specific syntax and commands - a major improvement.

We’re personally looking forward to just being able to ask the questions that pop into our heads while driving, such as “What’s the date of Tesla’s next event?” or “How many miles away is Mars?” Knowledge will be available at the touch of a finger and more accessible than ever.

Grok is also expected to support continuous conversations, meaning that you’ll be able to hold a conversation with it and go back and forth about a certain topic. While there are hints of a wake word in the code, for now, it seems like you’ll press the steering wheel button once to activate it, and then again to turn it off.

For those excited about AI and Grok, this will be one of the biggest additions to Tesla’s software in years, possibly only rivaled by the Dashcam / Sentry Mode feature and FSD Beta.

It shouldn’t be long now before we all have a chance to try it out for ourselves.
