Thanks to a Tesla patent published last year, we have a great look into how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, also highlighted this one on X.
This patent breaks down the core technology used in Tesla’s FSD and gives us a great understanding of how FSD processes and analyzes data.
To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.
Vision-Based
First, the patent describes a vision-only system—in line with Tesla's camera-only approach—that enables vehicles to see, understand, and interact with the world around them. It relies on multiple cameras, some with overlapping coverage, that together capture a 360-degree view around the vehicle, mimicking and improving on human vision.
What's most interesting is that the system adapts on the fly to the varying focal lengths and perspectives of the different cameras around the vehicle, then combines them all into a cohesive picture—but we'll get to that part shortly.
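To make that concrete, here's a minimal Python sketch of how a set of overlapping cameras could be checked for full 360-degree coverage. The camera names, focal lengths, and fields of view below are entirely hypothetical; the patent doesn't list exact values.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    name: str
    focal_length_mm: float   # hypothetical values, not from the patent
    heading_deg: float       # direction the camera faces; 0 = straight ahead
    fov_deg: float           # horizontal field of view

# A hypothetical rig with overlapping coverage (names and numbers invented)
RIG = [
    Camera("main_forward",   50.0,    0.0,  50.0),
    Camera("wide_forward",   19.0,    0.0, 120.0),
    Camera("left_pillar",    35.0,  -90.0,  90.0),
    Camera("right_pillar",   35.0,   90.0,  90.0),
    Camera("left_repeater",  35.0, -135.0,  80.0),
    Camera("right_repeater", 35.0,  135.0,  80.0),
    Camera("rear",           35.0,  180.0, 120.0),
]

def covered_degrees(rig):
    """Count how many 1-degree slices around the car at least one camera sees."""
    covered = 0
    for deg in range(-180, 180):
        for cam in rig:
            # wrap the angular difference into [-180, 180)
            diff = (deg - cam.heading_deg + 180.0) % 360.0 - 180.0
            if abs(diff) <= cam.fov_deg / 2.0:
                covered += 1
                break
    return covered

print(f"{covered_degrees(RIG)} of 360 degrees covered")
```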
Branching
The system is divided into two branches - one for Vulnerable Road Users, or VRUs, and the other for everything else. That's a pretty simple divide - VRUs are defined as pedestrians, cyclists, baby carriages, skateboarders, animals - essentially anything that can get hurt. The non-VRU branch covers everything else: cars, emergency vehicles, traffic cones, debris, and so on.
Splitting detection into two branches enables FSD to look for, analyze, and prioritize each category differently. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
The many data streams and how they're processed. (Image: Not a Tesla App)
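As a rough illustration of the split, here's a minimal Python sketch. The class names and the priority rule (VRUs first, nearest first) are assumptions for illustration, not details taken from the patent.

```python
# Hypothetical class names; the patent doesn't enumerate an exact list.
VRU_CLASSES = {"pedestrian", "cyclist", "stroller", "skateboarder", "animal"}

def route_detection(detection: dict) -> str:
    """Send a detection to the VRU branch or the non-VRU branch."""
    return "vru_branch" if detection["class"] in VRU_CLASSES else "non_vru_branch"

def prioritize(detections: list) -> list:
    """VRUs first, then everything else; nearest objects first within each group."""
    return sorted(
        detections,
        key=lambda d: (d["class"] not in VRU_CLASSES, d["distance_m"]),
    )

detections = [
    {"class": "car", "distance_m": 12.0},
    {"class": "pedestrian", "distance_m": 30.0},
    {"class": "cone", "distance_m": 5.0},
]
for d in prioritize(detections):
    print(route_detection(d), d)   # the pedestrian prints first despite being farthest
```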
Virtual Camera
Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and picks out only the essential information, which is used for object detection and classification.
The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.
The VRU branch places its virtual camera at human height, which enables a better understanding of VRU behavior - likely because there's a lot more data at human height than from above or any other angle. Meanwhile, the non-VRU branch raises its camera above that height, enabling it to see over and around obstacles for a wider view of traffic.
This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
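Here's a toy illustration of why camera height matters. It projects the same 3D point through a simple pinhole model at two virtual camera heights; the focal length and coordinate convention are assumptions for the sketch, not values from the patent.

```python
def project(point_xyz, cam_height_m, focal_px=500.0):
    """Project a 3D point (x forward, y left, z up, in metres) into a virtual
    pinhole camera placed at the given height, looking straight ahead.
    Returns (u, v) pixel offsets from the image centre, or None if behind."""
    x, y, z = point_xyz
    if x <= 0:                                # behind the virtual camera
        return None
    u = focal_px * y / x                      # horizontal offset
    v = focal_px * (cam_height_m - z) / x     # vertical offset (positive = below centre)
    return round(u, 1), round(v, 1)

head = (10.0, 2.0, 1.7)  # a pedestrian's head: 10 m ahead, 2 m left, 1.7 m up

# At human height, the head sits on the horizon line (v = 0); from the elevated
# non-VRU camera, the same point appears well below centre, looking over the scene.
print("VRU-height camera (1.7 m):", project(head, cam_height_m=1.7))   # (100.0, 0.0)
print("elevated camera (4.0 m):  ", project(head, cam_height_m=4.0))   # (100.0, 115.0)
```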
3D Mapping
Now, all this data has to be combined. The two virtual camera views are synced - and everything they capture is fed back into the system to maintain an accurate 3D map of what's happening around the vehicle.
And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.
This system is best understood through the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of the information it's tracking. Think of each object as having a list of properties that isn't displayed on the screen. For example, a pedestrian may have properties, accessible to the system, that state how far away it is, which direction it's moving, and how fast it's going.
Other moving objects, such as vehicles, may have additional properties, such as their width, height, speed, direction, planned path, and more. Even static, non-VRU elements carry properties: the road, for example, has its width, speed limit, and more determined from AI and map data.
The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
The Virtual Mapping of the VRU branch. (Image: Not a Tesla App)
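As a sketch of what such property lists might look like in code, here are some illustrative Python data structures. Every field name and value is hypothetical; the patent doesn't specify a schema.

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """Properties the system might keep per object; names are illustrative."""
    track_id: int
    kind: str                # "pedestrian", "car", "cone", ...
    is_vru: bool
    distance_m: float
    heading_deg: float       # direction of travel
    speed_mps: float
    width_m: float = 0.0
    height_m: float = 0.0
    planned_path: list = field(default_factory=list)  # predicted waypoints

@dataclass
class EgoVehicle:
    """The car's own properties, combined with everything around it."""
    speed_mps: float
    width_m: float
    length_m: float
    planned_path: list = field(default_factory=list)

scene = [
    TrackedObject(1, "pedestrian", True, 12.0, 90.0, 1.4),
    TrackedObject(2, "car", False, 30.0, 0.0, 14.0, width_m=1.9, height_m=1.5),
]
ego = EgoVehicle(speed_mps=13.0, width_m=1.9, length_m=4.8)
```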
Temporal Indexing
Tesla calls this feature Temporal Indexing. In layman's terms, this is how the vision system analyzes images over time and keeps track of them. Objects aren't captured as a single temporal snapshot but as a series of them, which allows FSD to understand how they're moving. This enables object path prediction and also lets FSD estimate where vehicles or objects might be, even when it doesn't have a direct view of them.
This temporal indexing is done through “Video Modules”, the actual “brains” that analyze the sequences of images, tracking objects over time and estimating their velocities and future paths.
Once again, the FSD visualization in heavy traffic is an excellent example: it keeps track of many vehicles in the lanes around you, even those not in your direct line of sight.
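Here's a toy Python version of the idea: keep a short history of observations per object, estimate its velocity from that window, and extrapolate when it's hidden. This is a stand-in for the patent's video modules, which work on image sequences rather than the clean position fixes assumed here.

```python
from collections import deque

class Track:
    """A short positional history for one object, with motion estimates."""

    def __init__(self, history_len=10):
        self.history = deque(maxlen=history_len)  # (t_seconds, x_m, y_m)

    def update(self, t, x, y):
        self.history.append((t, x, y))

    def velocity(self):
        """Finite-difference velocity over the stored window."""
        if len(self.history) < 2:
            return (0.0, 0.0)
        (t0, x0, y0), (t1, x1, y1) = self.history[0], self.history[-1]
        dt = t1 - t0
        return ((x1 - x0) / dt, (y1 - y0) / dt)

    def predict(self, seconds_ahead):
        """Extrapolate the last known position, e.g. for an occluded vehicle."""
        t, x, y = self.history[-1]
        vx, vy = self.velocity()
        return (x + vx * seconds_ahead, y + vy * seconds_ahead)

car = Track()
for i in range(5):                    # a car moving forward at ~10 m/s
    car.update(t=i * 0.1, x=20.0 + i, y=3.5)
print(car.velocity())                 # (10.0, 0.0)
print(car.predict(seconds_ahead=1))   # where it should be in 1 s, even if hidden
```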
End-to-End
Finally, the patent also mentions that the entire system, from front to back, can be - and is - trained together. This training approach, which now includes end-to-end AI, optimizes overall system performance by letting each component learn how to interact with the others.
How everything comes together. (Image: Not a Tesla App)
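To show what "trained together" means mechanically, here's a minimal PyTorch sketch, assuming a shared backbone feeding separate VRU and non-VRU heads. The shapes, class counts, and losses are illustrative; this is not Tesla's actual network.

```python
import torch
import torch.nn as nn

# Toy two-branch network: one shared backbone, a VRU head, and a non-VRU head.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(8), nn.Flatten())
vru_head = nn.Linear(16 * 8 * 8, 5)      # e.g. 5 hypothetical VRU classes
non_vru_head = nn.Linear(16 * 8 * 8, 7)  # e.g. 7 hypothetical non-VRU classes

params = (list(backbone.parameters()) + list(vru_head.parameters())
          + list(non_vru_head.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 64, 64)       # a dummy batch of camera frames
vru_labels = torch.randint(0, 5, (4,))
non_vru_labels = torch.randint(0, 7, (4,))

# A single combined loss means gradients flow through both heads AND the
# shared backbone at once - the components learn to work with each other.
features = backbone(images)
loss = (loss_fn(vru_head(features), vru_labels)
        + loss_fn(non_vru_head(features), non_vru_labels))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```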
Summary
Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and their properties to predict their movements and determine a path around them. This is a lot like how humans operate, except FSD can track far more objects at once and determine their properties, like speed and size, much more accurately. On top of that, it can do all of this faster than a human and in all directions at once.
FSD and its vision-based camera system essentially create a live 3D map of the road that is continuously updated and used to make decisions.
In the latest episode of Jay Leno’s Garage, Tesla’s VP of Vehicle Engineering, Lars Moravy, confirmed that the new Model Y will feature adaptive headlights.
While discussing the vehicle's updated headlights, which now sit a few inches lower than before, Moravy stated that Tesla will add adaptive headlight functionality in the U.S. in a couple of months.
While Tesla has already introduced adaptive headlights in Europe and the Indo-Pacific, the feature has yet to make its way to North America.
Adaptive headlights were originally delayed in the U.S. due to regulatory issues, but manufacturers have been able to implement them since mid-2024. Meanwhile, competitors like Rivian and Mercedes-Benz have already rolled out their own full matrix headlight systems, matching what's available in other regions.
Update: This article has been updated to clarify that adaptive headlights will indeed be launched in the U.S., shortly after the vehicle launches in March.
Currently, Tesla in North America supports adaptive high beams and automatic headlight adjustment for curves, but full matrix functionality has yet to be rolled out. Meanwhile, matrix headlights are already available in Europe, where they selectively dim individual beam pixels to reduce glare for oncoming traffic and adapt to curves in the road.
It was surprising that matrix functionality wasn’t included in the comprehensive 2024 Tesla Holiday Update. This feature would likely improve safety ratings, so we can only assume Tesla is diligently working to secure regulatory approval.
Adaptive Headlights on Other Models
Lars didn't confirm whether the refreshed Model Y comes with the same headlights as the new Model 3 and the Cybertruck, instead simply calling them “matrix-style” headlights.
The headlights on the new Model Y appear very similar to those available in the 2024+ Model 3, possibly meaning these other models will also receive adaptive headlight capabilities in the next couple of months.
For vehicles with older-style matrix headlights, it's unlikely that adaptive beam support will launch at the same time, but it will hopefully become available soon afterward.
For the first time since launching Tesla Insurance in 2019, Tesla will begin underwriting its own policies, starting in California.
Tesla Insurance originally debuted in California and has since expanded to several U.S. states. Until now, policies were underwritten by State National, a subsidiary of the Markel Insurance Group. However, Tesla is now transitioning to fully in-house underwriting, beginning with its home state.
As part of this shift, California Tesla Insurance customers who receive an in-app offer to switch will be eligible for a one-time 3% discount on their next term’s premium—covered entirely by Tesla Insurance.
What is Underwriting?
Underwriting is the process an insurance company uses to assess risk and determine whether to offer coverage, at what price, and under what terms.
Insurers evaluate factors such as driving history, credit score, age, vehicle type, and location. In Tesla’s case, vehicle driving data (not available in California) also plays a key role in risk assessment. These factors help classify drivers into risk categories, which influence their base premium.
From there, coverage limits, deductibles, and policy inclusions or exclusions can further adjust the final premium up or down.
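As a worked toy example of how those pieces combine into a price, here's a sketch with made-up numbers. None of these risk classes, rates, or factors reflect Tesla Insurance's actual pricing.

```python
# All rates and multipliers below are invented for illustration.
BASE_PREMIUM = {"low_risk": 90.0, "medium_risk": 140.0, "high_risk": 220.0}

DEDUCTIBLE_FACTOR = {500: 1.10, 1000: 1.00, 2000: 0.90}  # higher deductible, lower premium
COVERAGE_FACTOR = {"liability_only": 0.70, "standard": 1.00, "premium": 1.25}

def monthly_premium(risk_class, deductible, coverage):
    """Base premium from the driver's risk class, adjusted by policy choices."""
    return round(BASE_PREMIUM[risk_class]
                 * DEDUCTIBLE_FACTOR[deductible]
                 * COVERAGE_FACTOR[coverage], 2)

# A hypothetical medium-risk driver with a $1,000 deductible and standard coverage:
print(monthly_premium("medium_risk", 1000, "standard"))        # 140.0
# The same driver choosing a $2,000 deductible and liability-only coverage:
print(monthly_premium("medium_risk", 2000, "liability_only"))  # 88.2
```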
Robotaxi and Other Benefits
At first glance, underwriting insurance might seem like a complex and costly process for Tesla. However, there are several compelling reasons why this move makes sense.
Insurance Income: Insurance is a highly profitable industry. Companies set rates based on risk, offering lower premiums to safer drivers and higher rates to riskier ones. This not only maximizes profitability but also incentivizes safer driving behavior, reducing overall claims.
Data Advantage: Tesla collects vast amounts of driving data through its Safety Score system. While California doesn’t allow Safety Score to impact premiums, Tesla can still use this data in the underwriting process to refine risk assessments and pricing for its vehicles.
Control Over Repair Costs: By underwriting its own policies, Tesla gains direct control over repairs and total loss decisions. This allows them to dictate when, where, and how repairs are done, optimizing costs for parts, labor, and service while ensuring vehicles are fixed according to Tesla’s standards.
FSD-Driven Discounts: Tesla has already begun offering insurance discounts for drivers using Full Self-Driving (FSD). By underwriting its own policies, Tesla could expand these incentives, potentially offering greater discounts to frequent FSD users in the future.
Preparing for Robotaxi: Perhaps the biggest long-term reason for this shift is the June launch of the Robotaxi fleet. How will Tesla insure these vehicles? The answer is simple—by underwriting its own policies and assuming liability.
Tesla's decision to underwrite its own insurance isn't just about cutting out middlemen—it's a step toward lowering costs, increasing profitability, and preparing for the future of autonomous driving, a risk many insurance companies may be unwilling to take.
Further Expansion
This could be a strong sign that Tesla is preparing to expand its insurance offerings now that it has taken on the underwriting process itself. In July 2024, Tesla hired a former GEICO insurance executive to lead the expansion of Tesla Insurance and help reduce costs—a move that now appears to be paying off.
Rather than a traditional expansion, Tesla has instead made a bold move by bringing underwriting in-house, something few expected. However, it aligns with Tesla’s strategy of vertically integrating and controlling key aspects of its business, whether in manufacturing, software, or now, insurance.
If this pilot program proves successful, it could pave the way for Tesla Insurance to launch in more states—and potentially even other countries. With 2025 shaping up to be a pivotal year, we may see Tesla accelerate its insurance expansion sooner than expected.