Inside Tesla’s FSD: Patent Explains How FSD Works

By Karan Singh
Not a Tesla App

Thanks to a Tesla patent published last year, we have a great look into how FSD operates and the various systems it uses. SETI Park, who examines and writes about patents, also highlighted this one on X.

This patent breaks down the core technology used in Tesla’s FSD and gives us a great understanding of how FSD processes and analyzes data.

To make this easily understandable, we’ll divide it up into sections and break down how each section impacts FSD.

Vision-Based

First, this patent describes a vision-only system—just like Tesla’s goal—to enable vehicles to see, understand, and interact with the world around them. The system describes multiple cameras, some with overlapping coverage, that capture a 360-degree view around the vehicle, mimicking and improving on human vision.

What’s most interesting is that the system rapidly adapts to the various focal lengths and perspectives of the different cameras around the vehicle. It then combines all of this to build a cohesive picture—but we’ll get to that part shortly.

Branching

The system is divided into two parts - one for Vulnerable Road Users, or VRUs, and the other for everything else that doesn’t fall into that category. That’s a pretty simple divide - VRUs are defined as pedestrians, cyclists, baby carriages, skateboarders, animals, essentially anything that can get hurt. The non-VRU branch focuses on everything else, so cars, emergency vehicles, traffic cones, debris, etc. 

Splitting it into two branches enables FSD to look for, analyze, and then prioritize certain things. Essentially, VRUs are prioritized over other objects throughout the Virtual Camera system.
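As a rough illustration of this split, here is a minimal sketch in Python. The class list, field names, and nearest-first ordering are assumptions for illustration, not Tesla's actual implementation:

```python
from dataclasses import dataclass

# Assumed class list; the patent's actual taxonomy may differ.
VRU_CLASSES = {"pedestrian", "cyclist", "skateboarder", "stroller", "animal"}

@dataclass
class Detection:
    label: str
    distance_m: float

def route(detections):
    """Split detections into a VRU branch and a non-VRU branch,
    sorting VRUs nearest-first so they get attention first."""
    vru = sorted((d for d in detections if d.label in VRU_CLASSES),
                 key=lambda d: d.distance_m)
    other = [d for d in detections if d.label not in VRU_CLASSES]
    return vru, other

vru, other = route([
    Detection("car", 12.0),
    Detection("pedestrian", 8.0),
    Detection("traffic_cone", 3.0),
    Detection("cyclist", 4.0),
])
print([d.label for d in vru])    # ['cyclist', 'pedestrian']
print([d.label for d in other])  # ['car', 'traffic_cone']
```

The key idea is simply that the two streams are separated early, so everything downstream can treat vulnerable road users differently.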

The many data streams and how they're processed.
Not a Tesla App

Virtual Camera

Tesla processes all of that raw imagery, feeds it into the VRU and non-VRU branches, and picks out only the essential information, which is used for object detection and classification.

The system then draws these objects on a 3D plane and creates “virtual cameras” at varying heights. Think of a virtual camera as a real camera you’d use to shoot a movie. It allows you to see the scene from a certain perspective.

The VRU branch uses its virtual camera at human height, which enables a better understanding of VRU behavior. This is probably because there’s far more data at human height than from above or any other angle. Meanwhile, the non-VRU branch raises its virtual camera above that height, enabling it to see over and around obstacles, thereby allowing for a wider view of traffic.

This effectively provides two forms of input for FSD to analyze—one at the pedestrian level and one from a wider view of the road around it.
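The advantage of the raised viewpoint is simple geometry. Here is a hedged sketch; all heights and distances are made-up illustrative numbers, not values from the patent:

```python
def can_see_over(cam_h, obstacle_dist, obstacle_h, target_dist, target_h=0.0):
    """Can a camera at height cam_h see a target point at target_dist,
    past an obstacle of height obstacle_h at obstacle_dist?
    Pure 2D line-of-sight geometry."""
    # Height of the camera-to-target sight line where it passes the obstacle
    sight_h = cam_h + (target_h - cam_h) * (obstacle_dist / target_dist)
    return sight_h >= obstacle_h

# A roughly eye-level 1.5 m view is blocked by a 1.6 m tall car 10 m ahead
# when looking at a point on the road 30 m away...
print(can_see_over(1.5, 10, 1.6, 30))  # False
# ...while a virtual camera raised to 3 m clears it.
print(can_see_over(3.0, 10, 1.6, 30))  # True
```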

3D Mapping

Now, all this data has to be combined. These two virtual cameras are synced - and all their information and understanding are fed back into the system to keep an accurate 3D map of what’s happening around the vehicle. 

And it's not just the cameras. The Virtual Camera system and 3D mapping work together with the car’s other sensors to incorporate movement data—speed and acceleration—into the analysis and production of the 3D map.

This system is best understood by the FSD visualization displayed on the screen. It picks up and tracks many moving cars and pedestrians at once, but what we see is only a fraction of all the information it’s tracking. Think of each object as having a list of properties that isn’t displayed on the screen. For example, a pedestrian may have properties that can be accessed by the system that state how far away it is, which direction it’s moving, and how fast it’s going.

Other moving objects, such as vehicles, may have additional properties, such as their width, height, speed, direction, planned path, and more. Even non-VRU objects will contain properties, such as the road, which would have its width, speed limit, and more determined based on AI and map data.

The vehicle itself has its own set of properties, such as speed, width, length, planned path, etc. When you combine everything, you end up with a great understanding of the surrounding environment and how best to navigate it.
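That kind of per-object property record can be sketched as plain data. Every field name below is an illustrative assumption, not Tesla's actual schema, and the time estimate is a toy derived quantity:

```python
# Hypothetical property records; field names are assumptions for illustration.
scene = {
    "ego":   {"kind": "vehicle", "speed_mps": 13.4, "width_m": 1.9, "length_m": 4.7},
    "ped_1": {"kind": "pedestrian", "distance_m": 8.0, "heading_deg": 90.0, "speed_mps": 1.4},
    "car_7": {"kind": "vehicle", "distance_m": 22.0, "speed_mps": 15.0, "width_m": 1.8},
    "road":  {"kind": "road", "width_m": 7.2, "speed_limit_kph": 50},
}

# Only a fraction of this is ever drawn on screen; the rest stays available
# to the planner. As a toy example, estimate how long each moving object
# would take to cover its current distance at its current speed.
etas = {}
for oid, props in scene.items():
    if "distance_m" in props and props.get("speed_mps"):
        etas[oid] = props["distance_m"] / props["speed_mps"]
        print(f"{oid}: ~{etas[oid]:.1f} s to cover {props['distance_m']} m")
```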

The Virtual Mapping of the VRU branch.
Not a Tesla App

Temporal Indexing

Tesla calls this feature Temporal Indexing. In layman’s terms, this is how the vision system analyzes images over time and then keeps track of them. Things aren’t a single temporal snapshot but a series of them, which allows FSD to understand how objects are moving. This enables object path prediction and also allows FSD to understand where vehicles or objects might be, even if it doesn’t have a direct view of them.

This temporal indexing is done through “Video Modules”, which are the actual “brains” that analyze the sequences of images, tracking them over time and estimating their velocities and future paths.

Once again, the FSD visualization in heavy traffic is an excellent example—it keeps track of many vehicles in the lanes around you, even those not in your direct line of sight.
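A minimal sketch of what tracking positions over time buys you, assuming a simple constant-velocity model (the patent's Video Modules are learned networks, not this hand-rolled math):

```python
def estimate_velocity(track, dt):
    """Finite-difference velocity from the last two observed positions."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    return ((x1 - x0) / dt, (y1 - y0) / dt)

def predict(track, dt, steps):
    """Extrapolate a constant-velocity path. The same idea lets a tracker
    'coast' an object while it's temporarily hidden from direct view."""
    vx, vy = estimate_velocity(track, dt)
    x, y = track[-1]
    return [(x + vx * dt * k, y + vy * dt * k) for k in range(1, steps + 1)]

track = [(0.0, 0.0), (1.0, 0.5)]      # two positions observed 0.1 s apart
print(estimate_velocity(track, 0.1))  # (10.0, 5.0) m/s
print(predict(track, 0.1, 2))         # [(2.0, 1.0), (3.0, 1.5)]
```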

End-to-End

Finally, the patent also mentions that the entire system, from front to back, can be - and is - trained together. This training approach, which now includes end-to-end AI, optimizes overall system performance by letting each individual component learn how to interact with other components in the system.

How everything comes together.
Not a Tesla App

Summary

Essentially, Tesla sees FSD as a brain, and the cameras are its eyes. It has a memory, and that memory enables it to categorize and analyze what it sees. It can keep track of a wide array of objects and properties to predict their movements and determine a path around them. This is a lot like how humans operate, except FSD can track far more objects at once and determine properties like speed and size much more accurately. On top of that, it can do it faster than a human and in all directions at once.

FSD and its vision-based camera system essentially create a 3D live map of the road that is constantly and consistently updated and used to make decisions.


Tesla Eliminates Front Casting on New Model Y; Improves Rear Casting

By Not a Tesla App Staff
Not a Tesla App

Tesla has pioneered the use of single-piece castings for the front and rear sections of its vehicles, thanks to its innovative Gigapress process. Many automakers are now following suit, as this approach allows the crash structure to be integrated directly into the casting.

This makes the castings not only safer but also easier to manufacture in a single step, reducing costs and improving repairability. For example, replacing the entire rear frame of a Cybertruck is estimated to cost under $10,000 USD, with most of the expense coming from labor, according to estimates shared on X after high-speed rear collisions.

These insights come from Sandy Munro’s interview (posted below) with Lars Moravy, Tesla’s VP of Vehicle Engineering, highlighting how these advancements contribute to the improvements in Tesla’s latest vehicles, including the New Model Y.

However, with the new Model Y, Tesla has decided to go a different route and eliminated the front gigacast.

No Front Casting

Not all of Tesla’s factories are equipped to produce both front and rear castings for the Model Y. The front casting was paired with a structural battery pack, which only Giga Texas and Giga Berlin used—and those were quickly phased out due to the underwhelming performance of the first-generation 4680 battery.

Tesla has gone back to building a common body across the globe, increasing part interchangeability and reducing supply chain complexity across the four factories that produce the Model Y. In place of a front casting, Tesla has improved and reduced the number of unique parts up front to help simplify assembly and repair.

There is still potential for Tesla to switch back to using a front and rear casting - especially with their innovative unboxed assembly method. However, that will also require Tesla to begin using a structural battery pack again, which could potentially happen in the future with new battery technology.

Rear Casting Improvements

The rear casting has been completely redesigned, shedding 7 kg (15.4 lbs) and cutting machining time in half. Originally weighing around 67 kg (147 lbs), the new casting is now approximately 60 kg (132 lbs).

This roughly 10% weight reduction improves both vehicle dynamics and range while also increasing the rear structure’s stiffness, reducing body flex during maneuvers.

Tesla leveraged its in-house fluid dynamics software to optimize the design, resulting in castings that resemble organic structures in some areas and flowing river patterns in others. Additionally, manufacturing efficiency has dramatically improved—the casting process, which originally took 180 seconds per part, has been reduced to just 75 seconds, a nearly 60% time reduction per unit.
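The quoted reductions can be checked with quick arithmetic:

```python
# Sanity-checking the quoted casting numbers from the article.
old_kg, new_kg = 67, 60
weight_cut = (old_kg - new_kg) / old_kg
print(f"{old_kg - new_kg} kg lighter, about {weight_cut:.0%}")  # 7 kg lighter, about 10%

old_s, new_s = 180, 75
cycle_cut = (old_s - new_s) / old_s
print(f"cycle time cut by about {cycle_cut:.0%}")  # cycle time cut by about 58%
```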

New Casting Methods

Tesla’s new casting method incorporates conformal cooling, which cools the die directly within the gigapress. Tesla has been refining the die-casting machines and collaborating with manufacturers to improve the gigapress process.

In 2023, Tesla patented a thermal control unit for the casting process. This system uses real-time temperature analysis and precise mixing of metal streams to optimize casting quality. SETI Park, who covers Tesla’s manufacturing patents on X, offers a great series for those interested in learning more.

The new system allows Tesla to control the flow of cooling liquid, precisely directing water to different parts of the die, cooling them at varying rates. This enables faster material flow and quicker cooling, improving both dimensional stability and the speed of removing the part from the press for the next stage.

With these new process improvements, Tesla now rolls out a new Model Y at Giga Berlin, Giga Texas, and Fremont every 43 seconds—an astounding achievement in auto manufacturing. Meanwhile, Giga Shanghai operates two Model Y lines, delivering a completed vehicle every 35 seconds.

Tesla’s Hands-Free Trunk and Frunk – Supported Models, Phones & How to Set Them Up

By Karan Singh
Not a Tesla App

Having the ability to open your trunk hands-free can be incredibly useful when your hands are full, especially in a busy parking lot.

Tesla vehicles now support opening the vehicle’s trunk or frunk completely hands-free — no foot waving required.

What is Hands-Free Frunk and Trunk?

Tesla implemented its hands-free feature by leveraging your phone’s position in relation to the vehicle. When you stand still behind your vehicle, the trunk will automagically open for you.

While this functionality isn’t available on older vehicles, it’s supported on every vehicle Tesla manufactures today, including the new Model Y, the Cybertruck, and other recent models.

With a compatible device and a supported vehicle, you can now open your Tesla’s trunk hands-free.

How It Works

Tesla’s hands-free feature requires ultra-wideband (UWB) support in the vehicle and on your phone. Apple and Samsung have supported ultra-wideband for a number of years, and most flagship Android devices also support the low-energy feature.

Ultra-wideband allows one device to precisely determine another device’s position relative to it. In this case, the vehicle tracks where the driver’s phone is in relation to the vehicle. Since the vehicle can track the phone’s location more precisely, ultra-wideband also improves Tesla’s phone key feature.

Since the vehicle depends on your phone, you’ll need to have your phone on you in order to activate the hands-free feature. Simply stand within 2.5 to 3 feet from the front or rear of your vehicle for the frunk or trunk to open. You’ll then hear a couple of chimes. If you continue to stand still, then your frunk or trunk will open automatically.

The chimes serve as a warning that the trunk will open if you don’t move, which helps reduce accidental openings.
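The trigger described above (stand within range, wait through the chimes, then the lid opens) can be sketched as a simple dwell timer. The distance threshold and dwell time here are assumptions, not Tesla's actual values:

```python
OPEN_RANGE_M = 0.9   # roughly the 2.5-3 ft mentioned above (assumed)
DWELL_S = 2.0        # assumed stand-still time before opening

def should_open(samples, dwell_s=DWELL_S, max_range_m=OPEN_RANGE_M):
    """samples: (timestamp_s, distance_m) UWB readings, in time order.
    Returns True once the phone has stayed in range for dwell_s seconds."""
    in_range_since = None
    for t, dist in samples:
        if dist <= max_range_m:
            if in_range_since is None:
                in_range_since = t           # phone just came into range
            if t - in_range_since >= dwell_s:
                return True                  # stood still long enough: open
        else:
            in_range_since = None            # phone moved away: reset timer
    return False

# Phone approaches the rear of the car, then stands still behind the trunk.
readings = [(0.0, 4.0), (0.5, 2.0), (1.0, 0.8), (2.0, 0.8), (3.2, 0.85)]
print(should_open(readings))  # True
```

Resetting the timer whenever the phone leaves range is what makes walking past the car, rather than stopping behind it, a non-event.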

Hands-Free Trunk in Action

The video below shows how Tesla’s hands-free trunk feature works.

Supported Models

Since Tesla uses ultra-wideband to power the hands-free feature, only vehicles with the needed hardware are supported. The list of supported vehicles includes:

  • 2021 Model S and later

  • 2021 Model X and later

  • 2024 Model 3 (Highland) and later

  • 2026 Model Y (Juniper) and later

  • All Cybertrucks

Supported Phones

Your phone will also need to support UWB. Luckily, most manufacturers have included UWB in their devices for several years.

Apple: All iPhones since the iPhone 11 have included UWB, except for the iPhone SE (2nd and 3rd generation). The newer iPhone 16e also has UWB.

Android: Most Android phones - especially flagship devices - already support and use UWB for other purposes, but it’s not available on all phones. If you have a Google Pixel 6 or higher, Samsung Fold 2 or higher, Samsung S21+, or another recent Android phone, then your phone already supports ultra-wideband.

Which Models Support Hands-Free Frunk

Unfortunately, not every supported model offers both hands-free features. The hands-free frunk is only supported on the Model S, Model X, and the Cybertruck. In addition, the Cybertruck is the only vehicle with a powered frunk, so while the Model S and Model X will unlock the frunk for you, you’ll still need to lift and close it manually. The Cybertruck will open the frunk for you, much like the trunk on other Teslas.

Which Models Support Hands-Free Trunk

Most supported Tesla vehicles can use the hands-free trunk; the exception is the Cybertruck, which doesn’t have a powered trunk.

Enable Hands-Free Trunk / Frunk

If you plan to use your vehicle’s hands-free trunk feature, you’ll need to enable it in settings, as it’s off by default. Simply open Controls by tapping the vehicle icon in the bottom left corner, then navigate to the Locks section.

Within the Hands-Free section, you’ll find a few options, depending on your model. You’ll be able to choose whether to enable the hands-free frunk or trunk and whether you’d like to disable the feature at home.

Preventing Accidental Opening - Exclude Home

Although the hands-free feature requires you to stay still in front of or behind your vehicle for a couple of seconds, it can still be triggered accidentally if you’re working around your garage. To prevent accidental opening of the frunk or trunk, Tesla allows you to disable the feature while your vehicle is parked at home.

Tesla determines your home location by the address that’s set in your vehicle. However, it also adds a buffer, meaning that your hands-free trunk feature will also not work in your driveway or at your neighbor’s home. The exclude home feature is located in the same spot as other hands-free trunk features, Controls > Locks > Hands-Free > Exclude Home.

If you have a recent Tesla that’s supported, go ahead and give the feature a try.
