Tesla FSD V12.4.1 Goes Out: Exploring No Nags and More [Video]

By Karan Singh
Tesla will now alert you when vision-monitoring isn't being used
@WifeDirtyTesla

With FSD V12.4.1 finally beginning its rollout to select customers as of last night, you’re probably wondering exactly how nags will – or won’t – work in this much-hyped update.

No Steering Wheel Nags

Tesla’s current implementation of no steering wheel nags on V12.4.1 is pretty simple and straightforward. As long as you’re paying attention and looking at the road, you won’t be required to touch the steering wheel. You’ll see a green dot on the screen, letting you know that the enhanced driver monitoring system (DMS) and Vision-Based Attention Monitoring (VBAM) are active.

On the Model S and X, the green dot is on the instrument cluster screen, immediately next to the blue FSD/AP wheel icon. On the Model 3 and Model Y (and Cybertruck, in the future), the green dot indicator is on the top left of the screen, in between the battery indicator and the blue FSD/AP wheel.

However, there are some restrictions baked into this initial implementation. Your eyes cannot be obscured or occluded from the cabin camera. This means that legacy vehicles are ineligible for the new VBAM, along with anyone who installs a physical camera cover for privacy or other reasons.

For the privacy-conscious folks, Tesla has mentioned that cabin camera imagery will not leave the vehicle itself unless you enable data sharing, which is optional. Cabin camera imagery is also not available to view via the API, so third-party integrations cannot view your cabin camera either.

The green dot on the center display
Whole Mars Catalog

Restrictions

There are some other catches too. The cabin camera is currently unable to see through sunglasses due to the polarization. The car will display “Attention monitoring unavailable, sunglasses use detected” on the screen. This could change in the future as Tesla figures out how to best take advantage of its cabin cameras. However, it can see through regular glasses just fine – so eyeglass wearers, rejoice!

Attention monitoring unavailable, sunglasses use detected

Vehicles that do not have IR lights in the cabin will also not be able to take advantage of VBAM at night, as their cabin cameras cannot see in the dark. Tesla does offer a retrofit to upgrade to IR-capable cameras – if you’re interested, put in a service ticket through the Service menu in the app.

If it cannot find your eyes due to any of these restrictions, the green light will not come on, and the regular wheel nags that you are used to will continue.

Warnings and Suspensions

If VBAM determines that you’re not paying attention, a warning will first appear on the screen, telling you to look at the road. You can dismiss it simply by returning your attention to the road ahead – no need to touch the steering wheel.

However, if you continue to not pay attention and the DMS detects improper usage, you will receive an Autopilot Strikeout, and FSD will disengage. Before a Strikeout occurs, there will be multiple auditory and visual warnings, ensuring you have a few moments to bring your attention back to supervising FSD.

You can receive up to 5 Strikeouts before FSD becomes suspended. One Strikeout is lifted for each 7-day period in which you do not receive another – so if you hit 5 Strikeouts, it could take up to 5 weeks to clear them all. Receiving another Strikeout during that 7-day period resets the clock.
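As a rough sketch, the accrual and forgiveness rules above work out like this (our own illustrative code based on the policy as described – the class and method names are hypothetical, not Tesla’s implementation):

```python
# Illustrative sketch of the Strikeout policy described above (not Tesla code):
# strikeouts accumulate up to 5; one is forgiven per clean 7-day period,
# and any new strikeout restarts the 7-day clock.
from datetime import date, timedelta

MAX_STRIKES = 5
FORGIVE_AFTER = timedelta(days=7)

class StrikeTracker:
    def __init__(self):
        self.strikes = 0
        self.clock_start = None  # start of the current clean period

    def record_strike(self, day: date):
        """A new Strikeout: increment the count and reset the 7-day clock."""
        self.strikes = min(self.strikes + 1, MAX_STRIKES)
        self.clock_start = day

    def update(self, today: date) -> int:
        """Forgive one strike per completed clean 7-day period; return count."""
        while (self.strikes > 0 and self.clock_start is not None
               and today - self.clock_start >= FORGIVE_AFTER):
            self.strikes -= 1
            self.clock_start += FORGIVE_AFTER
        return self.strikes

    @property
    def suspended(self) -> bool:
        return self.strikes >= MAX_STRIKES
```

Under these rules, a driver who earns a Strikeout on day 1 and a second on day 4 would not start shedding them until day 11 – a full clean week after the most recent one.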

Other Changes

Elon Musk has said that V12.4 would focus on ride comfort, reducing hard acceleration and braking. According to Musk, it should also deliver a 5-10x improvement in the interval between user disengagements.

Early Access owners have mentioned that 12.4 tends to be more assertive and less hesitant at intersections, stop signs, and in parking lots. Owners have also noticed improvements to the “lane dancing” issue, where FSD V12.3 would linger between lanes for too long while changing lanes.

Also of note, Vision Autopark is slightly faster – though this is the same speed increase that already rolled out to customers on the Spring Update. For everyone else, expect a 2-3x improvement in how quickly Vision Autopark changes direction and how fast it maneuvers in general. As of the Spring Update, it can also park in even tighter spaces.

Another much-appreciated feature is the ability to temporarily increase the sensitivity of Autowipers. As many have experienced, the Autowiper functionality doesn’t always work well. However, with the Spring Update, you can now temporarily increase the sensitivity of the Autowiper system by tapping once on the wiper stalk (or button on stalkless vehicles).

Missed Features

Sadly, some previously announced features did not make it into this release of FSD V12.4.1 – namely, Banish Autopark and Park Seek. For the time being, users will still have to disengage FSD and then engage Autopark once they find a parking spot.

Banish Autopark, or “Reverse Summon,” was expected to arrive in V12.4 as part of the comfort update, allowing you to choose a parking spot type preference, exit the vehicle, and then have the car park itself.

Additionally, Park Seek – which would allow FSD to automatically find a parking spot in a parking lot and then engage Autopark on its own – was initially a confirmed feature but is not present in this release.

Finally, hand gesture recognition was supposed to arrive in an update “later in May” – but given that FSD V12.4 has missed previous deadlines (no surprise to anyone familiar with the “2-week policy”), there is no confirmation yet that the feature made it into this build. It is very possible the employee in question was referring to V12.5, which is also expected to bring vehicle-to-fleet communication.

Update 2024.15.5

FSD Supervised 12.4.1

Expected Wide Release

Given that it rolled out to employees just yesterday, and then to “OG” FSD Beta owners today, we expect 2024.15.5 – the version that contains V12.4.1 – to continue rolling out to customers next week. Everyone on a version below 2024.15.5 – that is, users on 2024.3.25, 2024.8.9, and 2024.14.11 – should be eligible to receive this update. The very few vehicles already on 2024.20 with the Adaptive Headlights functionality will have to wait a bit longer!

Tesla Debuts Super Manifold V2 in the New Model Y—But Not Every Car Has It Yet

By Not a Tesla App Staff
Tesla Service Manual

The Super Manifold is Tesla’s solution to reducing the complexity of a heat pump system for an EV. Tesla showed off its engineering chops back with the original Model Y in 2019, where it introduced a new 8-way valve (the Octovalve) and a new heat pump alongside the uniquely designed Super Manifold to improve efficiency.

Now, Tesla is launching an improved version with the refreshed Model Y: the Super Manifold V2. We got to hear about it thanks to Sandy Munro’s interview with Tesla’s Lars Moravy (Vice President of Vehicle Engineering) and Franz von Holzhausen (Chief Designer). You can watch the video further below.

What Is The Super Manifold?

The Super Manifold (get it, Superman?) is an all-in-one package that integrates every component of the heat pump system into a single unit, packing all the refrigerant and coolant components around a 2-layer PCB (printed circuit board).

A system like this would normally require 15 to 20 separate components, but Tesla managed to integrate them all into one tidy package. That presented Tesla with a new challenge: how do you integrate a heat pump – capable of both heating and cooling – into a single, efficient assembly?

Several years ago, Tesla designed the Octovalve. It combines inlets and outlets and can variably change between heating or cooling on the fly - without needing to be plumbed in different directions. This is especially important for EVs, which may need to heat the battery with the waste heat generated from the motors or the heat pump while also cooling the cabin - or vice versa.

Original Super Manifold V1.1

Tesla launched the Super Manifold V1.1 back in 2022, and it provided some minor improvements to the waste heat processing of the heat exchange system. It also tightened up the Octovalve, preventing the leakage of oils into the HVAC loop that could cause it to freeze at extremely low temperatures.

Tesla has been using the V1.1 for several years now, and it has really solved the vast majority of issues with the heat pump system that many older Model Ys experienced.

Super Manifold V2 Coming Soon

Now, Tesla is introducing the Super Manifold V2 in the new Model Y. It improves the overall cooling capacity of the original Super Manifold, but unfortunately, not every new Model Y will come equipped with it. Tesla will introduce it gradually across the lineup, at different rates at different factories, depending on part availability.

Eventually, the Super Manifold V2 will also make its way to other vehicles, potentially including the upcoming refresh for the Model S and Model X, but initially, it’ll be exclusive to the new Model Y. Tesla expects to have the new manifold in every new Model Y later this year.

If you’re interested in checking out the whole video, we’ve got it for you below.

Breaking Down Tesla’s Autopilot vs. Wall “Wile E. Coyote” Video

By Not a Tesla App Staff
Mark Rober

Mark Rober, of glitter bomb package fame, recently released a video titled Can You Fool A Self-Driving Car? (posted below). Of course, the vehicle featured in the video was none other than a Tesla - but there’s a lot wrong with this video that we’d like to discuss.

We did some digging and let the last couple of days play out before making our case. Mark Rober’s Wile E. Coyote video is fatally flawed.

The Premise

Mark Rober set out to test whether it was possible to fool a self-driving vehicle using various scenarios. These included a wall painted to look like the road ahead, low-lying fog, mannequins, hurricane-force rain, and bright beams of light.

All of these individual “tests” had their own issues – not only because Mark didn’t adhere to any consistent testing methodology, but because he was looking for a particular result – and edited his tests until he got it.

Interestingly, many folks on X were quick to spot that Mark had previously been sponsored by Google to use a Pixel phone – yet he was recording inside the vehicle with an iPhone, which he edited to look like a Pixel for some reason. This, alongside other sloppy edits and cuts, led many, including us, to believe that Mark’s testing was edited and flawed.

Flaw 1: Autopilot, Not FSD

Let’s take a look at the first flaw: Mark tested Autopilot – not FSD. Autopilot is a driving aid for lane centering and speed control, and it is not the least bit autonomous. It cannot take evasive maneuvers outside the lane it is in, but it can use the full suite of Tesla’s active safety features, including Automatic Emergency Braking, Forward Collision Warnings, Blind Spot Collision Warnings, and Lane Departure Avoidance.

FSD, on the other hand, is both permitted and able to depart its lane to avoid a collision. That means that even if Autopilot tried to stop and couldn’t, it would still hit whatever obstacle was in front of it – unlike FSD, which could steer around it.

As we continue with the FSD argument, remember that Autopilot is running on a roughly 5-year-old software stack that hasn’t seen meaningful updates in quite some time. Tesla will likely replace Autopilot with a trimmed-down version of FSD eventually, but that hasn’t happened yet.

Mark later admitted that he used Autopilot rather than FSD because “You cannot engage FSD without putting in a destination,” which is also incorrect. It is possible to engage FSD without a destination, but FSD chooses its own route. Where it goes isn’t within your control until you select a destination, but it tends to navigate through roads in a generally forward direction.

The whole situation, from not having FSD on the vehicle to not knowing you can activate FSD without a destination, suggests Mark is rather unfamiliar with FSD and likely has limited exposure to the feature.

Let’s keep in mind that FSD costs $99 for a single month, so there’s no excuse for him not using it in this video.

Flaw 2: Cancelling AP and Pushing Pedals

Many people on X also followed up with reports that Mark was pushing the pedals or pulling on the steering wheel. When you tap on the brake pedal or pull or jerk the steering wheel too much, Autopilot will disengage. For some reason, during each of his “tests,” Mark closely held the steering wheel of the vehicle.

This comes off as rather odd – at the extremely short distances at which he was enabling AP, there wouldn’t have been enough time for a wheel nag or takeover warning to be required. In addition, he can be visibly seen pulling on the steering wheel before “impact” in multiple tests.

Over on X, techAU breaks it down excellently on a per-test basis. Mark did not engage AP in several tests, and he potentially used the accelerator pedal during the first test - which means that Automatic Emergency Braking is overridden. In another test, Mark admitted to using the pedals.

Flaw 3: Luminar Sponsored

This video was potentially sponsored by a LiDAR manufacturer, Luminar – although Mark says that isn’t the case. Interestingly, Luminar makes LiDAR rigs for Tesla, which uses them to validate ground-truth accuracy for FSD. Just as interesting, Luminar’s earnings call was coming up at the time the video was posted.

Luminar had linked the video at the top of its homepage but has since taken it down. While Mark did not admit to being sponsored by Luminar, there appear to be other conflicts of interest: Mark’s charity foundation has received donations from Luminar’s CEO.

Given how favorable the results were for Luminar, the video appears to have been well-designed and well-timed to ride the current wave of negativity against Tesla while also driving up Luminar’s stock.

Flaw 4: Vision-based Depth Estimation

The next flaw to address is the fact that both humans and machines can judge depth using vision alone. On X, user Abdou ran the “invisible wall” through a monocular depth estimation model (DepthAnythingV2) – one that uses a single image from a single angle. Even this relatively simple model can estimate the distance and depth of objects in an image – and it easily differentiated the fake wall from its surroundings.

Tesla’s FSD uses a far more advanced multi-camera, multi-frame system that stitches images together into a 3D model of the surrounding environment, then analyzes the result for prediction and decision-making. Tesla’s more refined and complex model would detect such an obstacle far more easily – and these innovations are far more recent than the 5-year-old Autopilot stack.

While judging distance from a single image is harder, once you have multiple images, as in a video feed, you can more easily distinguish objects and estimate distances by tracking how each image feature scales as the object approaches. Essentially, if every part of the object is growing at the same rate, it’s a flat surface – like a wall.
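As a toy illustration of that idea (our own sketch with made-up numbers, not Tesla’s algorithm), a simple pinhole-camera model shows why a flat wall betrays itself: every tracked feature on it scales by exactly the same ratio between frames, while features in a real 3D scene scale at different rates depending on their depth.

```python
# Toy pinhole-camera sketch (our illustration, not Tesla's implementation):
# a point at lateral offset X and depth Z projects to x = f * X / Z.
# Compare each feature's projected position at two camera distances; if
# every feature scales by the same ratio, the surface is flat.

def project(f, X, Z):
    """Pinhole projection of lateral offset X at depth Z."""
    return f * X / Z

def scale_ratios(features, d1, d2, f=1.0):
    """For each (X, depth_offset) feature, the ratio by which its
    projection grows between camera distances d1 and d2 (d2 < d1)."""
    return [project(f, X, d2 + dz) / project(f, X, d1 + dz)
            for X, dz in features]

# A painted wall: every feature lies at the same depth (offset 0).
wall = [(0.5, 0.0), (1.0, 0.0), (2.0, 0.0)]
# A real scene: features sit at varying depths behind the nearest one.
scene = [(0.5, 0.0), (1.0, 5.0), (2.0, 12.0)]

wall_r = scale_ratios(wall, d1=30.0, d2=15.0)   # uniform ratios -> flat
scene_r = scale_ratios(scene, d1=30.0, d2=15.0) # spread ratios -> depth
```

In this toy setup, halving the distance to the wall doubles every feature’s projection identically, while the scene’s features grow at visibly different rates – the kind of cue a multi-frame system can pick up on.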

Case in Point: Chinese FSD Testers

To make the case stronger, some Chinese FSD testers took to the streets and put up a semi-transparent sheet – which the vehicle refused to drive through or even near. It immediately attempted to maneuver away each time the test was run, and refused to advance while a pedestrian stood in the road.

Thanks to Douyin and Aaron Li for putting this together, as it makes an excellent basic example of how FSD would handle such a situation in real life.

Flaw 5: The Follow-Up Video and Interview

Following the community backlash, Mark released a video on X, hoping to address the community’s concerns. However, this also backfired. It turned out Mark’s second video was an entirely different take from the one in the original video – at a different speed, angle, and point of initiation.

Mark then followed up with an interview with Philip DeFranco (below), where he said that there were multiple takes and that he used Autopilot because he didn’t know FSD could be engaged without a destination. He also said that Luminar supposedly did not pay him for the video – despite the company’s big showing as the “leader in LiDAR technology” throughout it.

Putting It All Together

Overall, Mark’s video was rather duplicitous: he recorded multiple takes to get the result he needed, prevented Tesla’s software from functioning properly by intervening, and used an outdated feature set that isn’t FSD – despite what his video’s title claims.

Upcoming Videos

Several other video creators are already working to replicate what Mark “tried” to test in this video.

To get a complete picture, we need to see unedited takes, even if they’re included at the end of the video. The full vehicle specifications should also be disclosed. Additionally, the test should be conducted using Tesla’s latest hardware and software—specifically, an HW4 vehicle running FSD v13.2.8.

In Mark’s video, Autopilot was engaged just seconds before impact. However, for a proper evaluation, FSD should be activated much earlier, allowing it time to react and, if capable, stop before hitting the wall.

A wave of new videos is likely on the way—stay tuned, and we’ll be sure to cover the best ones.
