Sandy Munro gets a chance to ride in a Ford Mach-E with a representative of Ford, and he experiences Ford's Blue Cruise. The Ford guy just wants to talk about Blue Cruise, but Sandy can't seem to restrain himself from making comparisons with Tesla. This is probably not what the Ford guy wanted him to do.
First of all, it was clear that Ford's Blue Cruise is available only on limited access highways. And Blue Cruise isn't connected to the GPS navigation system at all. It won't steer you off at your appropriate off ramp. So the appropriate comparison is Blue Cruise vs Autopilot. Autopilot works on any road where there are painted lane markers, Blue Cruise works only on limited access highways. And Blue Cruise won't change lanes for you. You need to take control and do that manually. Ford is thinking about adding that feature in the future.
Ford advertises that Blue Cruise is a hands-free system. It uses a cabin-facing camera to make sure that the driver is paying attention. It will allow the driver to look away from the road briefly, but it will send the driver a reminder to pay attention after a few seconds of inattentive driving. Blue Cruise will stop working if the driver continues to be inattentive. The video doesn't make clear what, exactly, happens when Blue Cruise stops working due to the driver's sudden inability to pay attention.
There's another issue with hands-free driving. It's a reaction time issue. A long time ago in a previous millennium, when I took Driver Education, we saw a movie that I still remember. In the movie, a car was rigged so the front seat passenger, the instructor, could secretly push a button that would make a noise and shoot a chalk mark onto the road. The driver was told to immediately slam on the brake. At that point, the car shot another chalk mark onto the road. Then, a bit farther along, the car came to a stop.
It takes the brain about three quarters of a second to realize that an emergency exists and another three quarters of a second to move the foot off the accelerator and slam it down on the brake. At 65 miles an hour, the car covers about 95 feet every second, so it will have traveled roughly 143 feet down the road before a human being can initiate the stopping process. A few phantom braking events are a small price to pay for a quicker response to an emergency stop situation. So what does that have to do with hands-free Blue Cruise?
The brain still needs to see a situation that demands a sudden avoidance maneuver. Then it must decide which way to turn the steering wheel and how much to turn it. That delay will increase by the same one and a half seconds of effective inaction if the brain has to first figure out how to get the hands onto the steering wheel and then get them there before it can initiate a calculation about which way and how much to steer for the avoidance maneuver. That means you're roughly another 143 feet further down the road before you start the avoidance maneuver if your hands are off the steering wheel when an emergency becomes evident.
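The arithmetic here is simple enough to check yourself. Here is a quick back-of-the-envelope calculation (the speed and reaction times are the ones used above; this is illustrative only, not traffic-engineering data):

```python
# Distance a car travels during the driver's reaction time.
FT_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def reaction_distance_ft(speed_mph: float, reaction_time_s: float) -> float:
    """Feet traveled before the driver can begin braking or steering."""
    feet_per_second = speed_mph * FT_PER_MILE / SECONDS_PER_HOUR
    return feet_per_second * reaction_time_s

# At 65 mph: ~0.75 s to recognize the emergency + ~0.75 s to act.
print(round(reaction_distance_ft(65, 1.5)))  # 143 (feet)
```

Hands off the wheel adds another delay of about the same length on top of that, which is the core of the argument against hands-free Level 2 systems.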
So I have to say that for level 2 autonomy, where the driver may need to take over quickly, it's better if the hands are already on the wheel.
There was another odd quirk about Blue Cruise. Sandy Munro was chatting with the Ford guy when he suddenly noticed a visual, but not audible, warning that he had to take control of steering. This was on a limited access highway going seventy miles an hour. It's a good thing he noticed it because Blue Cruise poops out when it sees a "sharp curve."
This particular curve was on a limited access highway. It wasn't sharp enough to require a reduction in the speed limit. I foresee that this will become an issue when drivers don't notice that Blue Cruise has quit working and the car drifts to the outside of the curve and out of its lane.
Autopilot stops working around sharp curves, too. But Autopilot gives an audible signal, your hands are already on the wheel, and Autopilot's definition of a sharp curve is one where you have to slow down to 20 mph, not one that you can negotiate at highway speeds.
My advice? Don't ride in a car with Ford Blue Cruise.
Ford's Blue Cruise technology is way behind what Tesla offers today, even without considering the FSD Beta, which should become available to the public later this year. As Tesla advances further in vehicle autonomy, they'll be solving problems Ford has yet to come across. It's great to see that Tesla has started the electric revolution in cars, but Tesla's real competition won't come from existing automakers.
Tesla’s latest vehicles, including the Cybertruck, Cybercab, and the refreshed Model Y, now feature a front bumper camera. However, as of FSD v13.2.8, the Cybertruck’s bumper camera remains unused for FSD and primarily serves as a helpful tool for parking and off-road driving.
With bumper cameras becoming more common across Tesla’s lineup, the question remains: will they eventually become a necessary component for Unsupervised FSD, or are they simply an added convenience for now?
Actually Smart Summon Needs Bumper Vision
Every Tesla model that has the ability to use Actually Smart Summon occasionally rolls slightly forward or backward before exiting a parking stall. This movement helps the vehicle get a better view of what’s directly beneath the front lip of the hood before proceeding.
However, this behavior has led some vehicles to make contact with walls or posts, prompting the NHTSA to launch an investigation into Actually Smart Summon. The simple solution is to mount a lower front camera that allows the vehicle to see what’s directly in front of it when it wakes up.
The Cybertruck currently lacks access to Actually Smart Summon—or any Summon functionality, for that matter. Tesla hasn’t announced when the vehicle will receive one of its most advanced autonomy features. Given the vehicle’s height and its larger front blind spot, the delay likely stems from the need to integrate the bumper camera for improved visibility.
At the end of the day, Actually Smart Summon is essential for Unsupervised FSD. A fully autonomous vehicle must be capable of navigating crowded parking lots, reaching pickup points, and parking itself without human intervention.
Training Data and Cameras
We already know that adding a new vehicle to FSD can take months—but what about integrating training data from an entirely new camera and perspective? That process could take even longer, especially with a vehicle like the Cybertruck, which is larger and wider than Tesla’s other models.
We also know that the Cybercab—set to launch in Austin in just a few months—features a bumper camera to improve visibility below the front lip. Tesla doesn’t add new components without purpose; every part, from the camera and wiring to the housing and engineering, represents a calculated investment.
Given this, it’s reasonable to expect that Tesla is already using bumper camera data from the Cybertruck—and soon, the refreshed Model Y—to train an updated FSD model. Whether this model is focused on parking lot navigation and Actually Smart Summon or expands to broader FSD improvements on city streets and highways remains to be seen.
Compute and AI5
Tesla has already stated that the AI4 computer has unused compute power, but it is running into memory limitations in newer FSD builds due to the sheer volume of incoming data. That said, Tesla has hinted at optimizations to better manage memory on AI4.
Would integrating data from an additional camera overwhelm the system? Probably not in terms of compute, but memory efficiency remains a key area for improvement—especially as Tesla plans to triple both the model size and context window in upcoming FSD versions.
On the other hand, the Cybercab is set to launch with its own unique, more powerful AI5 computer. At the We, Robot autonomy event in October, Elon Musk confirmed that AI5 was designed for redundancy and higher safety. Tesla has been working on parallelizing FSD computations for some time—but we’ll explore that in a separate article.
Wrapping it Together
Putting it all together: the bumper camera has arrived, and Tesla doesn’t add hardware without a purpose. While it’s not yet in use for FSD, Tesla is likely gathering footage to train future models. The AI4 computer has the compute power to handle an additional data stream—but will Tesla actually integrate it?
If we were to go out on a limb, we'd say that Unsupervised FSD will likely require a bumper camera to be part of the Robotaxi network, but there's another compelling reason. Tesla currently offers Supervised FSD by subscription or outright purchase, but you can no longer buy FSD the way it was originally marketed. That changed back in September 2024, when Tesla updated its websites globally to list Supervised FSD as the product and feature being sold. That could have an impact on how Tesla offers Unsupervised vs Supervised FSD in the future.
So, will a bumper camera be necessary? We think so. Will it be a retrofit? Possibly. Tesla has already confirmed that they will retrofit HW3 vehicles with improved hardware in the future, meaning that other FSD hardware upgrades aren’t completely off the table.
However, retrofitting a bumper camera is complex, requiring extensive disassembly and running wiring through the frunk and firewall to the AI computer. It's possible that the camera will primarily be used for low-speed parking lot maneuvers, where supervision will be required, while Unsupervised FSD will only be available on city streets and highways. The biggest issue is what happens when a vehicle that was asleep can only leave its spot by driving forward, where it has a large blind spot.
While the exact role of the front bumper camera remains uncertain, its presence in newer models suggests it could be critical for a fully autonomous vehicle. Whether it becomes a requirement for the vehicle to start driving from a parked position without anyone inside, or whether it will only be required in parking lots or as part of the Robotaxi network, remains to be seen.
Tesla’s latest software update, version 2025.2, brings new features to Service Mode, continuing the trend of improving in-vehicle diagnostics.
Currently, this feature is only available for vehicles with AMD Ryzen infotainment systems and requires Service Mode+, which is a subscription service aimed at technicians. Intel-powered vehicles aren’t supported yet, but we expect this feature to roll out to them as well unless hardware limitations prevent it.
Thanks to Spencer for providing an image of the panel in action.
Signal Viewer Panel
Update 2025.2.6 adds four Service Mode improvements, including updates to brake burnishing, charge port calibration, and the noise recording panel. In this article, we'll focus on the new signal viewer panel.
This new panel offers a live data feed from selectable vehicle sensors. You select the signals you're interested in, and the panel plots them on a graph. In addition to letting you view real-time signal data, it also allows you to record it.
The signals are searchable and can be easily added or removed from the panel. You can track up to 10 sensors, and the UI allows three of them to be viewable at once.
However, this is Service Mode and it’s more than just pretty looks. You can really dig down into these charts. You can pan them left and right through time, and tapping a specific point shows the exact value of that signal. The panel also supports pinch-to-zoom, enabling you to adjust the time scale across all panels simultaneously.
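To make the "tap a point, get the exact value" behavior concrete, here is a hypothetical sketch of the core data structure such a viewer needs: a rolling buffer of timestamped samples that can be queried at any point in time. The class name and structure are our own illustration and say nothing about Tesla's actual implementation:

```python
import bisect
from collections import deque

class SignalBuffer:
    """Rolling buffer of timestamped sensor samples (illustrative sketch)."""

    def __init__(self, max_samples: int = 1000):
        # Old samples fall off the back automatically, like a scrolling chart.
        self.times = deque(maxlen=max_samples)   # monotonically increasing timestamps
        self.values = deque(maxlen=max_samples)  # matching sensor readings

    def record(self, t: float, value: float) -> None:
        self.times.append(t)
        self.values.append(value)

    def value_at(self, t: float) -> float:
        """Return the most recent sample at or before time t (a 'tap' on the chart)."""
        idx = bisect.bisect_right(list(self.times), t) - 1
        return self.values[max(idx, 0)]

buf = SignalBuffer()
for t in range(5):
    buf.record(t, t * 2.0)   # a made-up, steadily ramping signal
print(buf.value_at(2.5))     # 4.0 — the sample recorded at t=2
```

Panning and zooming then amount to re-querying the buffer over a different time window, which is why the real panel can rescale all charts at once.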
This feature is exclusive to Service Mode+, which requires a subscription to Tesla’s ToolBox3 software and a connection to a computer. It’s designed to help technicians diagnose issues related to signal quality, noise in the vehicle’s electrical systems, and signal variance in components during driving.