In a recent series of posts on X, George Hotz, a figure renowned for his contributions to hacking and autonomous driving technology, offered a sobering perspective on the state of autonomous vehicles. Contrary to the firmly held beliefs of Tesla and, in particular, Elon Musk, Hotz suggests that full autonomy remains a distant goal, potentially more than a decade away. This stance is a departure from the optimistic timelines often presented by major players in the AV industry.
The Path to Full Autonomy: A Decade of Development
Hotz acknowledges that while significant milestones will mark the journey to full autonomy, the ultimate goal remains elusive. The distinction between a vehicle's ability to predict and to act autonomously in a loop is vast. Hotz emphasizes that a fully autonomous car must exhibit "agentic behavior" far beyond anything currently demonstrated by existing technologies.
We'll get many useful intermediates on the path to full autonomy in the next 10 years, but the difference between predicting and acting in loop is huge.
A fully autonomous car is an agent so far beyond any agentic behavior I have seen to date. Sorry about bruising your hype.
Despite the long road ahead, Hotz is optimistic about the advancements that will emerge in the interim. Technologies allowing "eyes off" driving on highways could become available within the next decade. These limited-scope capabilities will pave the way for more sophisticated systems, gradually bridging the gap towards full autonomy.
Tesla's Profitability vs. Waymo's Vision
Hotz's commentary began as a response to contrasting views on Tesla's and Waymo's progress toward full autonomy. While some believe Tesla is on the brink of achieving fully autonomous driving, others see Waymo as the front-runner. However, Hotz points out a critical distinction: Tesla is profitable as a company, while Waymo continues to face financial challenges. This underscores that autonomy is a challenge not only technically but also as a viable business model.
Understanding Human Behavior: The Ultimate Challenge
Hotz concurs that autonomous vehicles must understand general human and world behavior to operate without geofencing. This level of comprehension is critical, as it encompasses learning from both on-road experiences and off-road simulations. Developing a "general agent" capable of navigating the myriad scenarios drivers encounter daily is fundamental to achieving true autonomy.
Exactly. Limited scope will be sub 10, like eyes off driving on highway. But a fully autonomous car needs to be a general agent.
It's a long road, but I think it's sub 20 years for human level agentic AI. I've devoted my life to this problem.
In his posts, Hotz reveals a deep commitment to solving the autonomous driving puzzle. Estimating that human-level agentic AI could be a reality in less than two decades, he shares his dedication to contributing toward this ambitious goal. His perspective not only tempers the prevailing "hype" around autonomy but also highlights the magnitude of the challenge ahead.
Last week, Mark Rober, an engineering YouTuber best known for his glitter bombs, released a video in which he tested Tesla's Autopilot against various conditions - including the iconic ACME painted wall.
During this test, many people noted that Mark was using Autopilot rather than FSD, even though his video was titled “Can You Fool a Self-Driving Car?”. The Tesla on Autopilot went up against a vehicle equipped with Luminar’s LIDAR rig, running some sort of basic autonomy or safety software.
New Video Tests FSD
Many people were disappointed with Mark’s video and his testing methods, so several creators got to work to actually test out Tesla’s FSD.
Creator Kyle Paul over on X made a much better follow-up video, using both an HW3 Model Y and an AI4 Cybertruck. In a relatively unsurprising turn of events, the Cybertruck successfully detected the wall, slowed down, and came to a stop. The Cybertruck was running FSD 13.2.8.
Kyle’s team did a fantastic job building the wall and testing it in a private area using FSD rather than Autopilot. On top of that, they re-ran the test several times and recorded everything from both inside and outside the vehicles. While Mark’s video was more for entertainment, Kyle set out to show what would actually happen in this unlikely scenario.
Sadly, the HW3 Model Y was unable to detect the wall, and manual intervention was required in each test. While the Model Y was running FSD 12.5.4.2 rather than an FSD V12.6 build, we don’t expect this to have had a significant impact on the test - this is more of an issue with how computer vision analyzes the environment.
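Tesla's vision stack is proprietary, so nothing here reflects its actual implementation, but the general problem is easy to demonstrate with an openly available monocular depth model such as MiDaS. The sketch below is purely illustrative - it assumes PyTorch, OpenCV, internet access for torch.hub to fetch the pretrained weights, and a hypothetical wall.jpg photo - and shows how depth gets inferred from a single camera frame, which is exactly the kind of estimate a photorealistic painted wall is designed to mislead.

```python
# Illustrative only: estimate depth from a single image with the openly
# available MiDaS model (not Tesla's stack). "wall.jpg" is a hypothetical
# photo of a painted-wall scene.
import cv2
import torch

midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")  # small pretrained model
midas.eval()

midas_transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = midas_transforms.small_transform  # resize/normalize for MiDaS_small

img = cv2.cvtColor(cv2.imread("wall.jpg"), cv2.COLOR_BGR2RGB)
input_batch = transform(img)

with torch.no_grad():
    prediction = midas(input_batch)
    # Upsample the prediction back to the original image size
    prediction = torch.nn.functional.interpolate(
        prediction.unsqueeze(1),
        size=img.shape[:2],
        mode="bicubic",
        align_corners=False,
    ).squeeze()

depth_map = prediction.cpu().numpy()  # relative inverse depth: larger = closer
print(depth_map.shape, depth_map.min(), depth_map.max())
```

Because the painted wall reproduces the texture and perspective cues of the road behind it, a single-frame estimate like this can place the “road” well past the wall’s true distance; higher-resolution input and better-trained models reduce, but don’t automatically eliminate, that failure mode.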
There are several major differences between HW3 and AI4 (formerly HW4). The first is that the FSD build running on AI4 is more advanced, as the hardware is capable of processing far more data. AI4 also features much higher-resolution cameras than HW3, and with FSD V13, Tesla recently added the ability to process the video feeds at full resolution. This could have made the difference, although it’s not entirely clear. If HW3 receives a version of FSD V13 in the future, it could be retested to see whether it passes the “ACME wall” test.
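For a rough sense of that data gap, here is a back-of-the-envelope comparison of raw pixel throughput. The per-camera resolutions (~1.2 MP on HW3, ~5 MP on AI4), the 36 fps capture rate, and the 8-camera count are commonly cited community estimates rather than official Tesla specifications, so treat every number below as an assumption.

```python
# Back-of-the-envelope comparison of raw camera data on HW3 vs. AI4.
# All figures below are community estimates, not official Tesla specs.

CAMERA_COUNT = 8   # assume an 8-camera suite on both platforms (approximation)
FPS = 36           # commonly cited capture rate (assumption)
HW3_MP = 1.2       # reported per-camera resolution on HW3, in megapixels
AI4_MP = 5.0       # reported per-camera resolution on AI4, in megapixels

def raw_pixels_per_second(megapixels: float) -> float:
    """Raw pixels per second across the whole camera suite."""
    return megapixels * 1e6 * CAMERA_COUNT * FPS

hw3_rate = raw_pixels_per_second(HW3_MP)
ai4_rate = raw_pixels_per_second(AI4_MP)

print(f"HW3: {hw3_rate / 1e9:.2f} gigapixels/s")
print(f"AI4: {ai4_rate / 1e9:.2f} gigapixels/s")
print(f"AI4 sees roughly {ai4_rate / hw3_rate:.1f}x more raw pixel data")
```

Even as a rough sketch, this helps explain why full-resolution processing only arrived with AI4-class compute: there are several times more raw pixels to push through the network every second.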
Watch
Kyle’s entire video is below. It’s only 10 minutes long, so definitely give it a watch. Props to Kyle on the quick and thorough execution.
What Does This Mean for FSD?
We broke down Mark’s test and examined all the little issues we discovered after doing some in-depth research - you can read our analysis here.
Putting aside the issues with Mark’s testing and looking at the new results instead, it seems that if you ever had to face Wile E. Coyote and his ACME tools in your Tesla, cartoon logic may win if you’re in an HW3 vehicle. If you’re in an AI4 vehicle, you’ll likely come to a safe stop.
Vehicle depth perception is definitely something Tesla has been working hard to improve - some fairly drastic improvements came with FSD V13 that haven’t fully made their way to FSD V12 just yet. Future versions of FSD for HW3 may be able to successfully detect the wall. So Kyle - if you’re reading this - don’t get rid of that wall. We’d love to see more testing in the future.
However, this entire test scenario is so far out of left field that there’s a good chance it would fool some human drivers as well. The most important takeaway is that the future of autonomy won’t fall for these tricks, so it’s very unlikely anyone could weaponize this idea - it would only work on a small segment of vehicles.
If Wile E. Coyote is after you, someone else may drive into the wall before your Tesla does.
We’re not kidding - this really happened already. This isn’t a realistic scenario outside of someone trying to pull an insane prank, but it’s good to know that FSD V13 is capable of avoiding it.
Tesla regularly holds quarterly all-hands meetings for employees, but last night marked the first time the company has live-streamed the event for the public.
The meeting primarily focused on Tesla’s employees, recent achievements, and the future of Tesla. While it didn’t reveal much new information, it was interesting to see Elon Musk candidly engaging with his teams, who seem to genuinely enjoy working with him. Still, there were a few noteworthy takeaways.
As with Tesla’s Earnings Calls and other live events, we’ve put together a concise, easy-to-digest recap of everything discussed.
General Points
Work-related injuries have declined over time
Planning to expand to new markets
Cell Manufacturing
Continuing to invest in battery supply
Producing the cheapest cell, with the lowest cost per kWh
The Supercharger network continues to grow
Vehicle range and charging speed should match human needs - how long people drive before needing a break, and how long those breaks last