We have been hearing about Tesla’s Robotaxi concept for several years, but it seems like we may finally be getting close to this vehicle becoming a reality. Here is everything we know about the Robotaxi.
Official Reveal
Yesterday, Musk officially announced on X that Tesla would unveil the Tesla Robotaxi on August 8th, 2024. Tesla last unveiled a new vehicle back in November 2019 when they showed off the Cybertruck for the first time. Before that, they unveiled the Roadster 2.0 and the Tesla Semi at the same event in 2017, so these are certainly special moments that only come around once every few years.
While it's always possible that Tesla may have to move the Robotaxi's unveil date, it's exciting to think that Tesla may be just four months from unveiling this next-gen vehicle.
Robotaxi and Next-gen Vehicle
Another piece of information about the Robotaxi came out yesterday when Musk replied to a post by Sawyer Merritt. Sawyer posted that Tesla's upcoming "$25k" vehicle and the Robotaxi would not only be based on the same platform, but that the Robotaxi would essentially be the same vehicle without a steering wheel. Musk replied to the post with a simple "looking" emoji.
While it's not surprising that two of Tesla's smaller upcoming vehicles are going to be built on the same platform, it's a little more interesting that Musk chose to reply with that emoji when the post talks about the Robotaxi being the "Model 2" without a steering wheel. This raises the possibility of Tesla not only showing off the Robotaxi at the August 8th event, but also its upcoming next-gen car.
Production Date
Back during Tesla's Q1 2022 earnings call, Musk talked a little about the timeline for Tesla's Robotaxi, stating that they plan to announce the vehicle in 2023 and begin mass production in 2024.
Tesla has obviously missed its original 2023 unveil target. Now that it appears the Robotaxi and the next-gen vehicle will share a lot in common, the Robotaxi's production date could be similar to the next-gen vehicle's, which is currently slated to begin in "late 2025".
The difficulty in releasing an autonomous taxi, as the Robotaxi is meant to be, is the self-driving aspect. While Tesla has made great strides with FSD v12, the first version to come out of "beta," it's still a level-2 system that requires active driver supervision. A fully autonomous vehicle is still a big leap from where Tesla's FSD is right now, but as we saw with the jump from FSD v11 to v12, a lot can change in the next 18 to 24 months.
While we expect Tesla to remain focused on bringing its cheaper, next-gen vehicle to market ahead of potential competitors, the Robotaxi's production date could continue to shift in line with Tesla's progress on FSD.
The history of Tesla’s Robotaxi starts with CEO Elon Musk's Master Plan Part Deux, published in 2016.
At the time the concept was touted as normal Teslas with full self-driving (FSD) capability.
Once Tesla achieved Full Self-Driving, they would create a “Tesla Network” taxi service that would make use of both Tesla-owned vehicles and customer cars that would be hired out when not in use.
Once we get to a world of "robotaxis," it makes sense to continue evolving the interior of the vehicle to suit customer needs such as adding face-to-face seating, big sliding doors providing easy access, 4-wheel steering, easier cleaning, etc.
Tesla could even create a variety of Robotaxis that help meet specific needs. For example, Tesla could offer a vehicle that is better suited for resting, which could let you sleep on the way to your destination.
Another vehicle could be similar to a home office, offering multiple monitors and accessories that let you begin working as soon as you step inside the vehicle. Features such as these could bring huge quality of life improvements for some; giving people an hour or more back in their day.
The variety of Robotaxis doesn't need to end there. There could be other vehicles that are made specifically for entertainment such as watching a movie, or others that allow you to relax and converse with friends, much like you'd expect in a limousine.
Lowest Cost Per Mile
During Tesla's Q1 2022 financial results call, Musk stated that its robotaxi would be focused on cost per mile, and would be highly optimized for autonomy - essentially confirming that it will not include a steering wheel.
“There are a number of other innovations around it that I think are quite exciting, but it is fundamentally optimized to achieve the lowest fully considered cost per mile or km when counting everything”, he said.
During the call, Musk acknowledged that Tesla's vehicles are largely inaccessible for many people given their high cost, and he sees the introduction of Robotaxis as a way of providing customers with "by far the lowest cost-per-mile of transport that they've ever experienced." The CEO believes that the vehicle is going to result in a cost per mile cheaper than a subsidized bus ticket. If Tesla can achieve this, it could drastically change the entire automotive industry and redefine car ownership. Is Tesla's future still in selling vehicles or in providing a robotaxi service?
FSD Sensor Suite
Tesla hasn't revealed anything about the sensor suite that they're considering for the robotaxi, but given all of their work in vision and progress in FSD, it's expected to be the same or similar to what is available today, potentially with additional cameras or faster processing.
However, back in 2022, Musk gave this warning: “With respect to full self-driving, of any technology development I’ve been involved in, I’ve never really seen more false dawns or where it seems like we’re going to break through, but we don’t, as I’ve seen in full self-driving,” said Musk. “And ultimately what it comes down to is that to sell full self-driving, you actually have to solve real-world artificial intelligence, which nobody has solved. The whole road system is made for biological neural nets and eyes. And so actually, when you think about it, in order to solve driving, we have to solve neural nets and cameras to a degree of capability that is on par with, or really exceeds humans. And I think we will achieve that this year.”
With the Robotaxi unveil now approaching, it may not be long before we find out more details about Tesla's plan for the future and its truly autonomous vehicles.
Subscribe
Subscribe to our newsletter to stay up to date on the latest Tesla news, upcoming features and software updates.
As December approaches, Tesla’s highly anticipated Holiday update draws closer. Each year, this eagerly awaited software release transforms Tesla vehicles with new features and festive flair. If you’re not familiar with Tesla’s holiday updates, take a look at what Tesla has launched in the Holiday update the past few years.
For this chapter in our series, we’re dreaming up ways Tesla could improve the charging experience and even add some additional safety features. So let’s take a look.
Destination State of Charge
Today, navigating to a destination is pretty straightforward on your Tesla. Your vehicle will automatically let you know when and where to charge, as well as for how long. However, you’ll likely arrive at your destination at a low state of charge.
Being able to set your destination state of charge would be an absolute game-changer for ease of road-tripping. After all, the best EV to road trip in is a Tesla due to the Supercharger network. It looks like Tesla may be listening. Last week, Tesla updated their app and hinted at such a feature coming to the Tesla app. A Christmas present, maybe?
Battery Precondition Options
While Tesla automatically preconditions your battery when needed for fast charging, there are various situations where manually preconditioning the battery would be beneficial.
Currently, there is no way to precondition the battery for third-party chargers unless you "navigate" to a nearby Supercharger. Even then, if the Supercharger is close by, the short drive won't give the battery enough time to warm up, resulting in slower charging speeds.
Live Activities While Supercharging
While we already mentioned Live Activities in the Tesla app wishlist, they'd be especially useful while Supercharging. Live Activities are useful for short-term information you want to monitor, especially if it changes often — which makes them perfect for Supercharging, especially if you want to avoid idle fees.
Vehicle-to-Load / Vehicle-to-Home Functionality
The Cybertruck introduced Tesla Power Share, Tesla’s name for Vehicle-to-Home functionality (V2H). V2H allows an EV to supply power directly to a home. By leveraging the vehicle’s battery, V2H can provide backup power during outages and reduce energy costs by using stored energy during peak rates.
Tesla Power Share integrates seamlessly with Tesla Energy products and the Tesla app. We’d love to see this functionality across the entire Tesla lineup. Recently a third party demonstrated that bidirectional charging does work on current Tesla vehicles – namely on a 2022 Model Y.
Adaptive Headlights for North America
While Europe and China have had access to the Adaptive Headlights since earlier this year, North America is still waiting. The good news is that Lars Moravy, VP of Vehicle Engineering, said that these are on their way soon.
Blind Spot Indication with Ambient Lighting
Both the 2024 Highland Model 3 Refresh and the Cybertruck already have ambient lighting, but it doesn't currently serve a practical purpose beyond eye candy. So why not integrate that ambient lighting into the Blind Spot Warning system, so that the left or right side of the vehicle lights up when there's a vehicle in your blind spot? Currently, only a simple red dot lights up in the front speaker grill, and the on-screen camera view appears with a red border when signaling.
Having the ambient lighting change colors when a vehicle is in your blind spot would be a cool use of the technology, especially since the Model Y Juniper Refresh and Models S and X are supposed to get ambient lighting as well.
Tesla’s Holiday update is expected to arrive with update 2024.44.25 in just a few short weeks. We’ll have extensive coverage of its features when it finally arrives, but in the meantime, be sure to check out our other wishlist articles:
It’s time for another dive into how Tesla intends to implement FSD. Once again, a shout out to SETI Park over on X for their excellent coverage of Tesla’s patents.
This time, it's about how Tesla is building a “universal translator” for AI, allowing its FSD or other neural networks to adapt seamlessly to different hardware platforms.
That translation layer could allow a complex neural net — like FSD — to run on pretty much any platform that meets its minimum requirements. This would drastically reduce training time, help the network adapt to platform-specific constraints, and allow it to make decisions and learn faster.
We’ll break down the key points of the patents and make them as understandable as possible. This new patent is likely how Tesla will implement FSD on non-Tesla vehicles, Optimus, and other devices.
Decision Making
Imagine a neural network as a decision-making machine. But building one also requires making a series of decisions about its structure and data processing methods. Think of it like choosing the right ingredients and cooking techniques for a complex recipe. These choices, called "decision points," play a crucial role in how well the neural network performs on a given hardware platform.
To make these decisions automatically, Tesla has developed a system that acts like a "run-while-training" neural net. This ingenious system analyzes the hardware's capabilities and adapts the neural network on the fly, ensuring optimal performance regardless of the platform.
Constraints
Every hardware platform has its limitations – processing power, memory capacity, supported instructions, and so on. These limitations act as "constraints" that dictate how the neural network can be configured. Think of it like trying to bake a cake in a kitchen with a small oven and limited counter space. You need to adjust your recipe and techniques to fit the constraints of your kitchen or tools.
Tesla's system automatically identifies these constraints, ensuring the neural network can operate within the boundaries of the hardware. This means FSD could potentially be transferred from one vehicle to another and adapt quickly to the new environment.
Let’s break down some of the key decision points and constraints involved:
Data Layout: Neural networks process vast amounts of data. How this data is organized in memory (the "data layout") significantly impacts performance. Different hardware platforms may favor different layouts. For example, some might be more efficient with data organized in the NCHW format (batch, channels, height, width), while others might prefer NHWC (batch, height, width, channels). Tesla's system automatically selects the optimal layout for the target hardware.
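To make the layout difference concrete, here's a toy sketch (not Tesla's code; dimensions are made up) showing how the same logical tensor element lands at different flat-memory offsets under NCHW versus NHWC. Which offset pattern is faster depends on the hardware's memory-access behavior, which is why the choice is a decision point at all.

```python
# Hypothetical 4-D tensor shape: batch, channels, height, width.
N, C, H, W = 1, 2, 2, 2

def nchw_index(n, c, h, w):
    """Flat buffer offset of element (n, c, h, w) in NCHW layout."""
    return ((n * C + c) * H + h) * W + w

def nhwc_index(n, c, h, w):
    """Flat buffer offset of the same element in NHWC layout."""
    return ((n * H + h) * W + w) * C + c

# The same logical element sits at different memory offsets in each
# layout, so adjacent reads touch different parts of the buffer.
print(nchw_index(0, 1, 0, 0))  # 4
print(nhwc_index(0, 1, 0, 0))  # 1
```

In NCHW, all values of one channel are contiguous; in NHWC, the channels of one pixel are contiguous. Hardware that processes channels together tends to prefer the latter.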
Algorithm Selection: Many algorithms can be used for operations within a neural network, such as convolution, which is essential for image processing. Some algorithms, like the Winograd convolution, are faster but may require specific hardware support. Others, like Fast Fourier Transform (FFT) convolution, are more versatile but might be slower. Tesla's system intelligently chooses the best algorithm based on the hardware's capabilities.
Hardware Acceleration: Modern hardware often includes specialized processors designed to accelerate neural network operations. These include Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs). Tesla's system identifies and utilizes these accelerators, maximizing performance on the given platform.
Satisfiability
To find the best configuration for a given platform, Tesla employs a "satisfiability solver." This powerful tool, specifically a Satisfiability Modulo Theories (SMT) solver, acts like a sophisticated puzzle-solving engine. It takes the neural network's requirements and the hardware's limitations, expressed as logical formulas, and searches for a solution that satisfies all constraints. Try thinking of it as putting the puzzle pieces together after the borders (constraints) have been established.
Here's how it works, step-by-step:
Define the Problem: The system translates the neural network's needs and the hardware's constraints into a set of logical statements. For example, "the data layout must be NHWC" or "the convolution algorithm must be supported by the GPU."
Search for Solutions: The SMT solver explores the vast space of possible configurations, using logical deduction to eliminate invalid options. It systematically tries different combinations of settings, like adjusting the data layout, selecting algorithms, and enabling hardware acceleration.
Find Valid Configurations: The solver identifies configurations that satisfy all the constraints. These are potential solutions to the "puzzle" of running the neural network efficiently on the given hardware.
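The three steps above can be sketched in miniature. This is a hypothetical illustration, not Tesla's solver: a real SMT solver prunes the search space with logical deduction rather than brute force, but the idea of filtering a configuration space against hardware constraints is the same. All platform properties and option names below are invented.

```python
from itertools import product

# Decision points: the options the system can choose between.
layouts = ["NCHW", "NHWC"]
conv_algorithms = ["winograd", "fft", "direct"]
accelerators = ["gpu", "cpu"]

# A made-up platform description expressing its constraints.
platform = {
    "preferred_layout": "NHWC",
    "supports_winograd": False,
    "has_gpu": True,
}

def satisfies(layout, algo, accel, hw):
    """Return True if this configuration is valid on this hardware."""
    if layout != hw["preferred_layout"]:
        return False
    if algo == "winograd" and not hw["supports_winograd"]:
        return False
    if accel == "gpu" and not hw["has_gpu"]:
        return False
    return True

# Step 2 and 3: search the configuration space, keep valid solutions.
valid = [
    cfg for cfg in product(layouts, conv_algorithms, accelerators)
    if satisfies(*cfg, platform)
]
print(valid)
# [('NHWC', 'fft', 'gpu'), ('NHWC', 'fft', 'cpu'),
#  ('NHWC', 'direct', 'gpu'), ('NHWC', 'direct', 'cpu')]
```

Every surviving configuration respects all three constraints; picking the best one among them is the optimization step described next.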
Optimization
Finding a working configuration is one thing, but finding the best configuration is the real challenge. This involves optimizing for various performance metrics, such as:
Inference Speed: How quickly the network processes data and makes decisions. This is crucial for real-time applications like FSD.
Power Consumption: The amount of energy used by the network. Optimizing power consumption is essential for extending battery life in electric vehicles and robots.
Memory Usage: The amount of memory required to store the network and its data. Minimizing memory usage is especially important for resource-constrained devices.
Accuracy: Ensuring the network maintains or improves its accuracy on the new platform is paramount for safety and reliability.
Tesla's system evaluates candidate configurations based on these metrics, selecting the one that delivers the best overall performance.
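One simple way to rank candidates across several competing metrics is a weighted score, where lower latency, power, and memory are rewarded and higher accuracy is rewarded. The configurations, numbers, and weights below are entirely invented for illustration; the patent doesn't specify how Tesla weighs these metrics.

```python
# Hypothetical candidate configurations with made-up measurements.
candidates = [
    {"name": "config_a", "inference_ms": 12.0, "power_w": 35.0,
     "memory_mb": 900,  "accuracy": 0.981},
    {"name": "config_b", "inference_ms": 9.5,  "power_w": 48.0,
     "memory_mb": 1400, "accuracy": 0.979},
    {"name": "config_c", "inference_ms": 15.0, "power_w": 28.0,
     "memory_mb": 700,  "accuracy": 0.982},
]

def score(cfg):
    # Penalize latency, power, and memory; reward accuracy.
    # Weights are arbitrary and would be tuned per application.
    return (
        -1.0 * cfg["inference_ms"]
        - 0.5 * cfg["power_w"]
        - 0.01 * cfg["memory_mb"]
        + 100.0 * cfg["accuracy"]
    )

best = max(candidates, key=score)
print(best["name"])  # config_c
```

Note how the "best" answer depends on the weights: a real-time safety system might weight inference speed far more heavily than a battery-constrained robot would.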
Translation Layer vs Satisfiability Solver
It's important to distinguish between the "translation layer" and the satisfiability solver. The translation layer is the overarching system that manages the entire adaptation process. It includes components that analyze the hardware, define the constraints, and invoke the SMT solver. The solver is a specific tool used by the translation layer to find valid configurations. Think of the translation layer as the conductor of an orchestra and the SMT solver as one of the instruments playing a crucial role in the symphony of AI adaptation.
Simple Terms
Imagine you have a complex recipe (the neural network) and want to cook it in different kitchens (hardware platforms). Some kitchens have a gas stove, others electric; some have a large oven, others a small one. Tesla's system acts like a master chef, adjusting the recipe and techniques to work best in each kitchen, ensuring a delicious meal (efficient AI) no matter the cooking environment.
What Does This Mean?
Now, let’s wrap this all up and put it into context—what does it mean for Tesla? There’s quite a lot, in fact. It means that Tesla is building a translation layer that will be able to adapt FSD for any platform, as long as it meets the minimum constraints.
That means Tesla will be able to rapidly accelerate the deployment of FSD on new platforms while also finding the ideal configurations to maximize both decision-making speed and power efficiency across that range of platforms.
Putting it all together, Tesla appears to be preparing to license FSD, which is an exciting prospect. And not just for vehicles - remember that Tesla's humanoid robot, Optimus, also runs on FSD. FSD itself may become an extremely adaptable vision-based AI.