Rimac announces the Verne robotaxi, which looks similar to Tesla’s concepts
Rimac, the company behind the Rimac Nevera electric hypercar, has announced that it intends to produce a robotaxi, and it looks quite similar to Tesla’s concepts. Much of what we’ve heard about Tesla’s upcoming robotaxi, the Cybercab, is featured in Rimac’s autonomous vehicle. From the two seats to the airy, center-screen-focused interior, it’s all here, although there are significant differences as well. Rimac’s prototype, called Verne, was revealed on Wednesday, June 26th.
The Verne will include a 43” display (Image: MotorTrend)
The Verne is expected to begin operation in 2026 and is a two-seater robotaxi built on Mobileye’s LiDAR-based technology. The vehicle is expected to be a Level 4 autonomous vehicle, which means it would still require remote support for handling complex situations, similar to Waymo’s operations in San Francisco.
The Verne has a 43” display and 17 speakers, and is supposedly designed to emulate “a room on wheels”, with an inside-out design concept. Interestingly, rather than regular doors, the Verne has doors that slide forward horizontally, along with a keypad-based entry system.
A smaller screen between the front seats lets you control certain aspects of the vehicle (Image: MotorTrend)
Rimac says it has signed agreements to launch in 11 cities across the EU, the UK, and the Middle East, and that it is negotiating contracts with 30 more cities worldwide.
Rimac also showed off images of its robotaxi app and a concept building for its robotaxis – presumably a charging and service hub.
The Verne will feature sliding doors, a lot like a minivan (Image: MotorTrend)
Comparing Rimac’s Robotaxi to Tesla’s
Although Tesla has yet to reveal the Cybercab, there are several things Tesla has already talked about for its upcoming robotaxi. One key difference between Rimac’s vision and Tesla’s is that Tesla appears to be chasing the cheapest possible transport, previously touting ride prices that would rival bus tickets, while Rimac appears to focus more on an ideal experience. While everyone loves extra luxury, at the end of the day, price usually wins.
The Rimac robotaxi app (Image: MotorTrend)
One example is Tesla’s single center screen, compared to Rimac’s two screens. In addition to the 43” center display, which presumably is not a touchscreen, Rimac has a separate screen and controls between the two passenger seats. Tesla’s approach appears to focus on a single screen, with the user controlling many of the car’s functions, such as music and climate, through Tesla’s robotaxi app.
Another example is Rimac’s idea of including an entry pad and screen on the outside of the vehicle so passengers can unlock it. Tesla’s approach to unlocking a vehicle is expected to rely on temporary keys tied to users’ phones leveraging ultra-wideband (UWB), much like how Tesla’s phone keys work today on newer vehicles.
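For illustration, here’s a minimal Python sketch of how a time-limited, phone-bound ride key could work in principle. The names, token format, and flow are our own assumptions for the example – Tesla hasn’t published its actual protocol, and the real system would run over UWB/BLE with proper key management:

```python
import hashlib
import hmac
import secrets
import time

# Hypothetical sketch of a temporary ride key: the fleet backend mints a
# short-lived credential for one ride, and the vehicle verifies it offline.
SERVER_SECRET = secrets.token_bytes(32)  # shared fleet secret (assumed)

def issue_ride_key(ride_id: str, valid_seconds: int = 900) -> dict:
    """Backend: mint a key the rider's phone can present to the vehicle."""
    expiry = int(time.time()) + valid_seconds
    payload = f"{ride_id}|{expiry}".encode()
    tag = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"ride_id": ride_id, "expiry": expiry, "tag": tag}

def vehicle_accepts(key: dict) -> bool:
    """Vehicle: check the signature and that the key hasn't expired."""
    if time.time() > key["expiry"]:
        return False
    payload = f"{key['ride_id']}|{key['expiry']}".encode()
    expected = hmac.new(SERVER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, key["tag"])

key = issue_ride_key("ride-123")
print(vehicle_accepts(key))  # True until the key expires
```

The appeal of this kind of approach is that nothing physical changes hands: the key expires on its own, so there’s no code for the next passenger to reuse.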
Tesla’s approach to autonomy is also drastically different from Mobileye’s, which relies on radar, LiDAR, and more cameras than Tesla’s current Autopilot suite.
Viability
This announcement from Rimac is a bit of an oddity. As a company, Rimac has produced fewer than 150 vehicles in its short lifespan – all hand-designed and hand-built Rimac Nevera hypercars. While the Verne is visually appealing, Rimac’s ability to scale production beyond a handful of these robotaxis is questionable at best.
On the same front, Rimac recently received a €200 million grant from the EU as part of a package to develop an economic recovery plan for Croatia. Rimac has also received €80 million in funding from Hyundai and Kia – but that was to collaborate on a high-performance fuel-cell electric vehicle and a high-performance EV sports car.
The exterior of the Verne robotaxi (Image: MotorTrend)
Beyond that, Rimac has never done any work on autonomy – the self-driving tech running the Verne is entirely outsourced from Mobileye. It seems the Verne will serve as Mobileye’s real-world test of whether its technology can power a robotaxi platform on its own.
Tesla previously used Mobileye’s technology for its own autonomy efforts in its early years (Autopilot 1) but quickly moved toward using its own vision-based camera tech instead.
Back in 2021, while Giga Berlin was still undergoing construction, Elon Musk said that he wanted to fill the factory with graffiti artwork. Just months later, Tesla posted a submission link to find local artists for the project.
The project remained relatively quiet for about two years until Musk resurfaced with a post congratulating the team on their progress—and revealing that the factory’s concrete would be entirely covered in art. By 2023, that vision was already taking shape. Tesla began by collaborating with local artists, who created much of the artwork seen in the 2023 image below.
The Giga Berlin west side in 2023 (Image: Not a Tesla App)
Graffiti at Scale
More of the awesome digital artwork (Image: @tobilindh on X)
As expected from Tesla, they didn’t just hire a group of artists to scale the walls and paint. True to their ethos of autonomy, robotics, and innovation, they sought a more futuristic approach. The local crews couldn’t work fast enough or cover enough ground, so Tesla did what it does best—push the boundaries of technology.
Covering an entire factory in art is a massive undertaking, especially when that factory spans 740 acres (1.2 sq mi / 3 km²). With such an immense canvas, Tesla needed a high-tech solution.
Enter a graffiti start-up that had developed a robotic muralist. Tesla partnered with the company, sourcing digital artwork from independent artists while also commissioning pieces from its in-house creative team. Armed with this collection, the robot meticulously printed the artwork directly onto the factory’s concrete, turning Gigafactory Berlin-Brandenburg into a futuristic masterpiece.
The Robot
This ingenious little robot is equipped with a precision printhead and a sophisticated lifting mechanism. It moves using two Kevlar cables that allow it to glide up, down, left, and right while a pair of propellers generates downforce to keep it steady against the wall.
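To picture how two cables can cover a whole wall, here’s a small Python sketch of the inverse kinematics involved. The anchor points and wall size are invented for the example, since the robot’s actual geometry hasn’t been published:

```python
import math

# Two cables hang from fixed mounts at the top corners of the wall (assumed
# positions). Winching each cable in or out sets the printhead's position.
LEFT_ANCHOR = (0.0, 30.0)    # top-left cable mount, meters (illustrative)
RIGHT_ANCHOR = (40.0, 30.0)  # top-right cable mount, meters (illustrative)

def cable_lengths(x: float, y: float) -> tuple[float, float]:
    """Inverse kinematics: cable lengths that hold the head at (x, y)."""
    left = math.dist(LEFT_ANCHOR, (x, y))
    right = math.dist(RIGHT_ANCHOR, (x, y))
    return left, right

# Move to the center of the wall, then one meter to the right.
print(cable_lengths(20.0, 15.0))
print(cable_lengths(21.0, 15.0))
```

Coordinating the two winch speeds along a path lets the head trace any line on the wall, while the propellers keep it pressed flat so the dots land precisely.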
The printhead itself is capable of painting approximately 10 million tiny dots per wall, adding up to a staggering 300 million dots just for the west-facing side of Giga Berlin. Each mural features five distinct colors, and the robot carries 12 cans of paint, ensuring it can keep working for extended periods without interruption.
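Those figures also give a quick sanity check in Python – at roughly 10 million dots per wall, 300 million dots works out to about 30 wall-sized murals on the west face alone:

```python
# Back-of-envelope from the stated numbers (both approximate).
dots_per_wall = 10_000_000
west_side_total = 300_000_000
print(west_side_total / dots_per_wall)  # ~30 murals on the west face
```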
Check out the video below to see the robot in action, along with mesmerizing time-lapse footage of the printing process. It’s an exciting glimpse into how Tesla is blending technology and creativity at Giga Berlin—and we can’t wait to see what’s next.
With FSD V13.2.6 continuing to make its way to AI4 vehicles, Tesla has been on a streak of minor FSD improvements since the launch of FSD V13 just a little over two months ago.
FSD V13 brought a new slate of features, including Start FSD from Park, Reverse, and Park at Destination. It also introduced full-resolution video input using the AI4 cameras at 36 Hz and leveraged training on the new Cortex supercomputer for faster and more accurate decision-making.
So, what’s next? Tesla gave us a sneak peek at FSD V14.
FSD V14
The standout feature of FSD V14 will be auto-regressive transformers. That’s a complex term for those unfamiliar with AI or machine learning, so we’ll break it down.
Auto-Regressive
An auto-regressive transformer processes sequential data in time, using that information to predict future elements based on previous ones. Imagine completing a sentence: You use the words already written to guess what comes next. This process isn't just about filling in the blank; it's about understanding the flow of the sentence and anticipating the speaker's intent.
FSD could analyze a sequence of camera images to identify pedestrians and predict their likely path based on their current movement and surrounding context. The system's auto-regressive nature allows it to learn from past sequences and improve its predictions over time, adapting to different driving scenarios.
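To make the loop concrete, here’s a toy Python sketch of auto-regressive prediction: each new point is predicted from the history so far, then appended to that history to predict the next one. The constant-velocity “model” is a stand-in for FSD’s learned network, purely for illustration:

```python
import numpy as np

def predict_next(history: np.ndarray) -> np.ndarray:
    """Predict the next (x, y) point from the sequence seen so far.
    A real model would be a neural network; this just extrapolates motion."""
    velocity = history[-1] - history[-2]  # most recent step
    return history[-1] + velocity

# Observed pedestrian positions (x, y) from past camera frames (made up)
trajectory = np.array([[0.0, 0.0], [0.3, 0.1], [0.6, 0.2]])

for _ in range(3):  # roll the model forward three steps into the future
    next_point = predict_next(trajectory)
    trajectory = np.vstack([trajectory, next_point])

print(trajectory[-3:])  # predicted future positions, each built on the last
```

The key property is the feedback: predictions become inputs, which is exactly what lets the model reason about sequences rather than single frames.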
Today, FSD reacts to what it sees, but soon it’ll be able to anticipate what will happen, much like humans do.
Transformers
The second part of that term is transformer, which is a component used to understand the relationships of elements inside a time sequence. It identifies which parts of the input are most crucial for making accurate predictions, allowing the system to prioritize information much like a human would. Think of it as weighing different pieces of evidence to arrive at a conclusion. For example, a transformer might recognize that a blinking turn signal is more important than the color of the car when predicting a lane change.
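Here’s that weighing step as a toy numerical example in Python. The vectors are invented, but they show how attention scores let the turn signal dominate the lane-change prediction while the car’s color is nearly ignored:

```python
import numpy as np

def attention(query, keys, values):
    """Simplified attention: score each element, softmax, blend the values."""
    scores = keys @ query                            # relevance per element
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax
    return weights @ values, weights

query = np.array([1.0, 0.0])        # "is this car about to change lanes?"
keys = np.array([[2.0, 0.1],        # blinking turn signal (highly relevant)
                 [0.1, 2.0]])       # paint color (barely relevant)
values = np.array([[1.0],           # evidence for a lane change
                   [0.0]])          # no evidence

context, weights = attention(query, keys, values)
print(weights)  # ~[0.87, 0.13] – the turn signal dominates
```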
Putting It Together
Putting all that together, Tesla’s use of auto-regressive transformers means they’ll be working on how FSD can predict the plans and paths of the world around it. This will improve FSD’s already powerful perception and allow it to predict how other vehicles and vulnerable road users (VRUs) will behave.
What it all comes down to is that FSD will be able to make better decisions and plan its paths by making more informed, human-like decisions. That will be a big step towards improving V13 – which already has some very effective decision-making.
Larger Model and Context Size
Ashok Elluswamy, Tesla’s VP of AI, stated that FSD V14 will see larger model and context sizes, which coincidentally are listed in the upcoming improvements section of FSD V13.2.6. If we compare what Ashok said to what’s listed there, the model and context sizes should grow by 3x.
Interestingly, Ashok says that AI4’s memory limits context size. Context is essentially the history of what the vehicle remembers, which it uses for future decisions. Since that history is held in memory, context size will always be bounded by available memory – and Ashok specifically noted that Tesla is restricted by the memory in the AI4 computer.
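A rough back-of-envelope sketch shows why context is memory-bound: a transformer caches keys and values for every layer at every timestep it remembers, so memory grows linearly with context length. All of these numbers are illustrative assumptions, not known FSD parameters:

```python
# Hypothetical model dimensions – illustrative only, not Tesla's actual values.
layers = 24          # assumed transformer depth
d_model = 1024       # assumed hidden size
bytes_per_value = 2  # fp16

def kv_cache_bytes(context_len: int) -> int:
    """Memory for the key/value cache: 2 (K and V) per layer per timestep."""
    return 2 * layers * context_len * d_model * bytes_per_value

for ctx in (2048, 3 * 2048):  # a 3x larger context, per the stated growth
    print(f"{ctx} steps -> {kv_cache_bytes(ctx) / 1e9:.2f} GB")
```

Tripling the context triples this cache, which is why a fixed-memory computer like AI4 puts a hard ceiling on how much history the model can carry.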
Leverage Audio Input
Tesla is already gathering audio data in existing FSD versions so that it can start training models with audio as well, truly making FSD more human-like. According to Ashok, FSD V14 will be the first version to take advantage of audio input for FSD driving. This will primarily be used for detecting emergency vehicles, but we can see this expanding to other sounds that help humans adjust their driving, such as car crashes, loud noises, honking, etc. At the very least, FSD could be more cautious when hearing a noise that matches an accident or vehicle honking.
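As a toy illustration of the kind of cue audio adds, here’s a simple Python sketch that flags siren-like sounds by looking for strong tonal peaks that sweep up and down in pitch. Everything here is our own assumption – a real system would feed spectrograms into a trained network rather than use a hand-written heuristic:

```python
import numpy as np

SAMPLE_RATE = 16_000  # assumed microphone sample rate

def dominant_freqs(audio: np.ndarray, frame: int = 2048) -> np.ndarray:
    """Peak frequency (Hz) of each short frame of audio."""
    usable = audio[: len(audio) // frame * frame].reshape(-1, frame)
    spectra = np.abs(np.fft.rfft(usable, axis=1))
    return spectra.argmax(axis=1) * SAMPLE_RATE / frame

def looks_like_siren(audio: np.ndarray) -> bool:
    freqs = dominant_freqs(audio)
    in_band = (freqs > 500) & (freqs < 1800)       # typical siren range
    sweeping = np.abs(np.diff(freqs)).mean() > 20  # pitch keeps moving
    return in_band.mean() > 0.8 and bool(sweeping)

# Synthetic test tone whose pitch sweeps between 600 and 1400 Hz, like a wail
t = np.linspace(0, 2, 2 * SAMPLE_RATE, endpoint=False)
inst_freq = 1000 + 400 * np.sin(2 * np.pi * 0.5 * t)
phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE
print(looks_like_siren(np.sin(phase)))  # True
```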
FSD V14 Release Date
We haven’t heard from Elon Musk or Ashok Elluswamy about when FSD V14 will arrive. Ashok previously stated that FSD V13.4 would see audio inputs being used, but at Tesla’s earnings call, Tesla said that audio input would become relevant in V14, making it seem like Tesla may scrap V13.4 in favor of V14.
Since Tesla is planning to launch its Robotaxi network in Texas this June, which is just four months away, FSD V14 may be the version used for its autonomous taxi fleet.