Tesla has started rolling out Full Self-Driving (Supervised) v14.3 to HW4 vehicles, and the headline change is under the hood: Tesla rewrote the AI compiler and runtime from scratch on MLIR, which the automaker says delivers a 20% faster reaction time. The update, shipping as software version 2026.2.9.6, also brings a new parking spot pin on the map, better behavior around emergency vehicles and school buses, and Tesla’s first public acknowledgement that it’s leaning on MLIR, the compiler infrastructure built by Chris Lattner, who briefly led Tesla Autopilot back in 2017.

What’s new in FSD v14.3

Here are Tesla’s official release notes for Full Self-Driving (Supervised) v14.3, shipping on build 2026.2.9.6 for HW4 Model S, 3, X, Y, and Cybertruck:

- Upgraded the Reinforcement Learning (RL) stage of training the FSD neural network, resulting in improvements in a wide variety of driving scenarios.
- Upgraded the neural network vision encoder, improving understanding in rare and low-visibility scenarios, strengthening 3D geometry understanding, and expanding traffic sign understanding.
- Rewrote the AI compiler and runtime from the ground up with MLIR, resulting in 20% faster reaction time and improving model iteration speed.
- Mitigated unnecessary lane biasing and minor tailgating behaviors.
- Increased decisiveness of parking spot selection and maneuvering.
- Improved parking location pin prediction, now shown on a map with a P icon.
- Enhanced response to emergency vehicles, school buses, right-of-way violators, and other rare vehicles.
- Improved handling of small animals by focusing RL training on harder examples and adding rewards for better proactive safety.
- Improved traffic light handling at complex intersections with compound lights, curved roads, and yellow light stopping, driven by training on hard RL examples sourced from the Tesla fleet.
- Improved handling for rare and unusual objects extending, hanging, or leaning into the vehicle path by sourcing infrequent events from the fleet.
- Improved handling of temporary system degradations by maintaining control and automatically recovering without driver intervention, reducing unnecessary disengagements.

Tesla also lists three items under “Upcoming Improvements” that are not yet in this build:

- Expand reasoning to all behaviors beyond destination handling.
- Add pothole avoidance.
- Improve driver monitoring system sensitivity with better eye gaze tracking, eye wear handling, and higher accuracy in variable lighting conditions.

The release builds on FSD v14 and v14.2, the first end-to-end neural-net releases to ship on HW4 at scale, and does not include any HW3 support. AI4 (HW4) remains the only hardware path forward for FSD updates.

The MLIR rewrite, and a nod from Chris Lattner

The single most interesting line in the release notes is the one about the compiler: “Rewrote the AI compiler and runtime from the ground up with MLIR, resulting in 20% faster reaction time and improving model iteration speed.”

MLIR (Multi-Level Intermediate Representation) is a compiler infrastructure project under the LLVM Foundation, originally started at Google and now widely used across the ML industry to compile neural networks down to specific hardware. It was created by Chris Lattner, the same engineer who built LLVM, Clang, and Apple’s Swift programming language, and who very briefly ran Tesla’s Autopilot software team in early 2017 before leaving after about six months.

Lattner weighed in on the v14.3 notes on X shortly after the rollout started: “Cool to see that Tesla Full Self Driving has adopted the @LLVMFoundation MLIR stack, and is seeing 20% faster reaction time as a result.
It is quite likely that a modern compiler and runtime implementation is the break-through that robotaxi and FSD have been waiting for!”

Coming from Lattner, that’s not a throwaway endorsement. He knows the Autopilot stack from the inside, he built the compiler framework Tesla is now running on, and he is arguably the most credible person on the planet to judge whether a 20% reaction-time gain from a compiler swap is plausible. He clearly thinks it is.

A 20% latency reduction is a big deal for a driving stack. Reaction time is the gap between the cameras seeing something and the car acting on it, and shaving it down means the same neural net can brake earlier, swerve sooner, and handle edge cases that previously arrived at the planner a few frames too late.

Parking, emergency vehicles, and fewer disengagements

Beyond the compiler, the user-visible changes in v14.3 mostly target the two areas where FSD still frustrates owners the most: parking and weird edge cases.

The new parking spot pin on the map, combined with “increased decisiveness of parking spot selection and maneuvering,” is Tesla’s attempt to fix the behavior where the car would roll into a lot and hesitate between spaces. The “P” icon now tells you where the car thinks it’s going to park before it gets there.

The enhanced response to “emergency vehicles, school buses, right-of-way violators, and other rare vehicles” and the improved handling of small animals are the kind of long-tail fixes that only come from mining fleet data for rare events, which is exactly what Tesla says it did for this release. The note on “temporary system degradations” recovering without driver intervention is also notable, because those are the kinds of fleeting camera or compute hiccups that have historically triggered unnecessary disengagements.
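To put the headline 20% reaction-time figure in rough perspective, here is a back-of-the-envelope sketch of how much distance a latency cut buys back before the car starts to act. The 500 ms baseline latency is a hypothetical placeholder; Tesla has only published the relative 20% improvement, not absolute numbers.

```python
# Back-of-the-envelope: distance traveled while the system is still reacting.
# The baseline latency below is hypothetical -- Tesla has only disclosed the
# relative 20% improvement, not an absolute perception-to-action latency.

def reaction_distance(speed_mps: float, latency_s: float) -> float:
    """Distance covered during the reaction latency, at constant speed."""
    return speed_mps * latency_s

speed = 65 * 0.44704        # 65 mph expressed in meters per second (~29.1 m/s)
baseline = 0.5              # hypothetical 500 ms perception-to-action latency
improved = baseline * 0.8   # "20% faster reaction time"

saved = reaction_distance(speed, baseline) - reaction_distance(speed, improved)
print(f"Distance saved per event: {saved:.2f} m")  # prints "Distance saved per event: 2.91 m"
```

Under those assumed numbers, the car begins braking or swerving roughly three meters sooner at highway speed, which is the kind of margin that matters for the edge cases described above.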
Tesla also quietly renamed “Autopilot” to “Self-Driving” across most of the UI in this update: the Autopilot tab under Controls is now “Self-Driving,” and “Autopilot Features” is now “Self-Driving Features,” with TACC, Autosteer, and FSD underneath.

Electrek’s Take

The MLIR rewrite is the most substantive thing in this release, and it’s also the most honest. Tesla almost never talks publicly about its software infrastructure, and when it does, it’s usually vague marketing language about “neural nets” and “end-to-end.” Shipping a release note that names a specific open-source compiler project and attaches a concrete 20% number to it is unusually specific for Tesla, and it’s the kind of claim that the compiler community, including people like Chris Lattner, can actually evaluate.

We should be careful about what “20% faster reaction time” does and doesn’t mean. It’s an inference-latency improvement on the same hardware, not a capability jump. It does not take FSD from supervised to unsupervised. It does not close the gap with Waymo, which is running a genuinely driverless commercial service in multiple cities while Tesla is still shipping a Level 2 system that requires an attentive driver. Compilers don’t solve the hard part of autonomy: the hard part is the behavior the neural net produces, not how fast it produces it.

But latency is the kind of boring engineering problem that compounds. If Tesla really did get 20% back from a compiler rewrite, that’s margin it can spend on other things, and Lattner, who would know, clearly thinks it matters. The interesting question is whether Tesla can keep finding gains like this, or whether v14.3 is the easy win before the curve flattens again. The entire process has been two steps forward, one step back, and it feels like we are still thousands of steps away from Tesla delivering what it sold to customers: unsupervised autonomy.