The National Highway Traffic Safety Administration has upgraded its investigation into Tesla’s Full Self-Driving system to an engineering analysis, the final stage of a defect investigation before the agency can demand a recall. The probe now covers approximately 3.2 million vehicles.

The problem is deceptively simple. FSD relies entirely on cameras. When those cameras can’t see, whether because of fog, rain, sun glare, or airborne dust, the system is supposed to detect its own degradation and hand control back to the driver. According to NHTSA, it doesn’t.

“Available incident data raise concerns that Tesla’s degradation detection system fails to detect and/or warn the driver appropriately under degraded visibility conditions such as glare and airborne obscurants,” the agency stated. In crashes reviewed by regulators, the system either entirely failed to recognize impaired visibility or issued alerts only immediately before impact, far too late for a human to react.
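Tesla has not published how its degradation detection works, so any concrete illustration is necessarily guesswork. The sketch below is exactly that: a minimal Python rendering of the behavior NHTSA says is missing, in which every name, metric, and threshold (FrameQuality, the contrast floor, the five-second reaction budget) is a hypothetical stand-in rather than anything from Tesla’s code.

```python
# Hedged sketch of the behavior regulators describe as absent: notice
# when the cameras are compromised, and warn early enough for a human
# to take over. Hypothetical names and thresholds throughout.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameQuality:
    contrast: float            # 0.0 (washed out) to 1.0 (crisp)
    glare_fraction: float      # share of pixels saturated by sun glare
    occlusion_fraction: float  # share obscured by fog, rain, or dust

def visibility_degraded(q: FrameQuality,
                        contrast_floor: float = 0.35,
                        glare_ceiling: float = 0.20,
                        occlusion_ceiling: float = 0.25) -> bool:
    """Return True once the camera feed can no longer be trusted."""
    return (q.contrast < contrast_floor
            or q.glare_fraction > glare_ceiling
            or q.occlusion_fraction > occlusion_ceiling)

def takeover_alert(q: FrameQuality, speed_mps: float,
                   reaction_budget_s: float = 5.0) -> Optional[str]:
    """Issue a takeover warning with real lead time, not at impact."""
    if visibility_degraded(q):
        margin_m = speed_mps * reaction_budget_s
        return (f"TAKE OVER: degraded visibility; driver needs "
                f"~{margin_m:.0f} m of clear road to react")
    return None

# Sun glare saturating 40% of the frame at highway speed (~65 mph):
glare = FrameQuality(contrast=0.6, glare_fraction=0.4, occlusion_fraction=0.05)
print(takeover_alert(glare, speed_mps=29.0))
```

The point of the toy is the second function: detection alone is worthless if the warning fires with no reaction margin left, which is precisely the failure mode the agency describes.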

Nine crashes have been linked to the failures so far, one of them fatal.

The investigation covers FSD’s degradation detection “both as originally deployed and later updated,” a pointed detail suggesting Tesla’s software patches haven’t resolved the core issue. Tesla’s own post-incident analysis indicated that a later software update might have changed the outcome in three of the nine incidents had it been installed at the time, a concession that the original system shipped with a known gap.

Unlike Waymo and other competitors, which pair cameras with lidar and radar, Tesla has bet exclusively on camera-based vision, with CEO Elon Musk repeatedly insisting lidar is unnecessary. That architectural choice is now the subject of federal scrutiny.
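To see why regulators care about the architecture and not just the software, consider a toy sensor-fusion check. Nothing here corresponds to any real autonomy stack; the 5-meter disagreement threshold and both function inputs are invented for illustration.

```python
# Illustrative only: why a second, independent sensor changes the
# failure mode. A camera-only stack has nothing to fall back on when
# glare blinds the camera; a lidar-equipped stack still has range data.
from typing import Optional

def fuse_range_estimate(camera_range_m: Optional[float],
                        lidar_range_m: Optional[float]) -> Optional[float]:
    """Return a usable range estimate, or None if the system must disengage."""
    if camera_range_m is not None and lidar_range_m is not None:
        # Cross-check: large disagreement itself signals a sensor fault.
        if abs(camera_range_m - lidar_range_m) > 5.0:
            return None  # conflicting sensors -> hand back control
        return (camera_range_m + lidar_range_m) / 2
    # Redundancy: either sensor alone can carry the estimate.
    return camera_range_m if camera_range_m is not None else lidar_range_m

# Camera blinded by glare, lidar unaffected:
print(fuse_range_estimate(None, 42.0))  # 42.0 -- still driving on lidar
# Camera-only architecture in the same glare (no lidar to fall back on):
print(fuse_range_estimate(None, None))  # None -- nothing left to drive on
```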

Tesla did not respond to Reuters’ request for comment.

For Tesla owners using FSD on public roads, the practical takeaway is stark: the system marketed as “Full Self-Driving” cannot reliably tell when it’s driving blind. A recall remains firmly on the table, and until regulators resolve the probe, the gap between branding and capability is not just misleading. It’s a safety defect.
