10 October 2025

Red light for Tesla as US regulators start autonomous vehicle investigation

The National Highway Traffic Safety Administration (NHTSA), part of the US Department of Transportation, has opened an investigation into traffic safety violations committed by Tesla vehicles operating with the carmaker's Full Self-Driving (FSD) system engaged.

The NHTSA’s Office of Defects Investigation (ODI) is opening a preliminary evaluation (PE) to assess the scope, frequency, and potential safety consequences of FSD executing driving manoeuvres that constitute traffic safety violations. This investigation concerns versions of FSD that Tesla has labelled as "FSD (Supervised)" and "FSD (Beta)." 

Tesla characterises FSD as an SAE Level 2 partial automation system requiring a fully attentive driver who is engaged in the driving task at all times. Level 2 partial automation systems are designed to support and assist the driver in performing certain aspects of the driving task, requiring the driver to supervise and intervene as necessary. The driver remains fully responsible at all times for driving the vehicle, including complying with applicable traffic laws. ODI's investigation will therefore focus, in particular, on whether certain driving inputs within the control authority of FSD undermine the driver's supervision when they are performed unexpectedly.

ODI stated that it has identified a number of incidents in which the inputs to the dynamic driving task commanded by FSD induced vehicle behaviour that violated traffic safety laws. Although reports of this nature span a variety of behaviours, the reports appear to most commonly involve two types of scenarios. The first type of scenario involves a vehicle operating with FSD proceeding into an intersection in violation of a red traffic signal. The second type of scenario involves FSD commanding a lane change into an opposing lane of traffic.

With respect to the first type of scenario, ODI has identified 18 complaints and 1 media report alleging that a Tesla vehicle, operating at an intersection with FSD engaged, failed to remain stopped for the duration of a red traffic signal, failed to stop fully, or failed to accurately detect and display the correct traffic signal state in the vehicle interface. Some complainants also alleged that FSD did not provide warnings of the system's intended behaviour as the vehicle was approaching a red traffic signal.

ODI added that it has identified 6 standing general order (SGO) reports in which a Tesla vehicle, operating with FSD engaged, approached an intersection with a red traffic signal, continued to travel into the intersection against the red light and was subsequently involved in a crash with other motor vehicles in the intersection. Of these incidents, 4 crashes resulted in one or more reported injuries. At least some of the incidents appeared to involve FSD proceeding into the intersection after coming to a complete stop. ODI's pre-investigative work, including coordination with the Maryland Transportation Authority and State Police, indicated that the problem may be repeatable, given that multiple subject incidents occurred at the same intersection in Joppa, Maryland. NHTSA understands that Tesla has since taken action to address the issue at this intersection.

With respect to the second type of scenario, ODI has identified 2 SGO reports, 18 complaints, and 2 media reports alleging that a Tesla vehicle, operating with FSD engaged, entered opposing lanes of travel during or following a turn, crossed double-yellow lane markings while proceeding straight, or attempted to turn onto a road in the wrong direction despite the presence of wrong-way road signs. Likewise, ODI has identified 4 SGO reports, 6 complaints, and 1 media report alleging that a Tesla vehicle, operating with FSD engaged, proceeded straight through an intersection in a turn-only lane or executed a turn at an intersection in a through lane despite the presence of lane markings or signals. Complaints also alleged that FSD did not provide warnings of the system's intended behaviour. Some complaints alleged that more than one of these failures occurred and, as such, the numbers are not cumulative. Some of the reported incidents appeared to involve FSD executing a lane change into an opposing lane of travel with little notice to a driver or opportunity to intervene.

ODI’s review will assess whether there was prior warning or adequate time for the driver to respond to the unexpected behaviour or to safely supervise the automated driving task. This review will assess any warnings to the driver about the system's impending behaviour; the time given to drivers to respond; the capability of FSD to detect, display to the driver, and respond appropriately to traffic signals; and the capability of FSD to detect and respond to lane markings and wrong-way signage. NHTSA's review will also consider any updates or modifications to the system(s) that may affect the performance of FSD with respect to obeying traffic safety laws and signals.

The ODI’s assessment will focus, in particular, on the types of traffic safety violations described above, as most reports identified thus far have centred around those behaviours. While the behaviours under investigation appear to occur most frequently at intersections, NHTSA’s investigation will encompass any other types of situations in which this behaviour may arise, such as when travelling adjacent to a lane of opposing traffic or when approaching railroad crossings. If other evidence received during this investigation involves other types of traffic safety violations, those may also be considered as part of this assessment.

The investigation will be followed closely by many in the vehicle insurance industry because of its legal implications for who is to blame in an accident involving a self-driving car. According to some media reports, a shift in responsibility away from the driver and towards the vehicle (and therefore potentially the manufacturer) could reshape the insurance market by moving claims away from personal lines and towards product liability, professional indemnity, or manufacturer-level captive insurance. Risk management considerations would therefore have to account for AI-driven software systems, which can be extremely complex.
