NTSB Report On Tesla Autopilot Accident Shows What's Inside And It's Not Pretty For FSD

NTSB

It seems the world picks on Tesla for crashes, and the release of the NTSB report on a January 2018 Model S Autopilot crash has generated lots of commentary and analysis.

We pick on Tesla because Autopilot is out there driving far more miles than other systems, and it's an incomplete driver assist system being monitored by ordinary drivers, so it's going to have a lot more crashes. As a driver assist system, fault for most of these crashes still lies with the supervising driver, but ever since Tesla declared that Autopilot is on the cusp of morphing into a "full self driving" product of some type later this year, it's been natural to examine the sort of mistakes it's making. Tesla Full Self Driving will be a new system, but it will almost surely use most of the core components found in Autopilot. How those components perform, and how well Tesla improves them, are areas of serious inquiry.

When the NTSB gets involved -- as it has in several Tesla crashes -- we get a window into the internals of these crashes that we don't otherwise get. Because of NTSB rules, Tesla is not allowed to comment until the investigation is over, and the investigations into fatalities are involved and still ongoing. This crash did not involve injuries, but the Model S did crash into the back of a parked fire truck that was deliberately blocking the left (carpool) lane so that crews could safely assist the victim of a motorcycle accident.

This is a scary situation for all vehicles. The fire truck was deliberately stopped in the lane. It was angled slightly so it would not look like a vehicle actually using the lane, making it clear, to humans at least, that it was deliberately closing the lane. Even so, the Tesla was following another vehicle which blocked its view of the situation. That leading car, seeing the fire truck, changed quickly into the lane to the right, as expected -- suddenly revealing the parked fire truck with about 4 seconds left to act.

The Tesla Autopilot and its driver did not react until about 0.5 seconds before the crash. Or rather, Autopilot initially reacted in the worst possible way: by speeding up. Fortunately, it was going only 20 mph in the traffic jam, and reached only 30 mph before hitting the truck.

Teslas, and systems like them, have problems with vehicles stopped on the road ahead of them, especially when they are revealed with little warning. They have this problem for a few reasons:

  1. Radar is excellent at tracking moving objects (like the vehicle the Tesla was following in traffic).
  2. Radar also sees stalled vehicles just fine, but because it only has a rough idea where the radar returns are coming from, it faces a problem. It also gets returns from everything else -- guardrails, signs, bridges, road debris and more -- all of which are stationary on the Earth. When it gets a radar return from a stopped fire truck, it has a tough time knowing if that's just the return from the guardrail. It can't brake every time it gets a radar return from a stopped object. It has to be more discerning.
  3. In this special case, because the truck was tilted, it is possible its radar returns would be weaker than usual due to the angle. The flat back of trucks and cars, and especially the shiny metal license plate, give wonderful radar reflections.
  4. Cameras see the scene, and computer vision tries to recognize objects like a truck, with an easier time if those objects are moving. These systems learn from huge numbers of tagged pictures of cars and trucks. It may be a problem that few of those pictures show a fire truck parked at an angle, so the system may have had trouble identifying it that day.
  5. A stereo camera (binocular vision) should have been able to identify the truck as blocking the road 3 seconds out. Tesla does not use stereo.
  6. Of course, a LIDAR would have easily detected the truck as soon as the other vehicle veered out of the way. Elon Musk has declared LIDAR a fool's errand, and Tesla does not use it. It should be noted that there is no production LIDAR at consumer cost which Tesla could put in cars today, or when this Model S was made.
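The radar problem in point 2 can be sketched in code. This is a minimal illustration of the general principle, not Tesla's actual logic: a radar return whose closing speed matches the car's own speed is stationary relative to the ground, and many ACC systems only act on such returns when another sensor corroborates them (the `vision_confirms` flag here is an invented stand-in for that corroboration).

```python
# Sketch: why radar-based cruise control tends to discard stationary returns.
# A return closing at exactly the ego vehicle's speed is stationary in the
# world frame -- it could be a stopped fire truck, or just a sign or guardrail.

def classify_return(ego_speed_mps, closing_speed_mps, tol=0.5):
    """Label a radar return as 'moving' or 'stationary' in the world frame."""
    ground_speed = ego_speed_mps - closing_speed_mps  # the object's own speed
    return "stationary" if abs(ground_speed) < tol else "moving"

def should_track(label, vision_confirms):
    # Act on stationary returns only when another sensor corroborates them;
    # otherwise they are indistinguishable from roadside clutter.
    return label == "moving" or vision_confirms

# At 9 m/s (about 20 mph), a stopped truck closes at 9 m/s:
print(classify_return(9.0, 9.0))          # -> stationary
print(should_track("stationary", False))  # -> False: the return is ignored
```

With no corroboration, the stopped truck's return is dropped -- exactly the failure mode described above.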

As such, when the lead car veered off, the Tesla decided that the lane in front of it was suddenly wide open. It calculated there was nobody in front for 120 meters and that it should immediately speed up. (I think that Tesla's TACC -- traffic-aware cruise control -- is a little too eager about that in general. This speed-up also happened in the tragic fatality in Silicon Valley.)

As it sped up, it finally detected that the truck was there. A new interpretation of the camera and radar data made it decide that the radar return it had been dismissing as not really there was something to worry about. It issued "forward collision warning" beeps to the driver, who did not react. Automatic Emergency Braking did not activate; it usually gives the driver some time to react first.
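The warn-first, brake-later staging described here is common across the industry. A minimal sketch, keyed on time-to-collision (TTC) with purely illustrative thresholds -- these are not Tesla's actual numbers:

```python
# Sketch of staged collision mitigation: FCW warns early so the driver can
# act; AEB only fires once the driver's reaction window has closed.
# Threshold values are illustrative assumptions.

def response(ttc_s, fcw_threshold=2.5, aeb_threshold=1.0):
    """Return the action a staged FCW/AEB system might take at a given TTC."""
    if ttc_s <= aeb_threshold:
        return "brake"   # AEB: no useful reaction window remains
    if ttc_s <= fcw_threshold:
        return "warn"    # FCW beeps; the driver is expected to act
    return "none"

print(response(0.5))  # -> brake
print(response(2.0))  # -> warn
```

The trouble in this crash is that the truck was only recognized about 0.5 seconds out, so the warning and the deadline for braking arrived essentially together.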

Wham.

There's also speculation about what the driver was doing. The driver understood Autopilot. There are accusations that the driver was distracted, possibly looking at a phone, and not looking up in the seconds leading to the crash, according to a witness in the next lane. Clearly, the driver didn't do his job here.

The report also contains the now very common misinterpretation of Tesla's system for detecting hands on the wheel, claiming that "The system detected driver's hands on the steering wheel for only 78 seconds out of 29 minutes and 4 seconds during which the Autopilot was active."

Tesla does not have a system to detect hands on the wheel. Instead, it detects the application of modest steering force to the wheel. You can, and many drivers do, keep your hands on the wheel without applying steering force. Drivers may apply a brief force every so often just to keep Tesla's system happy and avoid the warnings. This is not at all out of the ordinary. The driver says he was holding the wheel in what might be deemed a fairly light way, with his hand resting on his knee and gripping the wheel so as to torque it from time to time. This is not the "ready to grab" position one would normally recommend, but it is what some people do.
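The distinction is easy to see in code. This sketch (the sample values and threshold are illustrative assumptions, not Tesla's) shows why a torque-based check systematically under-reports hands on the wheel:

```python
# Sketch: a torque sensor only "sees" hands when applied force crosses a
# threshold. A light grip produces near-zero torque almost all the time.

def hands_detected(torque_samples_nm, threshold_nm=0.3):
    """Per-sample 'hands detected' flags from a torque-based check."""
    return [abs(t) >= threshold_nm for t in torque_samples_nm]

# A driver resting a hand on the wheel, nudging it once to silence the nag:
samples = [0.02, 0.05, 0.01, 0.45, 0.03, 0.02]   # newton-metres
detected = hands_detected(samples)
print(sum(detected), "of", len(samples))  # -> 1 of 6
```

By this measure the driver's hands were "off the wheel" five samples out of six, even though they never left it -- which is exactly how "78 seconds out of 29 minutes" can describe a driver who was holding the wheel the whole time.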

I've noticed a disturbing pattern in these incorrect reports about whether a driver had his or her hands on the wheel of a Tesla. I have to wonder why Tesla doesn't correct this error. The cynic in me wonders if they might like the error, because it makes the drivers who have crashes sound more negligent than they may have been.

That said, there have been calls for Tesla to improve its system for assuring driver attention. This could include using the internal camera to track the driver's gaze and detect when they have not looked at the road for too long -- something other carmakers have already done.

Since I often write about the limited usefulness of "connected vehicle" concepts, I should point out that this accident is an example of something that genuinely could make use of connectivity. Not "vehicle to vehicle" but rather "vehicle to cloud" -- the fire truck should record that it is stopping to block the lane in a public database subscribed to by cars approaching that area. (In fact, the 911 dispatchers should have created such an entry.) You may know that this already happens with tools like Waze, but via slower and less accurate human reporting. Indeed, it's entirely possible that in this situation Waze was giving its "vehicle stopped ahead" warnings (which annoyingly do not reveal which side or lane), but Tesla has no way to use them.
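The "vehicle to cloud" idea above can be sketched as a simple publish/query pattern. Everything here is invented for illustration -- the field names, the road label, and the in-memory list standing in for a cloud service:

```python
# Sketch of "vehicle to cloud" lane-closure reporting: the fire truck (or a
# 911 dispatcher) publishes a structured record; approaching cars query it.
import time

hazard_db = []  # stands in for a shared cloud database

def publish_closure(road, lane, lat, lon, reason):
    """The blocking vehicle (or dispatch) records the closure."""
    hazard_db.append({"road": road, "lane": lane, "lat": lat, "lon": lon,
                      "reason": reason, "ts": time.time()})

def closures_ahead(road, lane):
    """A subscribed car filters entries down to its own road and lane."""
    return [h for h in hazard_db if h["road"] == road and h["lane"] == lane]

publish_closure("I-405 N", "HOV", 33.74, -118.29, "emergency scene")
print(len(closures_ahead("I-405 N", "HOV")))  # -> 1
```

Unlike a Waze-style human report, a record like this carries the lane and side, and arrives the moment the truck stops rather than after a passing driver bothers to report it.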

Conclusions

Stopped vehicles continue to be a problem for systems based only on camera plus radar. LIDAR easily solves this problem. Maps can also assist greatly, by telling the car where fixed objects will produce radar returns that look like stopped vehicles. Tesla avoids both technologies. They believe they will produce computer vision systems, using their new hardware, that can calculate how far away everything in a camera image is by understanding the image. At present this is not reliable.
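The map-assist idea works as a whitelist: a stationary radar return that matches a mapped fixed reflector is clutter; one that doesn't match deserves attention as a possible stopped vehicle. A minimal sketch, where the map contents, coordinate scheme, and matching radius are all illustrative assumptions:

```python
# Sketch: using a map of known fixed radar reflectors (bridges, signs,
# guardrail joints) to separate clutter from possible stopped vehicles.

# Known reflectors as (distance_along_road_m, lateral_offset_m) pairs:
KNOWN_REFLECTORS = {(1200.0, 3.5), (1450.0, -3.2)}

def is_mapped_clutter(pos, radius_m=2.0):
    """True if a stationary return lies close to a mapped fixed reflector."""
    x, y = pos
    return any((x - mx) ** 2 + (y - my) ** 2 <= radius_m ** 2
               for mx, my in KNOWN_REFLECTORS)

print(is_mapped_clutter((1200.5, 3.4)))  # -> True: a mapped overhead sign
print(is_mapped_clutter((1300.0, 0.0)))  # -> False: investigate as a vehicle
```

A stationary return in the middle of the lane with no mapped explanation is precisely the case where braking (or at least strong caution) is warranted.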

This is not the first Autopilot crash in this precise situation -- you're following a car which veers away, revealing something stopped ahead on the road. Tesla knows about this problem and is not making enough progress. In addition to the obvious step of improving perception, Tesla could decide to treat "the car I am following suddenly veered away" as a special caution situation. It should not accelerate so quickly in that situation. It could pay more attention to radar returns from stopped objects in that situation. It could be skeptical of conclusions that the road ahead is suddenly clear when traffic is actually thick. It might even leave immediate acceleration decisions to the driver.
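The suggested caution mode can be sketched as a small policy function. The state names, acceleration values, and decision order are my own assumptions about how such a rule might look, not Tesla's behavior:

```python
# Sketch of a "lead car suddenly veered" caution mode for TACC: a newly
# "clear" lane is treated with suspicion before resuming speed.

def target_accel(lane_clear, lead_veered_recently, traffic_dense):
    """Pick an acceleration (m/s^2) policy for a newly clear lane."""
    if lane_clear and lead_veered_recently:
        return 0.0   # hold speed; re-check sensors for a revealed obstacle
    if lane_clear and traffic_dense:
        return 0.5   # creep up gently; a truly open lane is implausible here
    if lane_clear:
        return 1.5   # normal resume toward the set speed
    return -1.0      # follow or brake behind the lead vehicle

print(target_accel(True, True, True))   # -> 0.0: caution, not acceleration
```

Under a rule like this, the crash sequence -- lead car veers, lane reads as clear, car accelerates into the truck -- would at least have unfolded at constant speed, buying time for perception to catch up.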

Their Full Self Driving system will have better perception and more computing power. But it won't have better sensors, and they say it won't have maps. They have said they will soon have a driver-monitored full self-drive system (rumored to be in their version 10 software release). They have also said it won't need (in a technical, not regulatory, sense) that driver monitoring next year. The signs don't point that way.


An earlier version of this article incorrectly dated the accident in 2019.
