OK, so we can finally stop whaling on Tesla after this post. The reason they're so prominent is that they've made themselves that way. The frontrunner gets the glory but also the blame, and I'm sure Elon Musk was very familiar with that by the time he founded Tesla. The company's ability to remain positive, focused, and well-liked in turbulent times shows that there are some very experienced PR people on their end. Since their cases are the foundation of what has become policy and practice in the industry, they get a lot more attention than most. We may see that change as more AVs get on the road, but for now both confirmed AV driver fatalities, as well as the suspected one in China, were in Teslas. They also have by far the greatest number of publicized road collisions. Naturally, they'd be the focus of 3 of the 4 parts so far, plus the special.
Just as a note, however: in California alone there have been 76 AV collision reports, most of them filed by GM. California is the only state with mandatory reporting for AVs, and the reports include a fair number of cases where the AV was hit while stationary, so there isn't much to look at in terms of policy implications, apart from the very useful data set it makes for those who want to cast AV developers in a favorable light.
Getting on to 2017 and 2018, then. 2017 was a relatively quiet year for Tesla. Despite the lawsuit, Tesla got good press throughout the year. NHTSA gave its SUV the maximum score on all tests, including the rollover test, which many SUVs don't do well on. The 2017 NHTSA guidelines came out – which I dealt with in the previous post – and they were very industry-friendly overall.
So, 2017 was looking pretty good for them. Then came 2018 and several more high-profile collisions.
The first was in January 2018 in California and resulted in no injuries. The car slammed into the back of a fire truck that was parked in the left-hand shoulder of a freeway while attending another accident; the car was traveling at 65 mph and damaged the rear of the truck, which was fortunately only occupied in front. The NTSB investigation is reportedly ongoing, though a search of the NTSB site turns up no documents. As such, it cannot be confirmed which version of Autopilot was in use during this crash. It appears, however, that whatever version it was, it had yet to solve the persistent technical issues behind the 2016 crashes covered in previous posts on this blog.
The next came in April. An Apple engineer named Walter Huang was driving his Tesla down a busy highway in the HOV lane, which was the second lane from the left. The car began to follow a car ahead into the far-left exit lane, but stopped in the middle of the gore area dividing the HOV lane from the exit lane, where the ramp splits off from the highway at a gap headed by a crash attenuator. To make it easy to visualize, I've gotten this Street View image from Google.
The viewpoint in this picture is of a car in the exit lane. What happened was that the Tesla made it as far as the area delineated by the two solid lines and then remained there. It accelerated from 62 to around 70 mph and then crashed into the attenuator (the yellow object at the end of the concrete divider). The attenuator had been damaged in a collision the month before, and so provided less cushion than it was supposed to; it was, in fact, almost completely crushed already. The Tesla was ripped in half and struck two other cars before coming to a halt. The battery burned for several minutes, and Huang survived long enough to be put in an ambulance but died at the local hospital he was brought to.
The Autopilot had provided several hands-off warnings during the trip, but none in the 15 minutes prior to the collision. Huang’s hands were detected on the wheel for 34 of the 60 seconds leading up to the crash. For 6 seconds, however, just before the collision, his hands were not detected.
Before we continue with the events as they unfolded, I'll note that the preliminary report from the NTSB doesn't show what the Tesla was 'thinking' during the period in which it started to accelerate. I can take a few guesses based on my experience with path planning and sensor fusion. First, as is well documented, Autopilot has difficulty registering stationary objects ahead of the car at highway speed. We can therefore assume that the attenuator wasn't even a factor in its decision tree. It either didn't see it, or assumed that it was something the driver would take care of – as noted in earlier posts, when it comes to stationary objects, the manual tells drivers that it's their responsibility. Fine, then the question becomes: why did it remain in the gore rather than continuing to follow the lead vehicle?
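Before getting to that, a quick aside on the stationary-object point, since it matters for everything that follows. Below is a toy version of the kind of clutter filtering I mean: a highway radar pipeline that throws away returns which appear stationary relative to the ground, so that it doesn't brake for overhead signs, guardrails, and parked cars on the shoulder. To be clear, this is not Tesla's code; the class, the threshold, and the filter rule are my own assumptions about a generic stack.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RadarReturn:
    range_m: float            # distance to the detected object, in meters
    closing_speed_mps: float  # rate at which the gap is closing, in m/s

def moving_targets(returns: List[RadarReturn], ego_speed_mps: float,
                   tolerance_mps: float = 2.0) -> List[RadarReturn]:
    """Keep only returns that look like moving vehicles.

    A stationary object closes at exactly the ego vehicle's speed, so any
    return whose closing speed is within tolerance of ego speed is dropped
    as presumed roadside clutter (signs, guardrails, parked cars).
    """
    return [r for r in returns
            if abs(r.closing_speed_mps - ego_speed_mps) > tolerance_mps]

ego = 31.0  # roughly 70 mph, in m/s
returns = [
    RadarReturn(range_m=80.0, closing_speed_mps=5.0),   # slower car ahead: kept
    RadarReturn(range_m=60.0, closing_speed_mps=31.0),  # fixed attenuator: dropped
]
print(moving_targets(returns, ego))  # only the moving car survives the filter
```

With a filter like that in place, a fixed barrier dead ahead simply never makes it into the planner's world model, which is consistent with the car accelerating toward the attenuator rather than braking.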
As for why it stayed in the gore: it probably thought it was where it should be. It detected an open lane ahead and had been told to move over only one lane. As soon as I saw that stretch of road, I said to myself, 'if I or most of the people I know had programmed an algorithm for this, it would probably mistake that gore for a lane.' It's really simple: looking at the image, you can see that the righthand white line is so heavily weathered that it looks vaguely like a dashed line, while the lefthand line is in relatively good shape. To any machine algorithm, that reads as a leftmost lane with a shoulder delineated by a solid white line. It wouldn't have followed the other car because it had already moved over once, into what it perceived to be a lane.
But what if it did recognize both lines as solid? Then it might have stayed because it was programmed to treat crossing a solid line as illegal. It had no option but to stay put, because it believed it was on a stretch of road where it couldn't move over. This is similar to the first scenario in that, again, it can't go left over a solid line, but this time it can't go right for the same reason either.
Either way, it believed it was in a lane and since the lines on both sides pointed straight ahead, it had no reason to believe that the ‘lane’ terminated in a wall. That’s just not part of the consideration on a freeway.
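Here's a minimal sketch of both scenarios. Again, none of this is Tesla's code: the PaintedLine class, the paint-coverage threshold, and the 'never cross a solid line' rule are my own assumptions about how a generic lane-keeping stack might reason about painted lines.

```python
from dataclasses import dataclass

@dataclass
class PaintedLine:
    # Fraction of the line's length where paint is actually visible:
    # ~1.0 for a crisp solid line, ~0.5 for a dashed (or badly worn) one.
    paint_coverage: float

def classify(line: PaintedLine) -> str:
    """Crude solid/dashed call based on how continuous the paint looks."""
    return "solid" if line.paint_coverage > 0.8 else "dashed"

def may_cross(line: PaintedLine) -> bool:
    """Treat solid lines as uncrossable; dashed lines are ordinary lane dividers."""
    return classify(line) == "dashed"

# Scenario 1: the gore's righthand line is so worn it reads as dashed, while
# the lefthand line is crisp. To the planner this looks like a normal leftmost
# lane bounded by a solid edge line, so moving into it is legal and staying
# there is "correct".
left_line = PaintedLine(paint_coverage=0.95)   # crisp, reads as solid
right_line = PaintedLine(paint_coverage=0.55)  # worn, misread as dashed
print(classify(left_line), classify(right_line))  # -> solid dashed

# Scenario 2: both lines are read as solid. Now the planner believes it may
# cross neither boundary, so it stays centered between them -- again, inside
# the gore -- until the lines end.
right_line_crisp = PaintedLine(paint_coverage=0.9)
print(may_cross(left_line), may_cross(right_line_crisp))  # -> False False
```

Either path through that logic ends with the car treating the gore as a perfectly ordinary place to drive.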
Moving on to what happened next. There was the usual burst of news coverage, which likely would have died down if not for two big developments. First, Mr. Huang's family stepped forward to say that he'd had issues with that stretch of road in the past. They said he'd been to Tesla's service center several times and that the car would try to drive towards the barrier 70% of the time he used Autopilot there. Tesla responded that they had no record of the complaint and that they average 200 successful Autopilot trips on that stretch of road every day. Tesla owners online seemed to mostly side with the company, wondering why he'd even use Autopilot there if he had such persistent issues.
The news stories in major outlets were usually either neutral or positive with respect to Tesla and Autopilot, though even the positive ones sometimes noted that Tesla’s stock had fallen 15% after the crash. It has more than recovered in the intervening months.
Second, many of the news stories a week after the crash appeared because Tesla had released data on the crash, including the facts that the car had been in Autopilot (at the time this was only conjecture in the media), that Huang had been given several warnings prior to the collision, and that he had not been holding the wheel. These are the facts I cite above from the NTSB report rather than from Tesla's releases, and they rightly should have come from the NTSB first. It's understandable that Tesla would want to mitigate while consumer confidence was shaken, but they went further than releasing their own data: they suggested that the crash was Huang's fault for not paying attention, and that its severity was partly due to the damaged crash attenuator.

Preempting the NTSB by publicly releasing crash data is a big problem for investigators, and releasing conclusions is almost unheard-of. It undermines public trust in the investigation, because some people will wonder why the NTSB takes so long, and in Tesla, because they're an interested party. A conclusion suggesting no wrongdoing by the company was likely anyway, so they only look like they're trying to cover for something by getting in before the Feds. This is borne out by the preliminary report from the NTSB which I cited earlier: yes, Huang was given warnings, and yes, his hands weren't on the wheel in the 6 seconds before the collision. However, those warnings came more than 15 minutes before the crash, not in the seconds immediately preceding it.
The NTSB explained all of this themselves two weeks later when they revoked Tesla's party status in the investigation. Tesla, they said, had violated the party agreement by releasing the data. They also noted that they expected Tesla to continue assisting by providing data on request (only Tesla can pull the log files off their hardware) and that Tesla still had party status in the other open collision investigations (they mention the January collision, though there appears to be no record of it on their site). Tesla, for its part, claimed that it withdrew from the agreement. The revocation of party status is a big deal, though, because it's a rift that may pose difficulties in the next collision. What if Tesla refuses to provide the information? Unlikely, but possible. Also, with them out of the picture, the NTSB may release a more negative final report, since Tesla won't be able to mitigate by providing details during the process.
Not long after Tesla published its blog post claiming that that stretch of road was safe for Autopilot, other Tesla drivers posted videos of their own cars drifting towards medians like the one Huang had hit, in one case towards the very same crash attenuator. There are some good discussions of radar in the reddit comments for the second video. The behavior is consistent with what I'd expect from modern path planning algorithms. The car sees the lefthand line disappear and believes it has drifted too far right, or that the lane is curving. It tries to find a lefthand line, does so, and that places it right inside a crash zone. Or it tries to cross a zone like that, finds itself between two solid lines, and believes it is required to stay there until the lines go away. It doesn't detect the barrier because the relative velocities are wrong for anything it's programmed to notice, or the sensors simply don't see the barrier for a similar reason. (A toy sketch of this boundary-hunting behavior follows below.) Between the issues with AP1 and AP2, I'd say your best bet is to remain in center lanes whenever possible.

Tesla may have tried to paint an unrealistically positive picture here, but they're right that all current AVs require supervision. They're not very open with examples of why, which might be the wrong approach: widespread seatbelt use was achieved in part through PR campaigns showing what happens when you don't buckle up. It's a narrow path, though. Tesla doesn't want to prime their customers with the belief that Autopilot can't do something now, because it will almost certainly get past that issue later.
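Coming back to the drifting behavior, here's the toy sketch I mentioned of how a lane-centering controller can reacquire the wrong left boundary. The numbers, function names, and logic are illustrative assumptions about a generic stack, not Tesla's code.

```python
from typing import List, Optional

def pick_left_boundary(lateral_offsets_m: List[float]) -> Optional[float]:
    """Pick the nearest detected line to the car's left (negative offsets)."""
    left = [x for x in lateral_offsets_m if x < 0]
    return max(left) if left else None  # the closest line on the left side

def lane_center(left_offset_m: float, right_offset_m: float) -> float:
    """Target lateral position: midway between the perceived boundaries."""
    return (left_offset_m + right_offset_m) / 2.0

# Normal highway driving: lines detected at -1.8 m and +1.8 m, so the target
# is 0.0, i.e. dead center in the lane.
print(lane_center(-1.8, 1.8))  # -> 0.0

# Approaching the gore: the original left line has faded out, and the next
# line the camera picks up is the exit ramp's edge line, several meters
# further left. Recentering between that line and the old right line pulls
# the target well to the left -- straight into the gore.
new_left = pick_left_boundary([-7.0, 1.8])
print(lane_center(new_left, 1.8))  # -> -2.6, past the old lane boundary
```

Nothing in that logic is exotic; it's exactly what you'd want on a lane that widens or gently curves, which is why the center lanes, where the lines are usually crisp and symmetric, are the safer place to let these systems operate.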
Believing that Autopilot was at least partly at fault, based on Huang's reports to the service center (alleged, since I've not seen confirmation), Huang's family filed a wrongful death lawsuit. It is still ongoing and has not developed significantly, I expect because all parties are waiting for the final NTSB report.
In May, a Tesla accelerated into a fire truck while its owner was looking at her phone and not holding the wheel. The owner broke her foot; there were no other injuries.
The Huang fatality was different, though. It was different from all the other crashes that have made the headlines, and not just because other owners showed that their own Autopilots might do the same. It was different mainly because of who was at the wheel. The 'operator error' label wears thin when the operator is a software engineer at Apple who had previously been a Senior Software Engineer at EA. This is someone who, according to his family, knew the limitations of Autopilot in the specific location where it crashed, and who had experience with software design. Though he may not have known the technical ins and outs of machine learning and path planning, he likely knew enough about software in general to know what kind of system Autopilot was. The fact that he appeared complacent enough to continue using it the way he did suggests that there was something else at work. It is very hard to say 'he didn't understand' or 'he should have known to pay attention' under these circumstances. We may never know why, but he trusted his Autopilot enough to leave it on and let go of the wheel in a location he knew was dangerous.
His trust, and the fact that Autopilot's safeguards appear too easy to circumvent or ignore, is an ongoing issue for Tesla. The company's aim is to introduce as few intrusive measures as possible, keeping the Autopilot system flexible on the assumption that the driver is paying attention. By making it possible to use Autopilot anywhere, and by actively opposing additional sensors to track driver engagement, Tesla is also following its customers' desires. The 'nag factor' is low in a Tesla because the customer base agrees with management – even to the point of trying to get past the nags already in place. Tesla drivers are ready to take the risk. If they weren't, they'd be driving Cadillacs.
Speaking of which, we'll be covering Cadillac's system in the next post, and then it's on to part 5 of this series, where we look at another high-profile collision of 2018 (there sure have been a lot!), this one unique for being the only recorded pedestrian fatality involving an AV, rather than a driver fatality.
But before I go, one interesting note. NHTSA put out a cease-and-desist order on the Autopilot Buddy. This is interesting because the DOT in 2016 – as I covered – claimed not to have the regulatory tools to cover AVs. In this case, they justify their authority by calling the device after-market hardware, and justify their decision by saying it disables a safety feature – a safety feature that they don't have the authority to regulate. I'm sure the company that makes these items doesn't have the pockets for a court case, which is a shame, because I'd love to see how that argument played out in an extended battle. Can NHTSA regulate hardware? Yes. Can they regulate hardware whose sole intent is to alter the functionality of non-covered software? I don't know. After all, if Tesla itself disabled the safety features and went back to hands-free, attention-free Autopilot, could NHTSA or NTSB order them to put it back? If not, can they really claim to have authority here? As long as Congress remains gridlocked on giving them the tools – and the administration remains disinclined to ask for them – it's a distinctly gray area.