Wrapping up (for now) the series on autonomous vehicle collisions, we have the incident I’ve spent the most time studying and thinking about, because it’s had such far-reaching implications. It was such a big story, with so many twists and turns, that I’ve decided it would be better to release it in sections, so here’s section 1.
The final post in this series (until something else happens) will come as no surprise to anyone with even a cursory interest in AVs: the pedestrian fatality in Tempe, the only confirmed pedestrian death involving an AV. I’m going to skip ahead briefly to the NTSB’s preliminary report to present the facts as they appeared on the night of the accident, and then go back to the beginning of coverage and discussion. By going week by week, and sometimes day by day, I’ll walk through how the tragic death of a pedestrian turned into a whirlwind of change for one of the most prominent AV developers – Uber.
It was about 10 at night in March of this year when a woman was walking her bicycle across Mill Avenue in Tempe, Arizona. Ignoring signs directing pedestrians to the crosswalk 100 meters away, she crossed in a stretch between the streetlights. An autonomous Uber with a test operator was driving a preplanned test loop along the Tempe streets and had already completed one full circuit. The car had been in autonomous mode for 19 minutes when it struck the pedestrian, killing her. There was minimal damage to the Uber and no harm to the test driver. One of the outward-facing cameras captured the moments before the collision, and an inward-facing camera shows the operator looking down frequently at something before the crash occurred. When the police arrived, the driver said she had been monitoring the AV’s interface – a requirement of the job, so that drivers can flag the car’s mistakes and the engineers can improve the algorithm.
Uber AVs have significant sensor coverage, including a top-mounted LIDAR. The car’s algorithm cycled through several classifications of what it saw in the road: another car, an unknown object, and finally a bicycle. This is relatively normal; as the car gets closer, the sensor image becomes clearer, and classification becomes more accurate. At 1.9 seconds before collision, the car determined a need for emergency braking. But Uber had disabled emergency braking in autonomous mode as a nuisance and had implemented no alert system, so the driver was never warned, and the car did not have time to initiate the emergency evasion maneuvers it has the authority to engage on its own. The car struck the pedestrian at 39 mph, and the operator braked only after hitting her, although she did try to pull the wheel over less than a second before impact.
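To make the sequence concrete, the decision flow described above can be sketched as a simple loop. This is purely illustrative – the function names, class labels, thresholds, and structure are my own invention, not Uber’s actual code – but it shows why disabling emergency braking without adding an alert leaves the 1.9-second window with no action at all:

```python
# Hypothetical sketch of the decision flow described in the NTSB preliminary
# report. All names and thresholds here are illustrative assumptions.

EMERGENCY_BRAKING_ENABLED = False   # disabled in autonomous mode as a nuisance
OPERATOR_ALERT_IMPLEMENTED = False  # no alert system had been built

def perception_update(seconds_to_impact):
    """Classification firms up as the object gets closer to the sensors."""
    if seconds_to_impact > 5.0:
        return "vehicle"          # first guess at long range
    if seconds_to_impact > 2.5:
        return "unknown object"   # reclassified at mid range
    return "bicycle"              # final classification at close range

def decide(seconds_to_impact):
    obj = perception_update(seconds_to_impact)
    if seconds_to_impact <= 1.9:  # point at which braking was deemed necessary
        if EMERGENCY_BRAKING_ENABLED:
            return "emergency brake"
        if OPERATOR_ALERT_IMPLEMENTED:
            return "alert operator"
        return "no action"        # what actually happened
    return f"track {obj}"
```

With both fallbacks switched off, the only paths left terminate in "no action" – the system effectively knew and said nothing.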
So, those were the facts as they appeared to be on the day. As the story unfolded, other developments changed the picture, but this is what would have been known if all the data could have been extracted from the car that night. Before I go on, however, I should make clear what kind of system the Uber AV is. I’ve spent several posts now on Tesla Autopilot, which sits solidly in level 2 of autonomy, where the human driver is still absolutely necessary at all times. Uber’s system is level 3 – in theory – so the driver is expected to engage in emergencies and unexpected situations but is otherwise not involved in the activity of driving. As such, the driver has to pay attention but rarely actually does anything. In theory. However, taking braking (or at least emergency braking) away from the AV without adding an alert arguably kicks it back down to a very high level 2, since a key function has not been handed over to the AV system.
Now to the timeline.
The fatality occurred the night of March 18th. By the 19th, the articles were already surprisingly vehement – in contrast to the articles written about Tesla collisions. Some of this might be because it was a pedestrian who was killed, and some may be because Uber has a rather different reputation in business. In the last couple of weeks the shine has come off Elon Musk, but four months ago Tesla was very well thought of and its leadership pretty well respected. Uber, on the other hand, was respected technically, but in the wider media had taken a beating from reports of harassment and discrimination in the workplace, bans in multiple countries and municipalities over allegations of unfair trade practices, and disputes over whether Uber drivers were being fairly compensated for their work. Uber was a high-tech company running on private investment with a stupendously high burn rate. It moved fast, hard, and aggressively to stay ahead, and it often needed to attract more investment to keep going. Its highest overhead was its drivers, and it was going into AVs in a big way to try to get ahead of Tesla, Google, and the big automakers. In the new economy, where owning a vehicle would no longer be necessary, Uber wanted to be the go-to app to call a car. That said, Uber had gotten some good press in the past after a collision in Tempe that flipped one of its cars over. In that crash, the other vehicle was at fault, and it spawned at least one opinion piece in Wired praising AVs and opining that we needed them as soon as possible to prevent more accidents like the one the Uber AV had been involved in.
Unlike with Tesla, many of the articles that came out in the aftermath of this fatality focused on regulation as much as on technology and the facts of the case. Rightly or wrongly, the media narrative had gone poorly for Uber in the past, and having played the villain in previous stories meant it was in for mixed press at best, even before much was known about the circumstances of the incident. The day after the collision, the New York Times published an article that included a section on Arizona regulation and the reaction of the Phoenix government. TechCrunch included a quote from the California DMV – a subtle dig, given their shared history. Uber had previously been testing in California but moved to Arizona when it was ordered to stop testing on city streets without a permit. Quoting the regulatory agency that had censured and blocked Uber in the past sent a message: Uber was going to have to prove it hadn’t acted recklessly and that California hadn’t been right to push it out.
On the same day these articles were being published, a joint letter from a variety of consumer watchdogs, disability advocacy organizations, and prominent advocates was sent to the committees considering the AV START Act. The Act is a Senate bill introduced in September of 2017 that passed unanimously through the Commerce Committee in October. The House version – the SELF DRIVE Act – had already passed when this committee held its hearing. The letter, sent in March of this year, urged the committees and the Senate in general to delay further consideration until the NTSB had completed its investigation in Tempe. It went on to ask for changes to the Act to tighten the rules on exemptions and require minimum design and reporting standards. The letter also pointed out that the AV START Act would preempt State regulation upon passage, even though the Act’s own regulations would take time to draw up and implement, leaving AVs effectively unregulated for however long it took the Department of Transportation to promulgate its rules. The swiftness of this letter showed just how large the incident loomed in the public consciousness. It wasn’t just Uber that was in trouble; it was the entire AV industry.
However, it might also have been a hasty rewrite, since on March 14, four days before the crash, five US Senators led by Dianne Feinstein had written a letter explaining why they were voting against the Act. That vote meant the bill would not be fast-tracked through the Senate and would instead need debate before it could be passed, and it likely presaged an attempt to amend the Act to increase oversight and regulatory powers. This was only to be expected; Senator Feinstein has expressed her conservatism (with respect to AVs) many times over the last couple of years. It was already unlikely that AV START would pass easily, and with the Uber crash and the Takata airbag recall happening at the same time, it became likely that it would not pass for months, if ever.
This might seem like a setback for the industry, but given that many States (including Arizona) have quite permissive regulatory structures, it’s hard to say how much of one it really is. The current situation puts responsibility more in the hands of industry, so if something goes wrong, they’re the ones on the hook. Some States – like California – regulate AVs more than others, but the fact remains that most State governments are hands-off. The flip side is that should the DOT be given the tools to regulate, it’s unclear what it would do with them. The current administration is difficult to predict, and industry may be hoping the Act will pass later, when they can be more certain of what degree of regulation they’ll get.
Less than two days after the collision, Stanford posted an interview with a law professor about the liability and regulations surrounding the incident. He noted that
“…if the safety driver failed to exercise reasonable care in avoiding the accident, Uber would be responsible for the driver’s negligence. If the automated features of the AV failed to note the presence of the victim, the manufacturer of the vehicle, as well as Uber, could be held responsible under product liability principles.”
He went on to point out that the pedestrian might share partial fault if they crossed in an unsafe manner.
The heat was turning up, and Uber did the best thing it could in the situation: it suspended testing on all public roads in the four cities it had rolled out to so far. This likely helped its image – as well as giving the engineers at Uber Advanced Technologies Group (ATG) the chance to go over the incident with a fine-tooth comb without more data piling up. On the other hand, more data might not have mattered, since the algorithm engineers were unlikely to be the department about to get a lot more work. It is a well-known problem in machine learning that the negative set – examples of what can go wrong – is usually nowhere near as large as the positive set. That is to say, examples of what to do right are many times more numerous than examples of what to do wrong. In that way, humans and AIs are very similar. It is very likely that this was not the first time an AV – or even an Uber – had been in this situation, and given the facts so far, this was already shaping up to be not so much an algorithmic issue as a human-robot interface issue. After all, the car knew something was up – albeit only a second or so before the collision – but there was no way for it to warn the driver, despite the driver being the one expected to do something about it.
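To put a number on how lopsided that training data can be, here is a minimal, purely illustrative sketch – the counts are invented, not drawn from Uber’s fleet – showing why piling up more routine driving data does little to teach a model about rare failure cases:

```python
# Purely illustrative: invented counts showing how rare failure cases are
# relative to routine driving examples in a training set.
routine_frames = 10_000_000   # ordinary, uneventful driving: abundant
failure_frames = 50           # near-misses and collisions: vanishingly rare

imbalance = routine_frames / failure_frames
print(f"{imbalance:,.0f} routine examples per failure example")

# A naive model that always predicts "routine" would still score:
accuracy = routine_frames / (routine_frames + failure_frames)
print(f"naive accuracy: {accuracy:.6f}")  # ~0.999995, yet it misses every failure
```

Under these made-up numbers, collecting another million routine frames barely moves the needle on the failure class – which is why the fix here looked less like more training data and more like better human-robot interface design.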
Tune in next time when the AV industry looks at one of their beleaguered colleagues suffering a situation that might shatter trust in everything they’ve worked years to develop … and says “who, us?”