Welcome back to the “driverless vehicle gets into an accident, then what happens?” discussion. For our second example, let’s look at how an autonomous vehicle can make the news for an accident even when it’s completely stopped.
Google/Waymo cars have been hit at intersections often enough, but that hasn’t made the news in quite a long time. Instead, we’ll be looking at the Hop On shuttle, a Navya bus sponsored by AAA, the Regional Transportation Commission of Southern Nevada (RTC), and Keolis North America, a company that provides rail and bus services in the US. Rides are free and the route is a small loop: three blocks one way, turn, one block, three blocks the other way, turn, and then back to the start. The vehicle is an autonomous bus with no pedals or steering wheel, but it does have an emergency stop and a joystick inside, as well as interior cameras for remote operation. One of the fun features is that the bus is symmetrical, so it doesn’t ‘back up’ so much as just reverse which direction it considers ‘forward.’
A fun, free demo with minimal risk, although there is actually no ‘driver,’ only an attendant who can stop the vehicle in emergencies and do limited piloting. The attendant is mainly there for those emergencies and for cases where the bus hits a situation its software hasn’t been set up to deal with, like a broken traffic light. I mentioned the issue of deploying a completely driverless vehicle in the previous post, though in the context of unrestrained city and highway driving. In the case of an 8-intersection loop running only during the daytime, it’s unlikely that there would be any serious issues, especially since the algorithms are mainly optimized for highly controlled and limited situations like this one. I believe that most of these buses are running as shuttles on private property.
What occurred was that on the shuttle’s first day of operation (11/8/17), a truck was backing up and didn’t see the bus. The bus had traffic behind it, so it couldn’t back up to get out of the way. (I should mention, however, that witnesses say there was more than 5 meters of space behind the bus, so it likely could have backed up but was prevented from doing so by a safety precaution that limits how close it can get to other cars. These kinds of ‘boxes’ are quite common, and are a result of understandable caution on the part of engineers, stemming from sensors not always getting distances quite right.) The bus even blew its horn and got no response! The truck hit the bus, denting its fender, but there were no injuries or other harm. At no point during any of this did the bus move. Thus, it was ruled the fault of the truck driver (who was even cited for a misdemeanor), and if this were not an AV, the collision would have been just another day in the city.
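To make the ‘box’ idea concrete, here is a minimal sketch of that kind of keep-out check, assuming a planner that subtracts a sensor-error allowance from the measured gap before deciding whether reversing is allowed. The function, margin, and distances are hypothetical illustrations, not anything from Navya’s actual software.

```python
# Hypothetical keep-out "box" check; all numbers are illustrative assumptions.
SENSOR_MARGIN_M = 1.5   # assumed allowance for rear-sensor range error
MIN_CLEARANCE_M = 3.0   # assumed minimum gap the planner insists on keeping

def may_reverse(measured_rear_gap_m: float) -> bool:
    """Allow reversing only if the worst-case gap still clears the keep-out box."""
    worst_case_gap = measured_rear_gap_m - SENSOR_MARGIN_M
    return worst_case_gap > MIN_CLEARANCE_M

# A reported gap just over 5 m passes with these numbers, but a slightly larger
# box or margin would reject it and leave the bus where it was.
print(may_reverse(5.2))   # True here; False if MIN_CLEARANCE_M were 4.0
```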
It was an AV, however, and so journalists at major news outlets produced headlines like the Washington Post’s “A driverless bus got into a crash during its first day on the job” and the Guardian’s “Self-driving bus involved in crash less than two hours after Las Vegas launch”. In both cases the articles themselves were entirely factual, but the comments on them suggested that quite a few people read no further than the headline. The incident also resulted in an NTSB investigation. I have tried to find out what the result was, but have been unable to find a reference to it on the NTSB web page.
While for the most part there was no result apart from outrage-inducing headlines, there is a good question that was brought up during this process and then lost in the usual fight over whether AVs are good things or not. It’s that, as I have noted before, there are no well-defined and widely adopted standards in the industry. Was 20 feet (the distance between the AV and the car behind it) enough room to back up, or was the bus right to decide it wasn’t? Was endangering the occupants by inaction the right thing, or should it have taken the small risk of acting and backed up, accepting the possibility that its sensors were wrong and the car behind it was closer? Jim McPherson, a lawyer who has set up a firm to deal with AV law, brings up the excellent point that the bus avoided liability by not moving, but in doing the perfectly legal thing it may not have done the safe thing.
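That trade-off can be framed, very roughly, as a comparison of expected harm. The sketch below does exactly that with invented probabilities and severities; it is not how Navya or any vendor actually scores decisions, just a way to see how staying put can be the legally safer but not the physically safer choice.

```python
# Toy expected-harm comparison for the inaction-vs-action question.
# All probabilities and severities below are invented for illustration.

def expected_harm(p_collision: float, severity: float) -> float:
    """Crude risk score: chance of a collision times how bad it would be."""
    return p_collision * severity

# Staying put: the approaching truck may or may not stop in time.
stay_put = expected_harm(p_collision=0.3, severity=2.0)   # assumed low-speed fender hit

# Backing up: small chance the rear sensors under-reported the gap behind the bus.
back_up = expected_harm(p_collision=0.05, severity=1.0)

print("stay put:", stay_put, "back up:", back_up)
# With these made-up numbers backing up carries less expected harm, yet staying
# put is the choice that carries no legal liability -- which is McPherson's point.
```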
The difficulty for AV companies is real; the Google car from example 1 got into an accident shortly after they implemented a function that mimics human drivers but has nothing to do with classical collision avoidance or the rules of the road. The current wisdom is to protect the occupant of the vehicle and to follow the law. The first is because everything outside the vehicle is seen through fallible sensors, so the only human the car knows for certain exists is the one inside, and given a choice it will always act to preserve that person. Following the law perfectly, meanwhile, may not save the most lives and money, but it means that the makers can never be held legally liable. It may not seem very nice, but they’d argue that they have to protect themselves, and if lawful behavior isn’t for the best, then change the law; don’t expect an axiomatic system like a machine intelligence to bend the law, because if it does it once it’ll do it again, perhaps in less beneficial circumstances. Better to err on the side of over-caution and have larger safe-distance boxes than to possibly run into something or someone. Sensor ghosts are real problems, and sometimes they aren’t even ghosts, as we’ll see in the next two or three examples.
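If you wanted to write that ‘current wisdom’ down, it would look something like a strict priority order: discard anything illegal, then pick the legal option that is safest for the occupant, and when nothing qualifies, stop. The sketch below assumes exactly that structure; the maneuvers and risk numbers are hypothetical.

```python
# Minimal sketch of a law-first, occupant-second decision policy.
# Maneuvers and risk scores are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    legal: bool
    occupant_risk: float   # assumed scale: 0.0 (safe) to 1.0 (dangerous)

def choose(options: list[Maneuver]) -> Maneuver:
    legal_options = [m for m in options if m.legal]           # follow the law, always
    if not legal_options:
        return Maneuver("full stop", legal=True, occupant_risk=0.0)  # default: do nothing
    return min(legal_options, key=lambda m: m.occupant_risk)  # then protect the occupant

options = [
    Maneuver("hold position", legal=True, occupant_risk=0.3),
    Maneuver("reverse 2 m", legal=True, occupant_risk=0.1),
    Maneuver("mount the curb", legal=False, occupant_risk=0.05),
]
print(choose(options).name)   # "reverse 2 m": the illegal option loses even at lowest risk
```

The point of the strict ordering is exactly the argument above: the machine never bends the law, even when the unlawful maneuver happens to score as the safest one.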