Autonomous Car Accidents and Policy Implications – Part 3

Welcome back to the series on how accidents affect the environment around autonomous vehicles and what has changed over time.  I’ve been walking through these examples in order of how severe the reaction was, so now it’s time to look at fatalities and injuries – specifically, Tesla Autopilot fatalities and injuries.  The first fatality in the US is likely why the NTSB and NHTSA take autonomous collisions so seriously that they opened an investigation into the bus from example 2 – the one whose only fault was being too cautious to back out of the way of the truck that hit it while it sat stationary.  The next couple of accidents are, unfortunately, the opposite.

The Tesla Autopilot has had an outsize share of the injuries and deaths in the industry, but that’s because it also has the largest share of use.  I should also note that this series is specifically looking at the reactions to and policy implications of collisions.  Even as reports of collisions, fatal and non-fatal, were being published, so too were articles about Autopilot preventing a collision (illustrating the safety of AVs perfectly: the AV managed to avoid hitting the car in front of it, while the car behind the AV couldn’t stop in time) and about an Autopilot system preventing the death of a pedestrian.  I include these to show that AVs do get good press even in emergencies, and that in the same period in which AVs were causing harm, they were also doing good – and have continued to do so in subsequent years.

Before we go back to May 2016 and the first US fatality, let’s go back to January of 2016 and look at the first known fatality involving an AV – also a Tesla.

The situation was this: A young man, who reportedly had read the Autopilot manual multiple times, is driving down the highway in northeastern Hebei in the left lane.  A street sweeper truck is moving in the opposite direction – I’m not familiar with how they operate, but it appears that the sweepers are there to clean the shoulder, which requires them to drive against the lane’s traffic while protruding about a third of the way into the left lane of the highway.  As such, drivers in the left lane have to watch for them and move into the right lane when they see them coming.  The car in front of the Tesla does this, but the Tesla doesn’t, and strikes the sweeper – smashing the car and killing the driver.  All of this is captured on a dashcam.

There is circumstantial evidence that the car was in Autopilot, but Tesla said that there was too much damage to retrieve the logs.  The father of the driver and owner of the car has been in litigation ever since.  Multiple third parties support the assertion that the car was in Autopilot, and from the perspective of how this plays out legally and in the media, let’s assume it was.  The manual did state at the time that Autopilot struggles with stationary objects when traveling at speed (and we can assume it would have the same problem with an object traveling toward the car).  In fact, the two non-fatal collisions whose reports I mentioned in the second paragraph both occurred when a Tesla hit a car that was stopped on the left shoulder but protruding into the left driving lane.

As a programmer, my guess is that – given that the Tesla in question was a Hardware 1 (HW1) model – an object like a car only a third of the way into the left lane would not be picked up by the forward camera and radar.  It’s hard to work out the precise angles without doing a lot of geometry, but eyeballing it, the range of the forward sensors looks almost good enough to pick up these protrusions; they might clip the very edge of the field of view briefly and end up being classed as a glitch rather than an object at the far periphery.  Of course, by the time the ultrasonic sensors saw what was going on, it would be too late at 50-70 mph.  If you look at where the cameras and sensors on the Hardware 2 version can see and assume that HW1 was no better, then it is quite likely that objects on the side of the road were simply not in range of any sensor until the car was very close.  HW2 has improved this greatly, but there remain blind spots and errors, as we’ll see in later examples.
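
To make that eyeballing slightly more concrete, here is a back-of-the-envelope sketch in Python.  The field-of-view and range numbers are assumptions for illustration, not Tesla’s published HW1 specifications; the point is only that an object protruding into the lane sits near the edge of a narrow forward cone, drops out of it shortly before impact, and leaves the short-range ultrasonics with only a fraction of a second at highway speed.

```python
import math

# Illustrative figures only -- assumptions, not Tesla's actual HW1 sensor specs.
NARROW_FORWARD_HALF_FOV_DEG = 10.0   # assumed half-angle of the long-range forward cone
ULTRASONIC_RANGE_M = 5.0             # assumed usable ultrasonic range

def bearing_deg(lateral_offset_m: float, distance_ahead_m: float) -> float:
    """Angle off the car's centerline to an object ahead and to the side."""
    return math.degrees(math.atan2(lateral_offset_m, distance_ahead_m))

def time_to_cover(distance_m: float, speed_mph: float) -> float:
    """Seconds until the car covers the given distance at the given speed."""
    return distance_m / (speed_mph * 0.44704)   # mph -> m/s

# A sweeper protruding into the lane, roughly 2.5 m off the car's centerline:
for dist in (60, 30, 15, 8):
    b = bearing_deg(2.5, dist)
    inside = abs(b) <= NARROW_FORWARD_HALF_FOV_DEG
    print(f"{dist:>3} m ahead: bearing {b:4.1f} deg, inside narrow forward cone: {inside}")

# By the time the ultrasonics could see it, there is almost no time left at highway speed:
print(f"time left at ultrasonic range, 60 mph: {time_to_cover(ULTRASONIC_RANGE_M, 60):.2f} s")
```

Under these made-up numbers the protrusion leaves the narrow cone around 14 m out – about half a second before impact at 60 mph – which is roughly the “clips the edge briefly, then it’s too late” behavior described above.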

This means that Tesla is correct that in most of these cases the system is operating exactly as designed.  As mentioned in example 2, that’s enough to cover liability, but it isn’t enough to satisfy the public if things go wrong – and fatalities are at the extreme end of the wrongness spectrum.  The only thing worse than a driver fatality, from the perspective of the public, is a pedestrian fatality.  But let’s leave the sole known incident of that for a later example.  In China, the lawsuit continues, as do the investigations.  We may never know exactly what happened, and I admit to being unfamiliar with the policy intricacies of the country.  Let’s instead look at the first US fatality in an AV – and, at least until the Chinese investigation is complete, the first confirmed case of a fatality in an AV while it was operating as one.

Williston, Florida.  A former Naval ordnance disposal technician, who had previously published videos of himself with his hands off the wheel while in Autopilot and who had reported that Autopilot saved him from accidents, is driving at a reported 74 mph down US-27A (for the record, 9 mph above the speed limit – not a lot, but worth mentioning).  A tractor-trailer truck proceeding in the opposite direction makes a left turn across the lane the Tesla is in.  The Tesla does not stop; it goes under the trailer – shearing off most of the car’s roof – and then hits a lamp post.

From a technical perspective, this was a totally different case.  Instead of an obstacle being outside the sensors’ range or beyond the code’s ability to handle, the crash was the result of a false negative and a situation outside the parameters of the system.  It works like this: Most systems have multiple sensors that can observe the same area.  In the case of a Hardware 1 Tesla, those were the camera and the radar.  In other cars – including HW2 Teslas – the sensor suite is different, but we’ll stick to the hardware involved in this example.

The difficulty was that the radar got a return off the trailer – that is, the signal bounced back to the radar, telling it an object was there.  The camera, however, saw nothing.  The truck was white and there was a lot of glare, causing the truck to appear to be part of the background.  Any chunks that weren’t white were too small to be considered obstacles and were discounted by the system as dirt, small objects, or nothing at all.  Since the camera reported no object and the radar did, the system erred in favor of the camera.  Moreover, the radar’s return showed an object higher off the road than most cars, so the system at most read it as a low highway sign.  Thus the car continued as if nothing was there, because the sensors’ combined result appeared to be something too far up to trigger collision-avoidance behavior.  In the aftermath, Tesla announced that it would rely more on radar returns than on the camera for collision avoidance and re-tune the algorithm to make it less likely to mistake high trailers for overhead signs.
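
As a rough illustration of the decision described above, here is a toy fusion rule in Python.  The thresholds, field names, and the `radar_primary` switch are my own inventions standing in for the announced change – a sketch of the general logic, not Tesla’s actual code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadarReturn:
    range_m: float
    height_m: float        # estimated height of the reflection above the road

@dataclass
class CameraDetection:
    label: str             # e.g. "vehicle", "sign", "background"
    confidence: float

# Illustrative values only -- not taken from Tesla's system.
CAMERA_CONFIDENCE_MIN = 0.5
OVERHEAD_CUTOFF_RETUNED_M = 3.0   # stands in for "less likely to mistake trailers for signs"

def should_brake(radar: Optional[RadarReturn],
                 camera: Optional[CameraDetection],
                 radar_primary: bool = False) -> bool:
    """Camera-primary fusion (roughly the pre-update behaviour described above)
    versus a radar-weighted variant (roughly what the announced change implies)."""
    camera_confirms = (camera is not None
                       and camera.label == "vehicle"
                       and camera.confidence >= CAMERA_CONFIDENCE_MIN)
    if camera_confirms:
        return True
    if radar is None or not radar_primary:
        # Pre-update sketch: the camera is the tie-breaker, so without camera
        # confirmation the high radar return is discounted as an overhead sign.
        return False
    # Post-update sketch: rely more on the radar, with a re-tuned cutoff so that
    # trailer-height returns no longer read as overhead signs.
    return radar.height_m < OVERHEAD_CUTOFF_RETUNED_M

# The Williston scenario as described: a high return, and a camera that sees only glare.
trailer = RadarReturn(range_m=80.0, height_m=2.3)
print(should_brake(trailer, camera=None))                       # False: read as a sign, keep going
print(should_brake(trailer, camera=None, radar_primary=True))   # True: the re-weighted rule brakes
```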

There were, of course, headlines and news reports with the expected degree of condescending skepticism that greets every new technology.  We can take that as read whenever something goes wrong.  For the most part, the media’s general approval of anything Elon Musk does has protected him, and this situation was no different.  The modern heroic tech billionaire is an entirely different article, and one I might write in the future.  Despite speculation that the fatal collision might derail progress, Tesla maintained its image and released an update that September, four months after the crash, which made the Autopilot system lock out a driver who wasn’t paying enough attention to the road.  This update was almost certainly a reaction to what Tesla already knew from its logs and what regulators were no doubt going to comment on.

What was happening was this: You’re not supposed to let go of the wheel while Autopilot is engaged.  Especially in 2016, Autopilot was very experimental, and even now the manual warns drivers to keep their eyes on the road and hands on the wheel.  The car is programmed to check for this periodically and tell the driver to put their hands back on the wheel.  In v7 – what the driver in this collision was using – all that happens is that a warning comes up on the screen and then the speakers let out a low beep until you put your hands back on the wheel.  Thus there was no penalty for the driver in this case, only a minor annoyance that could be removed by jiggling the wheel every so often.  It’s debatable whether the v8 update was enough, given that jiggling the wheel was still how the vehicle registered engagement, but we’ll cover driver-engagement tech later.  This won’t be the last time a driver’s lack of attention contributed to a collision.
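
Here’s a toy version of that escalation logic, to make the v7-versus-v8 difference concrete.  The timings, the strike count, and the idea that a wheel jiggle resets everything are assumptions chosen to mirror the behaviour described above, not Tesla’s actual parameters.

```python
from enum import Enum, auto

class Nag(Enum):
    NONE = auto()
    VISUAL = auto()     # warning on the instrument cluster
    CHIME = auto()      # low beep from the speakers
    LOCKOUT = auto()    # v8-style: Autopilot refuses to re-engage for the rest of the drive

# Illustrative timings and limits -- assumptions, not Tesla's actual values.
VISUAL_AFTER_S = 60.0
CHIME_AFTER_S = 75.0
LOCKOUT_AFTER_IGNORED_NAGS = 3

class HandsOnMonitor:
    def __init__(self, lockout_enabled: bool):
        self.lockout_enabled = lockout_enabled   # False behaves like v7, True like v8
        self.hands_off_s = 0.0
        self.ignored_nags = 0

    def update(self, dt_s: float, wheel_torque_detected: bool) -> Nag:
        """Call once per control-loop tick; returns the warning state to present."""
        if self.lockout_enabled and self.ignored_nags >= LOCKOUT_AFTER_IGNORED_NAGS:
            return Nag.LOCKOUT
        if wheel_torque_detected:
            if self.hands_off_s >= CHIME_AFTER_S:
                self.ignored_nags += 1   # driver waited for the chime before responding
            self.hands_off_s = 0.0       # a jiggle of the wheel is all it takes to reset
            return Nag.NONE
        self.hands_off_s += dt_s
        if self.hands_off_s >= CHIME_AFTER_S:
            return Nag.CHIME
        if self.hands_off_s >= VISUAL_AFTER_S:
            return Nag.VISUAL
        return Nag.NONE
```

In the v7-style configuration (`lockout_enabled=False`), the worst the driver ever sees is the chime; in the v8-style configuration, repeatedly waiting for the chime eventually disables the system for the rest of the drive.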

Coincidentally, that September NHTSA and DOT issued new policy guidance with respect to AVs.  It’s important to note that guidance is about all they can do without federal legislative action, which is unlikely in the current political climate.  However, the guidance included model bills for States, suggesting legislative actions each State could take while federal action is pending.  It noted where current regulatory tools could be applied and what new regulatory powers the agency would need from Congress to most effectively monitor and regulate AVs.  This document was likely already in the draft phase and merely expedited by the furor over the fatality in Florida.  It is unlikely, however, that the content was materially affected, given the timeframe.  I say this because driver-engagement monitoring was mentioned in only two places, and in both cases the guidance advised automakers to consider whether such monitoring was appropriate rather than stating outright that it was.  The overall theme of the document was recommendations, voluntary information sharing, and visions of how future regulation would look.

It should be noted that most State laws as of 2016 fell into three broad categories: encouraging development, defining and investigating regulatory needs, and truck-platooning legislation.  The only State in 2016 with serious regulation was California, which had enacted its requirement that the Highway Patrol regulate AVs in 2012 and had only tweaked the laws since then.  Most laws were favorable, many allowed AVs to operate without a driver, and Pennsylvania allocated money to fund the development of AVs.  There was no spate of increased regulatory burden after the collision, and if there was pressure to impose one, it paled in comparison to the incentives to leave AVs alone or even encourage them.  As for the model bill guidance from NHTSA, the introduction includes this:

DOT strongly encourages States to allow DOT alone to regulate the performance of HAV technology and vehicles. If a State does pursue HAV performance-related regulations, that State should consult with NHTSA and base its efforts on the Vehicle Performance Guidance provided in this Policy.

In other words, “We’re going to get the regulatory tools eventually, so you’re better off regulating the way we’re going to.”  Which is sensible, since federal regulations usually override State ones, so it only creates instability and expense for States to disagree and then have to comply with the new rules once they’re enforced.  I’ll cover regulation itself in more depth later, but I can’t help noting that one of the regulatory tools NHTSA was asking Congress for was the ability to increase the number of exemption vehicles, currently capped by statute at 2,500 per automaker per year.  This is because the algorithms we currently use for machine learning require a colossal number of examples and a tremendous amount of data to function well, so the more vehicles collecting information, the better they all run.  We’ll see how some consumers were less than pleased about this in the next example.

In January of 2017, NHTSA put out a report.  Despite the headline “Tesla’s Self-Driving System Cleared in Deadly Crash”, the report was an investigation of Tesla’s Autopilot as a whole.  While the 2016 crash was included as part of their data, NHTSA was looking specifically for defects in the Autopilot system.  They found none; in every case, even those resulting in collisions, the system functioned as designed.  NHTSA even noted that Autopilot had decreased crash rates in Tesla cars by 40%.  As we’ve seen above, NHTSA is very limited in what it can do right now because it doesn’t have the mandate to oversee the specifics of AV technology, only to make sure that it works as intended.  As such, their conclusion was that Tesla was in the clear, though they did note that improved consumer awareness of how the system worked was always helpful.

In September of 2017 – a year after the update was rolled out and 16 months after the crash – the NTSB issued a report and suggestions for regulatory guidance.  The report found, in essence, that the driver was at fault.  Records indicated that the driver held the steering wheel for approximately 25 seconds, over the course of 7 intervals, during the 40 minutes before the crash.  The earlier NHTSA study found that the driver should have had at least 7 seconds before the crash to take action, but the brake was never pressed.  NTSB also noted that Autopilot at that speed is only meant for limited-access highways with ramps, not roads with cross streets.  Again, the same refrain: the system worked as designed.  No defect.  The subtext: operator error.  A distracted driver using a system that wasn’t meant for the highway he was on, and a truck driver who failed to yield the right of way.

That’s what the conclusions seemed to say.  There were some interesting sentences in the analysis as well.

“monitoring steering wheel torque provides a poor surrogate means of determining the automated vehicle driver’s degree of engagement with the driving task.”

In other words: Tesla’s method of telling whether a person is paying attention to the road – making them jiggle the steering wheel – is a poor proxy and ought to be addressed.

Toyota has deployed an eye-tracking system on its Lexus brand vehicles, and Volvo has announced plans to use eye-tracking technology in its Driver State Estimation system. The Driver Attention System on the 2018 Cadillac CT6 Super Cruise vehicle uses a small camera located at the top of the steering column; the camera focuses exclusively on the driver and uses infrared light to track the driver’s head position.

The message: Other carmakers have done it, so why haven’t you?

The hands-off warning interval has been under consideration elsewhere in the world, particularly in Europe. The United Nations Economic Commission for Europe adopted a new regulation pertaining to hands-off warning time in lane-keeping systems. This regulation, which is expected to go into effect in the first quarter of 2018, would require lane-keeping systems to provide an initial visual warning after 15 seconds of hands-off driving and deactivate in a controlled manner after 1 minute of hands-off driving.

Tesla did fix this one voluntarily before the report came out, so this passage is aimed more at lawmakers and NHTSA than at Tesla.

If automated vehicle control systems do not automatically restrict their own operation to those conditions for which they were designed and are appropriate, the risk of driver misuse remains.

A shot across the bow to everyone: create better ways to make sure AV systems are only used where appropriate.  Autopilot, for example, wasn’t supposed to be used on a highway with crossroads.  The simple fix: have Autopilot check the GPS and make sure that the car is on a road where it can safely operate.  If there is anything a State or local government is likely to do, it will be to institute ‘no-AV’ roads – or ‘AV only’ lanes once these cars become widespread.  Like speed limits and limits on where trucks can use their engine brakes, municipalities will identify stretches of road where it is infeasible to use an AV below a certain SAE level.  (For those of you who don’t recall the very brief mention in an earlier article, the SAE levels of autonomy are what NHTSA uses to class the capabilities of AV systems.  Most consumer vehicles, Teslas included, are Level 2: limited steering and throttle control, with the driver required to be engaged at all times.  Level 5 is where the car is assumed to be at least as capable of driving itself as a human driver.)
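
A hypothetical sketch of that “simple fix”: before engaging, look up the current road segment by GPS and refuse to engage unless the road class falls inside the system’s operational design domain.  The road classes, the map lookup, and the per-segment SAE-level tag are stand-ins for whatever a real HD-map or navigation database would provide.

```python
from dataclasses import dataclass

# Hypothetical road classes a map provider might expose -- stand-ins, not a real schema.
ALLOWED_ROAD_CLASSES = {"limited_access_divided"}   # interstates with ramps, no cross traffic

@dataclass
class RoadSegment:
    name: str
    road_class: str                 # e.g. "limited_access_divided", "divided_with_cross_traffic"
    min_sae_level_for_av: int = 0   # a municipality could tag 'no-AV' stretches this way

def lookup_road(lat: float, lon: float) -> RoadSegment:
    """Stand-in for querying an HD map or navigation database by GPS position."""
    # Hard-coded example: a divided highway that still has cross streets.
    return RoadSegment(name="US-27A", road_class="divided_with_cross_traffic")

def may_engage_autopilot(lat: float, lon: float, system_sae_level: int = 2) -> bool:
    segment = lookup_road(lat, lon)
    if segment.road_class not in ALLOWED_ROAD_CLASSES:
        return False    # outside the conditions the system was designed for
    if system_sae_level < segment.min_sae_level_for_av:
        return False    # local rule: this stretch requires a more capable system
    return True

print(may_engage_autopilot(29.39, -82.45))   # False: cross traffic puts it outside the ODD
```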

Then came the recommendations.

To DOT and NHTSA, they said that the parameters for data collection needed to be set.  The upshot: at the moment, if regulators want the data from an AV, they need to get it from the manufacturer.  Most AVs have a form of black-box log of what the vehicle was doing, but only the company that made it can access it.
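
To give a sense of what “setting the parameters for data collection” might mean in practice, here is a hypothetical, regulator-readable event record.  The field names are my own guesses at the kinds of parameters an investigator would want defined in a common format; nothing here reflects an actual manufacturer’s log schema.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AVEventRecord:
    """One entry in a hypothetical standardized AV 'black box' log."""
    timestamp_utc: str
    automation_engaged: bool
    speed_mph: float
    steering_torque_detected: bool
    brake_applied: bool
    forward_obstacle_detected: bool
    active_warning: str          # e.g. "none", "visual", "audible"

# Illustrative values only.
record = AVEventRecord(
    timestamp_utc="2016-01-01T00:00:00Z",
    automation_engaged=True,
    speed_mph=74.0,
    steering_torque_detected=False,
    brake_applied=False,
    forward_obstacle_detected=False,
    active_warning="none",
)

# A plain, documented format that any investigator could parse without the manufacturer's tools.
print(json.dumps(asdict(record)))
```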

To NHTSA in particular, they recommended guidelines for making the systems more restrictive about when they can be used: “…incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed.”  So rather than relying on each owner to know the limits of their vehicle’s abilities, the vehicle itself will only operate where it’s supposed to.

Then, to automakers and their representatives, they recommended robust driver-engagement monitoring, the system safeguards mentioned to NHTSA, and industry-wide advocacy for those safeguards.

While human-computer interaction standards have been evolving for decades, there is no industry-wide practice or regulation saying how an AV system is supposed to interact with riders, and in any situation where the AV system is used in a context it isn’t meant for – like using a driving system designed only for interstates on a highway with crossroads – it is currently the driver’s responsibility to know the limitations of the vehicle.  How much of this knowledge burden is acceptable is still up for debate.  According to an internal study, 20% of questions asked on the Subaru help line are answered in the manual.  It’s a constant headache for engineers to make systems that are understandable and robust against abuse, and even more so for safety-critical systems like cars.

Even if the legislation giving them the authority to regulate came through tomorrow, it would take years to find the balance.  Too many messages are distracting and annoying.  Having the car refuse to go into autonomous mode could cause the very distraction it’s trying to avoid.  Filling a car with cameras to make sure people are watching the road is a big privacy risk, and the NHTSA guidelines even acknowledge this.  On the flip side, not everyone is going to pay attention to the instructions.  People will drive badly even when they aren’t driving!  Until AVs can take over completely, there will be risks to mitigate.  How we do that is going to be a very long and difficult negotiation, both internally and between carmakers and regulators.  A fatality always brings attention, and at least in this case there was no direct contraction or expansion in progress on AVs – just some analysis suggesting a way forward while regulators wait for legislatures to catch up.

Join me next time for consumer blowback and the answer to a very important question: Have improved software and interfaces helped combat distracted Autopilot drivers?  If you’ve been watching the news lately, you know the answer already!

 
