Autonomous Car Accidents and Policy Implications – Part 1

Autonomous vehicles are a hot item these days.  When I started graduate school, examples using autonomous cars were always the go-to in business coursework and in a lot of robotics courses.  It makes sense.  Up until very recently, robots were indoor and industrial, or else low-autonomy pets.  Autonomy and the algorithms that make it possible just weren’t ready for the hectic, difficult environments of field robotics (field robotics, per the Field Robotics Center at Carnegie Mellon, is “… the use of mobile robots in field environments such as work sites and natural terrain, where the robots must safeguard themselves while performing non-repetitive tasks and objective sensing as well as self-navigation in random or dynamic environments.”).

Autonomous vehicles are by no means new to the robotics industry.  We’ve been putting them out there for the military for over a decade.  What has changed in the last five years is sensors, algorithms, and processing power.  It’s now feasible to fuse multiple sensors’ data into a single environmental view and then process that view to classify the features in it, so algorithms can say ‘this is a stop sign’ or ‘this is a pedestrian.’
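To make that concrete, here’s a minimal, hypothetical sketch of the fuse-then-classify idea in Python.  Every name in it is illustrative – no vendor’s actual API looks like this – but the shape is the same: raw data from several sensors gets merged into one view of the world, and a classifier turns that view into labeled objects a planner can act on.

```python
# A minimal sketch of the fuse-then-classify pipeline described above.
# Class and function names are illustrative, not any real vendor's API.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "stop_sign", "pedestrian"
    confidence: float  # classifier score in [0, 1]
    position: tuple    # (x, y) in the vehicle's frame, in meters

def fuse(lidar_points, camera_frame, radar_tracks):
    """Merge raw sensor data into one environmental view.

    A real system time-aligns and spatially registers each sensor;
    here we simply bundle them together for the classifier.
    """
    return {"lidar": lidar_points, "camera": camera_frame, "radar": radar_tracks}

def classify(world_view):
    """Stand-in for a trained perception model (returns canned output)."""
    return [
        Detection("stop_sign", 0.97, (12.0, 3.5)),
        Detection("pedestrian", 0.88, (8.0, -1.2)),
    ]

# The planner acts on labeled detections, never on raw sensor data:
view = fuse(lidar_points=[], camera_frame=None, radar_tracks=[])
for det in classify(view):
    print(f"{det.label} at {det.position} (p={det.confidence:.2f})")
```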

Now every startup and their brother wants in, because the reckoning is that every car will be autonomous in a few years and that’s a BIG market to capture.  Not just a big market, but a lot of work!  It may seem as if having dozens of startups and twenty or thirty major companies all competing for the space would mean few winners and a lot of losers, but you only need to come up with one novel piece – one good solution to a single previously intractable problem.  Then you sell it on to the big players and make a lot of money.  That isn’t the dream, but it’s a good goal.  The dream, of course, is to find a general solution and beat the big players by having a system that works well and works all the time.

But I digress.

That’s a quick look at the business and tech side – and more on that as we continue – so let’s have a glance at policy.  To do so, it’s good to go through the news items of the last year or two involving autonomous vehicles and how they’ve been misbehaving.  Good press is important, but bad press is what can turn a utopian dream of the car of the future into another Dymaxion.  I’ll take the examples in order of severity (which, funnily enough, is almost chronological from earliest to latest) and talk about what went wrong and where each one lies on the range from the unstoppable progress of an oil rig to the industry-obliterating end of the Hindenburg.

Example 1: Google Car Fender Bender with a City Bus

In 2016, a public bus collided (video) with a self-driving Google car (this was before that unit was renamed Waymo).  As you can see from the bus dashcam, the bus was in the driving lane and the AV (autonomous vehicle) was hugging the right side of it.  The car moved back into the driving lane and the bus hit it.  The bus was going about 15 mph and the AV about 2 mph, so the damage to the car was minimal.  Tech news hailed this as a crash with “serious implications,” while the company and outside experts felt it would barely hinder development at all.  Both views were ultimately right, and for good reasons.

The “implications” part was true.  It was the first time an AV could be considered to have caused an accident by direct action.  Up until then, most accidents involving AVs were rear-end collisions caused by the AV stopping at a light when the person behind either wasn’t paying enough attention or expected the AV to keep going.  This is because AVs are trained to follow the law very carefully.  There’s a lot of fudge factor in how humans deal with stop signs and traffic lights, but an AV will (inevitable edge cases excepted) always come to a complete stop at stop signs and stop on yellow at traffic lights.
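As a caricature, that letter-of-the-law behavior looks something like the sketch below.  It’s a hypothetical, hand-rolled rule table, not any real planner’s logic, but it shows why tailgating humans get surprised: there simply is no ‘roll the stop sign’ or ‘beat the yellow’ branch.

```python
# Hypothetical letter-of-the-law policy; names are illustrative only.
def control_decision(signal: str) -> str:
    """Choose an action when approaching a traffic control."""
    if signal in ("stop_sign", "yellow", "red"):
        return "full_stop"  # no rolling stops, no beating the light
    return "proceed"        # green light, or no control detected

# A human might coast through any of these; the policy above never does.
for signal in ("stop_sign", "yellow", "green"):
    print(signal, "->", control_decision(signal))
```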

How, then, did this car end up doing what it did?  Fudge factors baked in.  Google had just implemented a new feature.  They had noticed that when making a right turn, humans often pull to the right side of a driving lane to give as much room as possible to people going straight.  This prevents pileups at turns where there isn’t a turn lane.  All well and good so far, and something a lot of us do.  In this case, the AV moved over to the right in preparation, and then its sensors caught sight of an obstruction: sandbags around a storm drain at the corner.  The AV is now stopped while traffic goes around it.  There’s a gap; a bus is coming.  The car assumes it has right-of-way because it’s ahead of the bus.  The driver agreed – all test AVs have a human in the loop, since even the most optimistic company is full of engineers who know what level of autonomy we’re actually at.  There was no distracted human in this case; the driver saw the bus and reckoned it would slow down to let the AV go around the sandbags.  The bus didn’t, and hit the car.  The fault was the car’s, but in a very murky situation.
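Reconstructed loosely in code, the failure is a bad prior rather than a bad sensor.  The sketch below is my own guess at the shape of the decision – the names, numbers, and logic are assumptions for exposition, not Google’s actual planner – but it captures the point: merging was safe only if the bus yielded, and the car treated ‘the bus will yield’ as a certainty.

```python
# Hypothetical reconstruction of the merge decision; all names and
# thresholds here are assumptions for exposition, not Google's planner.
def should_merge(gap_m: float, closing_speed_mps: float,
                 assume_bus_yields: bool = True) -> bool:
    """Decide whether to pull around the sandbags, into the bus's path."""
    if assume_bus_yields:
        return True  # the bad prior: 'I'm ahead, so the bus will slow down'
    # Without that prior, merge only if the gap stays open long enough:
    MERGE_TIME_S = 3.0
    return gap_m / max(closing_speed_mps, 0.1) > MERGE_TIME_S

# Bus at ~15 mph (6.7 m/s), AV crawling at ~2 mph (0.9 m/s):
print(should_merge(gap_m=6.0, closing_speed_mps=5.8))                           # True  -> crash
print(should_merge(gap_m=6.0, closing_speed_mps=5.8, assume_bus_yields=False))  # False -> wait
```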

The policy implications and public outcry were minimal, possibly because Google admitted “some responsibility.”  Consumer Watchdog called for a police report to be filed every time an AV got into an accident, for the technical data and videos to be released after every crash, and for regulations requiring human drivers behind the wheel.  All reasonable demands from a regulatory standpoint, and the industry would have been wise to accede.

To use this as a segue into a quick point before going on to example 2: this is a case of a company doing a good job on PR while the industry falls down.  Regulation isn’t just about protecting citizens from companies (although that’s the primary goal).  It’s about protecting companies from their own mistakes.  Standards and regulations are vitally important to build alongside the technology, since they give industry and engineers something to refer to both as a precaution and in retrospect.  Take the requests above: release of data about crashes, and a driver behind the wheel.  Both are good ideas which most companies already follow voluntarily.  The problem: if it’s not required, what happens when a company decides not to put a human driver in to supervise and intercede during road testing?  Current technological capabilities make that seem highly unlikely, but it only takes one bad actor.

An axiom of regulation is that proactive regulation is always less onerous than reactive regulation.  Take a chemical plant that might one day have a spill: the industry can put in place standards that increase reliability and decrease risk, and lobby for regulations that work with those standards, so that if something does happen there’s a process, and the company can point out that it complied with this and that regulation and standard.  If there are none, then the public calls their representatives and the government puts the strongest possible regulatory burden in place to show the public it’s doing something.  Contrast the fracking industry in New York, which tried to fight regulation and is now effectively banned statewide, with the one in California, which wrote its own regulation – tough rules that not only insulated the industry by showing compliance but also reduced competition by making it more expensive to enter the market.  Buying all that safety gear is a barrier, but in this case a necessary one.

There are regulations in play – and some of the State regs are why Uber is testing in Arizona and Pennsylvania instead of California.  Without a coherent policy at the national level, however, the industry remains vulnerable.  The National Highway Traffic Safety Administration (NHTSA) offers guidelines and technical help to the States, but there is no backing in law for any of it.  Bills in the House and Senate are stalled and unlikely to pass soon.  Standards exist, but remain in an early phase.  As long as there are no comprehensive and/or binding rules, there is little protection should the public turn on the AV industry, and there is little that industry or regulators could do proactively if an AV company in a State with minimal regulation fields a dangerous vehicle.

Next post, I’ll go on to example 2 – the first Tesla fatality – and onward to further examples and a discussion of what regulations are currently in place, which are being proposed, and how they all are affected by technical progress and capabilities.

 
