The Autopilot Defense, AV Ethics, and Why Neither Works

A couple of days ago, an article from Electrek appeared in my feed about a man from the Netherlands who tried to argue that he shouldn’t have been issued two tickets because it was his Tesla Autopilot that was driving at the time. The judge ruled that the level of supervision required to use a Tesla in Autopilot mode meant that the driver was responsible for the car’s behavior. Just as the fatalities in the US (some of which have been covered here) are deemed driver error, so are traffic violations. This is good for Tesla’s liability, and if the stories get enough coverage, maybe the threat of being ticketed for something the Autopilot system did will encourage some more responsible behavior.

Also in the last month or two I’ve been hearing more about AV ethics. The Moral Machine project at MIT set up a series of scenarios, plus the tools to make new ones, and millions of people judged what the AV should do. The premise was similar to the Trolley Problem. The AV detects that its brakes are gone. It can sense the number and type of its passengers as well as what is immediately ahead of it (barriers, people, animals). The people in the car or on the road can be old, young, rich, poor, fit, male, or female in various combinations. There are two lanes, and the question is whether to swerve into the other one.

An illustrative example from the Moral Machine

They then compared attitudes across several behavioral dimensions between the countries that provided enough responses to be statistically significant.

The differences between the US and China, compared with the worldwide average

The ethics problem is an important one, and it’s good that we are looking at it now so that when we’re ready to implement that kind of judgment, we’ll know what is socially acceptable behavior in an emergency, and how that behavior should be adjusted based on locality. However, the publicity this has garnered (it was even a segment on this season’s QI) has had the effect of perpetuating the belief that AVs are more advanced than they are. People see these studies and assume that if it’s being studied, the cars must be in a state where these decisions can be made. It’s a natural assumption. The cars can see, therefore the cars can see what is there.

My contacts in autonomous vehicle design and my knowledge of computer vision tell me otherwise. The only human the AV is absolutely sure exists is the one inside the car. Everything else is a confidence figure produced by an imperfect feature-matching classifier (or some other variant of computer vision algorithm).
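
To make that concrete, here’s a minimal sketch of what “everything else is a confidence figure” means in practice. The names and numbers are mine, not anything from a real AV stack: a detection isn’t “a pedestrian,” it’s “something that matched the pedestrian class with some probability.”

```python
from dataclasses import dataclass

# Illustrative only: these fields and thresholds are my own invention,
# not Tesla's or anyone else's. The point is that a "pedestrian" is
# really "something that matched the pedestrian class with score p."

@dataclass
class Detection:
    label: str         # e.g. "pedestrian", "vehicle", "bike"
    distance_m: float   # estimated range to the object
    confidence: float   # classifier score in [0, 1]

def is_actionable(det: Detection, threshold: float = 0.5) -> bool:
    """Treat a detection as real only once its score clears a threshold."""
    return det.confidence >= threshold

# A distant wall misread as a person, and a real person the car is unsure about:
wall = Detection("pedestrian", 55.0, 1.00)    # confidently wrong
person = Detection("pedestrian", 8.0, 0.30)   # correct, but below threshold

print(is_actionable(wall), is_actionable(person))  # True False
```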

Let’s have a look at some footage from the output of a Tesla Autopilot. I use Tesla because it’s out there now, it’s prevalent, and there’s a decent amount of video that can be used for examples. Any AV will look similar but not the same, especially one with a LIDAR array (which Tesla doesn’t have), which adds a point cloud to the camera and radar returns. Disclaimer: I am applying general knowledge of robotics and autonomy to a limited data source. I know nothing of the specifics of Tesla’s internal logic, and my views are inferences based on knowledge and observation, put together to give you the most likely explanation.

Let’s start with a clip from a video taken on the streets of Paris. We can start anywhere because the big thing I want to talk about is everywhere in the video.

The boxes. Color coded red for vehicles, yellow for pedestrians, green for bikes, and so on. The first word is what the object is, then the estimated distance. Then below that in parentheses is the kicker: the confidence factor. For vehicles the factor is usually high or becomes high very quickly. Pedestrians, not so much. Their boxes fluctuate more, and the vehicle often identifies distant inanimate objects as pedestrians – which is fine, because it’s cautious, and at that distance it’s probably not a big deal at street speeds. However, real pedestrians often have fluctuating boxes that come and go, and their confidence factors can start out very low and never go up. Also, try 1:47, where a piece of building has a pedestrian confidence rating of 100%. Caution is good, but that’s a very high confidence for something completely wrong at 55 meters.

Real pedestrians are often ID’d but never go above 50% confidence even when the car is much closer. Starting at 10:05, there’s a group that crosses right in front of the car, and it takes over a second for their boxes to be consistently yellow and their confidence factors to rise above 50%. Would you want your car to run into a barrier because it was 30% sure there were between 2 and 5 people ahead of it, and that 1-3 of them might be a motorcycle?
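
For a sense of why those flickering boxes matter, here’s a toy illustration, assuming a planner that only trusts a track once its score has held above a threshold for several consecutive frames. That gating rule and its numbers are my own assumptions, not Tesla’s logic, but they show how a group crossing the street can stay effectively invisible for a second or more:

```python
# A toy illustration (not Tesla's logic) of flickering, low-confidence boxes:
# if the planner only "believes" a pedestrian after the score has stayed
# above a threshold for several consecutive frames, a real person can go
# unacknowledged for a second or more.

def frames_until_believed(scores, threshold=0.5, needed_frames=5):
    """Return the frame index at which a track is first trusted, or None."""
    streak = 0
    for i, score in enumerate(scores):
        streak = streak + 1 if score >= threshold else 0
        if streak >= needed_frames:
            return i
    return None

# Per-frame pedestrian scores, fluctuating the way the boxes do in the video:
scores = [0.2, 0.45, 0.3, 0.55, 0.6, 0.4, 0.65, 0.7, 0.72, 0.8, 0.85]
print(frames_until_believed(scores))  # -> 10, i.e. about a second at ~10 Hz
```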

I don’t say this to condemn Tesla; I say it to make it clear that the ethics problem cannot be considered a real-world one right now. The premise requires a degree of technical sophistication that is not yet there. We may have it in the near future, but I strongly doubt that even vehicles as advanced as the Waymo cars can maintain a consistent view with high confidence factors – and they have LIDAR arrays to add another sensor layer. The addition of thermal cameras would likely help even more, though at the cost of slowing the algorithm, which has to fuse yet another sensor type. Surety is pointless if the car can’t “think” in real time.

Based on this degree of uncertainty in what the vehicle can perceive, the current priority I heard murmured regretfully in the halls of the Robotics Institute is this: do everything to save the passenger. When given a choice where only one option may harm a passenger, the AV will always choose the other option. Therefore, in the barrier case above, the AV of today doesn’t care whether there are one or five people in the car, or whether the entire other lane is filled with babies. It will swerve. It will do so because it knows it will save a human life that way.
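
In code, that priority is almost embarrassingly simple. This is a deliberately crude sketch of the “save the passenger first” rule as I’ve described it; the maneuver names and risk numbers are hypothetical, and no real planner is this bare:

```python
# A crude sketch of a passenger-first priority. Fields and values are
# hypothetical; real planners are vastly more complicated than this.

def choose_maneuver(options):
    """Pick a maneuver, never accepting one that risks the occupant."""
    safe_for_passenger = [o for o in options if o["risk_to_passenger"] == 0.0]
    if safe_for_passenger:
        # Among passenger-safe options, minimize harm to everyone else.
        return min(safe_for_passenger, key=lambda o: o["risk_to_others"])
    # Only if every option endangers the passenger does outside risk matter.
    return min(options, key=lambda o: (o["risk_to_passenger"], o["risk_to_others"]))

options = [
    {"name": "stay_in_lane", "risk_to_passenger": 0.9, "risk_to_others": 0.0},
    {"name": "swerve_left",  "risk_to_passenger": 0.0, "risk_to_others": 0.3},
]
# -> swerve_left, regardless of who might be in the other lane
print(choose_maneuver(options)["name"])
```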

It may sound bleak, heartless, even immoral, but it’s sound reasoning. Having an AV kill a pedestrian to save a passenger may be a tragedy, but it’s worse if it kills a passenger to save a bundle of tumbleweed that climbs just over the confidence threshold during the fraction of a second in which it makes its decision.


Bonus footage: here’s some Tesla Autopilot view from highways. I think it’s fun to see how the gore problem (getting trapped in crosshatched zones, which can eventually result in hitting a median) has been coming along.

Also, something fun to look out for in all of these videos is the label on the bottom half of the confidence box. The top, as I noted before, is the ID, type, and confidence rating. The bottom is the lane it believes the object to be in relative to the car, its speed (also relative to the car, which is why so many are negative; that means the car is catching up to them), and the radar return from it. The radar return is the bottom text, which says ‘moving’, ‘stopped’, ‘stationary’, or ‘no rad sig’. It looks like it’s using the Doppler shift from the radar arrays to figure out the motion of nearby objects. It’s interesting that there are both ‘stopped’ and ‘stationary’ ratings. That sounds like there’s some prediction going on, where Autopilot looks more closely at objects it believes will move at some point (‘stopped’) than at ones which will not move at all (‘stationary’).
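
To make my guess explicit, here’s roughly how a Doppler-based labeling scheme could produce those four labels. The thresholds, the ego-motion compensation, and the “was it ever moving” memory are all assumptions on my part, not anything I know about Tesla’s implementation:

```python
# Guesswork made explicit: a toy version of how a radar track's radial
# (Doppler) velocity plus a short memory of past motion could yield the
# 'moving' / 'stopped' / 'stationary' / 'no rad sig' labels on screen.
# Thresholds and label semantics are assumptions, not Tesla's definitions.

def motion_label(radial_speed_mps, has_radar_return, was_ever_moving,
                 noise_floor=0.3):
    # radial_speed_mps: ground-relative speed, i.e. after subtracting the
    # car's own motion (also an assumption about how this would be done).
    if not has_radar_return:
        return "no rad sig"
    if abs(radial_speed_mps) > noise_floor:
        return "moving"
    # Distinguish a car waiting at a light ('stopped', may move again)
    # from something that has never moved, like a parked car or a sign.
    return "stopped" if was_ever_moving else "stationary"

print(motion_label(12.4, True, True))    # moving
print(motion_label(0.0, True, True))     # stopped
print(motion_label(0.0, True, False))    # stationary
print(motion_label(0.0, False, False))   # no rad sig
```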

The first video was taken around September, and I think it’s part of the same Paris series. Line colors appear to be red = can cross line, purple = don’t cross line, yellow = unsure. Let’s see how it’s doing.

I started at 3:52 and in that instance, the gore is almost perfectly captured. There is a very brief period of red/yellow where the car might merge into it, though without access to the algorithms, I couldn’t say whether it would do it or not. There’s another, smaller, gore at 4:07 that is equally well handled, and again at 7:52. There are a few more as the video goes on, but the most important is 13:14, where the car is briefly headed directly at a gore. It labels it purple but crosses anyway.

I’m going out on a limb here and saying that the merge was at the behest of the driver, and to be fair it isn’t on the purple for long. That’s the danger zone if someone looks away and lets the AP handle it, though. It might keep merging right, but also might refuse to cross the purple, stay in the gore, and hit the median. To be honest, I might have considered that a very unlikely edge case except for the incidents I’ve previously covered, one of which was fatal.

The second video was captured and uploaded sometime in the last month. Watch the center screen. It looks like the color codes are different here: green = can cross line, red = don’t cross line, yellow = unsure.

The gore (at 3:27 if it doesn’t start there for you) is yellow at first, but turns red just after the car gets to it and before the crosshatch is reached.

There’s been a definite improvement. In fact, it might be the case that AP has overcome the gore problem completely. On the other hand, all of those lanes were very clearly marked. Apparently Paris has found more resilient paint than Silicon Valley. The most recent video I can find of a car aiming itself at a median is footage I’ve used before, which was from April. Promising. I hope they’ve figured it out.
