Robots and Boundaries Part 3: Abuse

In the prior two sections, we looked at how robots transgress human boundaries. Today, how about turning that around? It’s a mainstay in human-robot interaction (HRI) that robots get bullied. Sometimes it’s a humorous way of testing a robot’s capabilities, as with Boston Dynamics’ designated ‘robot bully.’ Sometimes it’s out in the field, as with the famous case of the mall patrol robot deployed for an HRI paper on how people treat robots acting autonomously. That paper spawned further algorithmic work on finding paths that avoid bullies. Further studies on bullying suggested that robots were bullied because they were human-like or were perceived to be affected by the bullying. At the risk of rehashing the end of the previous part, this raises the question of whether a robot can be considered to be abused, and what the cutoff is for causing physical harm to an entity. We condemn animal abuse because animals can feel pain, but, as with simulated intelligence, at what point are simulated feelings effectively the same as organic ones?

ATR Intelligent Robotics and Communication Laboratories 


For the time being, the better policy question is: should it be – or is it already – an offense to stop a robot from doing its job? Let’s take a delivery crawler: one of those little critters that have taken to roaming the streets of San Francisco bringing people pizzas, or one of those upright bins that trundle down the hallways of particularly swanky hotels carrying room service and extra pillows. What if someone won’t let the delivery bot go by? Eventually the food will get cold and the robot will be missed – presuming it has no way to alert its owner that something has gone wrong in the interim. I think it can be taken as given that stealing from a delivery crawler is theft from whoever bought the contents, just like stealing from a delivery truck, so we can skip that. Besides, the only speculation about theft so far has been by media outlets and the people who made the robots; there have been no known cases of a delivery crawler being robbed by an actual malefactor. In all these cases, interfering with the robot materially injures both the buyer of the food and the owner of the robot, who has probably lost business because the robot was late or the food was missing.


Delivery robot in a hotel (Mel Melcon / Los Angeles Times).  Interesting sidenote: there is a documented case of one of these being delayed by accident when it was knocked sideways by a passing guest and became confused.

That’s too obvious. What about slowing it down? Just delay it two minutes. Or, to take a different type of robot, what about stopping an assembly robot for a couple of minutes, or getting in the way of a robot that patrols building lobbies offering to help visitors find a specific room? I suppose it comes down – for now – to whether the decrease in efficiency can be proven to have cost something. Eventually harassing a robot will probably be some kind of crime, given how many will be out in the world doing useful tasks, but for now – especially if the robot is traversing a public space – I’m not aware of anything that can or will be done to curb such behaviors. For now it’s all down to property damage or some provable loss.

Jump ahead a few years. Delivery crawlers are relatively commonplace, and have been programmed to understand that their cargo is usually less valuable than they are. Thus, if they are threatened with the possibility of theft, they will release the hatch rather than be harmed by attempts at forced entry. This adds a shade to the theft concept, I think. At this point the robots may start being taught how much damage they can take from various implements, making them more likely to surrender to someone with a weapon. Can we then distinguish between armed and unarmed robbery of an autonomous machine? Before, it was clear cut: there was more harm done if the robot was damaged in the theft. Now the threat of violence is weighed into the robot’s behavior, and so that threat may have to be considered as part of the crime. This loops us back to part 1 of this series. In some municipalities, it’s legal to use force to defend and retrieve property. I will be very interested to see the first case of someone claiming that this right extends to delivery robots and arming their robots so they can defend themselves against detected attempts at theft.
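The surrender policy described above can be sketched in a few lines. Everything here is invented for illustration – the damage table, the threat categories, and the function name reflect no real delivery robot’s software – but it shows the core idea: release the cargo whenever losing it is cheaper than the expected damage from resisting.

```python
# Hypothetical sketch of the cargo-surrender policy discussed above.
# All names, categories, and dollar values are invented for illustration.

# Estimated damage (in dollars) the robot expects from a forced entry,
# keyed by what its threat-detection system believes it is facing.
EXPECTED_DAMAGE = {
    "unarmed": 50,      # prying at the hatch by hand
    "blunt_tool": 400,  # crowbar, bat
    "weapon": 2000,     # anything that could destroy the robot
}

def should_release_hatch(cargo_value: float, threat: str) -> bool:
    """Release the cargo when losing it is cheaper than the expected damage."""
    return EXPECTED_DAMAGE.get(threat, 0) > cargo_value
```

Note how the armed/unarmed distinction falls out naturally: a $100 pizza order is surrendered to someone with a weapon but not to someone tugging at the hatch bare-handed, which is exactly why the threat of violence becomes part of the crime.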

This series has been a bit more pie-in-the-sky than my usual, thinking 10+ years ahead rather than 2-3. In my next piece, I’ll get back to the present and near future with a look at delivery crawlers and how they’re being treated by a present-day government.

Robots and Boundaries Part 2: Theft

In the last section of this series, I looked at security robots that might be allowed to physically restrain and assault humans. This crossed the boundary of personal space and personal safety in the name of enforcing the rules of a facility. Today, I’ll look at a different boundary: property. It seems a given that robots shouldn’t commit theft, and yet there are many cases in which they might. Whether by malicious use, poor programming, or misunderstood conditions, a robot could steal anything from fruit to itself – assuming a robot can steal itself.

The situation most obvious, both for its inevitability and its result, is someone telling a robot ‘fetch that’ and indicating something that doesn’t belong to them. That’s not much different from using a long pole to snag the same item, and it will no doubt be treated as such legally – though of course the news will glom on because of the novelty, much as if someone had trained their dog to shoplift. Upping the stakes, let’s say the robot in question is built along the same lines as an autonomous security robot. We saw that – in simulation – one of the functions of these robots is to confiscate dangerous or stolen property. Let’s say someone tells this robot ‘that object is stolen, retrieve it’ when in fact the object is the lawful property of the person who has it, and the person commanding the robot wishes to steal it. The robot will at first demand the object and, when it isn’t forthcoming, escalate per its table of prescribed behaviors until it uses force. Now, it may be predictable that this occurs, so the robot’s operator is still responsible for everything it does. I think it can be assumed that any unlawful behavior a robot is ordered to commit will be charged to whoever gives the command, though the manufacturer will no doubt also take a lot of heat until they find a way to make the robot able to distinguish between a lawful and an unlawful order.

Moving along to inadvertent theft, then. An agricultural robot is picking fruit. It fails to mark the boundary between one farmer’s field and another, and it picks the neighboring farm’s fruit. The owner is puzzled at how much the robot brings back, but doesn’t connect the dots until after the fruit has gone to market and the neighbor calls the police upon finding all their fruit missing. (If it’s a particularly capable robot, in some ways, it might even harvest two different kinds of produce, which would alert its owner much sooner.) When the farmer either confesses or is confronted, they refuse to pay damages because ‘the robot did it.’ A business is responsible for the actions of its employees and equipment while they’re doing their job, so the farmer is forced to pay. However, the farmer turns around and sues the company that made the robot, alleging that the robot was defective. My guess is that the farmer would win that. So far it’s all civil, but the thing is that the robot committed a crime as well. An employer may be financially on the hook, but they can’t be criminally charged unless they ordered the employee to commit the crime, which didn’t happen here. A pet owner can be held liable for the actions of their animal, but this is a purposeful thinking machine. Much as in the case of the guard robot committing assault, there’s no clear answer. It was a mistake or malfunction, but ignorance of the law is rarely accepted as a mitigating factor.

When can a robot be considered responsible for its own actions?

Up until now, machines have been seen to malfunction, be misused, or be defective. The maker of the machine is sometimes at fault, and sometimes the owner. We’re swiftly reaching a point, however, where robots will have a form of intelligence which is opaque to us in much the same way our own intelligence is. We will be able to make educated inferences based on behavior, and we’ll run tests, but we won’t be able to look into the robot’s mind the way we can with other programmed machines of today. Papers are being written on how to reverse engineer neural networks, and for the most part we can still say ‘this algorithm is doing this with this training set.’ It’s going to get harder as AIs become more complex, and eventually we’ll reach a point where our AI systems become irreversibly black-box. There are already articles suggesting that some algorithms living entirely in software have reached this point, citing situations like high-frequency trading. If all we can do is identify a motivation, and we are unable to map out the programmatic sequence that led a robot to take an action, then the robot may as well be alive.

I’m drifting somewhat from the topic, but that’s what any personal, legal, or moral boundary transgression comes down to. Was it intentional and can the transgressor be considered able to understand the concept of transgression? A cat understands that we don’t want them to knock the glass off the table, but they do it anyway. The concept of negative reinforcement to a robot is a fascinating one and something I may revisit in another article on the crossing of a different boundary – personhood.

Bonus example: Let’s say your autonomous car drives off without you and crosses State lines with your recently purchased liquor in the trunk – which is illegal if it goes to, for example, Pennsylvania. Did your car steal itself and the booze, and did it transport alcohol without a permit or are you liable for the misbehavior of your vehicle?

Science fiction concept: Picture if you will a fully automated still where robots gather the ingredients from a small orchard and vegetable garden, put them in the fermenter, and gather up the bottles. Maybe the entire concern is extremely green and is powered using byproducts of the fermentation so there is no artificial power source – just sunlight and rain. Prohibition is reinstated and the still is shut down. A new owner takes over and at some point the system reboots without them knowing about it. Is it bootlegging if the liquor literally makes itself?

Robots and Boundaries Part 1: Security

Most of my posts up until now have been about autonomous vehicles (AVs) because they have been at the front of the news, and thus in the minds of regulators. However, there are a variety of other autonomous systems operating out there, and some of them are doing things that will almost certainly require more oversight than they currently have.

I’m going to focus in this series of articles on how a robot can cross a legal or ethical boundary, whether by accident or by design. Let’s start with a type of robot that was in the news last December – security robots.

Last December, the news broke that a Knightscope security robot was being rented and used by the SPCA in San Francisco to deter homeless people from settling in the area. Knightscope disputed the allegation, claiming that their robot was only being used to help secure the SPCA employees and environment. This didn’t prevent the SF government from threatening a $1000 per day fine if the robot continued to operate on the sidewalk without a permit, nor did it stop headlines like “Robots Are Being Used Against San Francisco’s Homeless.” The vox pop comments on the issue were very strongly against the use of a robot to chase away homeless people even if it helped the employees feel safer and lowered the rate of car break-ins and vandalism. With the backlash growing, and the robot being attacked by people on the street, the SPCA decided to return their K5 unit.

More thoughtful articles about the ethics of surveillance and security robots followed in the wake of these events, and their points are well taken. The Knightscope robot was unarmed and unable to take positive action; it wasn’t even allowed to get closer than a few feet without triggering proximity alerts that made it try to avoid a collision. So why was it such a controversial tool if all it did was trundle around sending video back to the security people at the SPCA?

Well, for one thing, it had recognition algorithms. The main use case presented by Knightscope is detecting whether cars have been on the property for too long, or comparing people against a list of those either allowed on or barred from the property being patrolled. This means it was ‘remembering’ whom it saw wherever it went, creating a database of individuals doing business with, passing by, or otherwise using the space around the SPCA building. Creating a visual library of who goes where is always a tricky thing to do, especially when the SPCA said they were sharing their data with the police department – albeit only when a crime was committed.

Second, it was a symbol of surveillance. Similar to a marked car patrolling the street or a camera on every corner, these robots were flat-out showing that everyone and everything near the building was being watched. It wasn’t necessarily intimidating just because it was a robot – although its size and shape have been compared to a smartphone crossed with a Dalek – but because it was a reminder that someone was watching. Not just watching, but recording and taking notes.


Image Courtesy of Knightscope Gallery

Ultimately, this situation was less about the robot and more about the people. For example: some of the algorithms onboard the robot are also supposed to be used to ‘predict’ when crimes will occur. There’s a wealth of studies and literature on how criminal-justice AIs are biased. The bias arises because they have to be trained by humans or on human data, and so they form the same biases the humans have. Thus a criminal-justice AI will be about as accurate as a human doing the same work, because the conclusions fed into it by humans as a basis for its decision making lead it to similar conclusions in the future. This has been seen in the use of AIs to predict recidivism in recently released prisoners. If you tell the AI to prevent recidivism, you increase the number of wrong decisions, because it will bias toward recommending that people with past convictions – including ones who would never re-offend – be placed on stricter parole. Tell it to decrease false positives, and more people will slip through the net and re-offend. These robots and AIs, then, just expand an existing system, flaws and all.
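The tradeoff described above doesn’t depend on any particular algorithm – it falls out of having a single risk score and a threshold. Here is a toy illustration with fabricated scores and outcomes; the only point is that moving the threshold trades false positives for false negatives, it never eliminates both.

```python
# Toy illustration of the threshold tradeoff in a risk-scoring model.
# Scores and outcomes are fabricated; no real recidivism data is used.

# (risk_score, actually_reoffended)
people = [(0.9, True), (0.8, False), (0.7, True), (0.6, False),
          (0.4, True), (0.3, False), (0.2, False), (0.1, False)]

def confusion(threshold):
    """Count false positives (flagged but harmless) and
    false negatives (missed re-offenders) at a given threshold."""
    fp = sum(1 for score, reoffended in people
             if score >= threshold and not reoffended)
    fn = sum(1 for score, reoffended in people
             if score < threshold and reoffended)
    return fp, fn

strict_fp, strict_fn = confusion(0.75)    # flag few: misses re-offenders
lenient_fp, lenient_fn = confusion(0.25)  # flag many: punishes the harmless
```

With this toy data, the strict threshold yields 1 false positive and 2 false negatives, while the lenient one yields 3 false positives and 0 false negatives – the ‘prevent recidivism’ and ‘decrease false positives’ instructions in the paragraph above are just two ends of this same dial.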

What these examples show us is that people don’t trust robots to enforce the law and provide security in public spaces. Being watched by a robot makes them feel untrusted and afraid of what it might do. The flipside is that robots patrolling private spaces seem to be better regarded – possibly because people expect more surveillance indoors, and possibly because the robots seem to be under control if they’re not roaming the streets. One Knightscope robot that fell into a fountain even got a memorial put up by the people who worked nearby.

Let’s take this a step further: robots that actively enforce rather than watch and catalog. A study is ongoing which puts people in situations where an armed robot asks them to hand over an object like a weapon in order to help maintain peace in a simulated environment. The study so far has shown that the more human-like the robot, the more likely someone is to comply with it, which suggests to me that people don’t trust robots that remind them that they’re robots.

Looking at an existing robot, there’s the Chinese AnBot, similar to those fielded by Knightscope. It can identify fugitives, decide where to patrol using data gathered during prior patrols, and offer a button people can press to signal an emergency – and it is armed. Its weapon is an electroshock probe used in riot control, and it currently requires a human in the loop to authorize its use. Still, it’s only a minor software change away from being able to enforce laws autonomously.
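Just how minor that software change could be is worth spelling out. The sketch below is entirely hypothetical – these names reflect nothing about AnBot’s actual control software – but it shows that a human-in-the-loop requirement can amount to a single gate in the decision path, one flag away from autonomy.

```python
# Hypothetical sketch: the human-in-the-loop requirement as a single gate.
# Invented names and structure; not based on AnBot's real software.

REQUIRE_HUMAN_AUTHORIZATION = True  # the "minor software change" is this flag

def may_discharge_probe(threat_detected: bool, operator_approved: bool) -> bool:
    """Decide whether the electroshock probe may be used."""
    if not threat_detected:
        return False
    if REQUIRE_HUMAN_AUTHORIZATION:
        # Today: a human operator must explicitly approve.
        return operator_approved
    # Flip the flag above, and force becomes fully autonomous.
    return True
```

The policy question is whether regulators will treat that flag as a hard legal boundary or leave it as an internal engineering choice.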

So far, security robots haven’t crossed any major boundaries, and when they crossed the minor boundary of being too intrusive, they were sharply curtailed.  They’ve acted as extensions of a human security guard and fulfilled the same function that a multiplicity of highly visible cameras could.  A robot that’s armed and programmed to use force when its internal algorithm predicts it needs to would cross several.  Ignoring the ethics of a ‘justice machine’, there’s the big question of legality. In many places, security guards are required to pass certifications and be licensed to use force – and strictly on the premises they’ve been hired to guard.  Even restricting ourselves to municipalities where security guards are authorized to use force – since if a human can’t, we can assume a machine can’t either – it’s still a big question. The first robot that physically assaults someone with the intent of upholding civil security will face immense scrutiny. What does it count as?

I should note that I’m relying mostly on US case law because it’s what I’m most familiar with.  Countries with significantly different cultures may have different mores and legal structures.  Still, I think that we can consider the USA to be middle of the road or even somewhat favorable when it comes to methods by which private property is defended.

The Supreme Court of California has ruled against ‘a device that is without discretion’ in home defense.  A section of their ruling reads:

“Allowing persons, at their own risk, to employ deadly mechanical devices imperils the lives of children, firemen and policemen acting within the scope of their employment, and others. Where the actor is present, there is always the possibility he will realize that deadly force is not necessary, but deadly mechanical devices are without mercy or discretion. Such devices ‘are silent instrumentalities of death. They deal death and destruction to the innocent as well as the criminal intruder without the slightest warning. The taking of human life (or infliction of great bodily injury) by such means is brutally savage and inhuman.’”

But these robots do have discretion.  Their algorithms are sophisticated enough to tell one human from another and to know who is a visitor and who has unrestricted access to a property.  So an autonomous guard is not a booby trap.

Is the robot equivalent to a guard dog?  The robot may use force and can distinguish between individuals, though arguably it’s better than a dog at recognizing specific people and acting predictably in repeated scenarios.  In the UK, a guard dog must be tied up or under the control of a handler, and the US has a different law in every State.  The general expectation, however, is that dogs can attack trespassers, but the trespasser may be able to recover damages depending on the municipality and whether signs were posted warning that the dog was there.  I don’t think a robot is analogous enough to a dog, since a robot is capable of clearer communication and more predictable behavior.

So the answer is what you might expect: a robot is a robot.  It might sometimes seem like a booby trap since it’s a mechanical contrivance and sometimes like a guard dog because it roams around and can take independent action.  It’s something unique, however, because of the form its intelligence takes.  It certainly isn’t a human security guard because it lacks flexible discretion.  It does resemble all three, however, in that it only operates in a set space and (presumably) cannot exercise its authority outside of the premises it guards.  Even without specific laws regarding security robots, the case of the SPCA robot clearly shows that governments won’t allow robots to leave their designated zones.

A situation: Someone is in a place of business, looking at a sign on the wall. They’re holding a water bottle in one hand and, feeling a cramp, they stretch their arm. A security robot sees this and correlates the action and object with previous situations in which vandals spray-painted the wall. It doesn’t contextualize; it just sees a human with a cylindrical object reaching up while looking at a wall. It goes over and tells the human to move along. The human is confused and says they’re there for legitimate business. The robot is no longer paying attention to the object, just to the human not following its command. A human security guard would be contextually aware even while approaching and would have realized by now that the visitor wasn’t doing anything wrong. The robot follows a pre-planned script and threatens force. The human might leave, angry and intending to complain or withdraw their business. Or they might stand fast, and the robot – still following the script – escalates from threats to force, pushing the human or using some other means of restraint or forced removal.
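The scripted escalation in the scenario above can be made concrete. This is a hypothetical sketch – the behavior table and names are invented – but it captures the failure mode: once the initial classification is made, nothing in the script ever re-checks context, so non-compliance alone marches the robot toward force.

```python
# Hypothetical sketch of the pre-planned escalation script described above.
# The behavior table is invented; the point is that the script never
# re-classifies the situation once a person is flagged as a threat.

ESCALATION_SCRIPT = ["verbal_warning", "order_to_leave",
                     "threaten_force", "use_force"]

def next_action(step: int, human_complied: bool) -> str:
    if human_complied:
        return "stand_down"
    # No contextual re-check here: non-compliance alone advances the
    # script, and it saturates at the most severe action.
    return ESCALATION_SCRIPT[min(step, len(ESCALATION_SCRIPT) - 1)]
```

A human guard effectively re-runs the classification at every step (‘this is just a visitor stretching’); the script above has no such path back except compliance, which is exactly how a water bottle ends in a use of force.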

It’s a simplified case, but a possible one. The robot would only be looking for behaviors that correlate with criminal activity; here it notes pose, object in hand, location, and that the human is not on its list of permanent building occupants. The robot wrongly concludes criminal intent and works through a pre-planned list of actions based on the human’s responses. Who, then, is liable? The manufacturer, the person who trained the robot, the owner, or someone else? The robot has assaulted someone without reasonable cause. It’s crossed a legal and social boundary based on a misclassification in its algorithm-based decision tree. A human security guard would be fired or disciplined, and the company might have to pay damages. Would the robot face the same, or just be reprogrammed? A robot cannot be penalized the way a human can. In the case of a human guard, punishment is shared between the guard and the company. Is the guard’s portion simply ignored, or does the company face greater punishment since the offender is company-owned equipment?

It is likely that many governments will simply ban security robots from using force as soon as the issue comes up. There’s little demonstrated need, and legislation defining when force is allowed looks mechanistic on paper but often relies on human ‘sense’ to prevail. A robot would almost certainly be programmed to do precisely what is legal, and that often fails to be humane. Additionally, while a robot might technically be ‘well trained’ from a certification standpoint (and once one robot can demonstrate all the training material, they all can), even if robots could demonstrate that they would only use force when appropriate, that probably won’t be enough. The examples above make it clear that robots in the real world aren’t trusted to patrol and deter crime. Governments won’t be willing to allow armed robots in civil situations when there’s no demonstrated need and a high chance of public outcry. Unlike with AVs, the public doesn’t want a robot that tells them what to do and where they can go.

In the public consciousness, robots exist to serve. When they tell us what to do, we get annoyed, and when they use force to make us comply, we’re angry and afraid. Our stories are filled with examples of the creation destroying or enslaving its creator, and that’s what most people will think of when they see an automaton enforcing the law. They’ll feel as if they are being dominated by the robot, even if that robot is doing exactly what another human has instructed it to do. Until people are used to seeing passive security robots, they’re unlikely to accept active ones. Even then, the boundary being crossed – that of a non-human having physical authority over a human – may engender resentment even after we’ve gotten used to seeing surveillance bots patrolling our buildings and streets.

Autonomous Car Accidents and Policy Implications – Part 5.4

In the last three sections of this article, we followed the events of the first autonomous vehicle fatality involving a pedestrian.  We covered how an autonomous Uber struck and killed a pedestrian in Tempe, Arizona, and then how the AV industry decided to join the voices condemning Uber rather than go to their aid.  In the third section, both the governor of Arizona and Uber themselves were attacked repeatedly and their relationships exposed as close, to say the least, with preferential treatment seeming to be the standard practice for Governor Ducey.  We end here with the advent of the preliminary report which I used to lay out the events of the night, and how Uber turned the situation around to the point where they are now back to testing on public roads – albeit with fewer cars and drivers than before.


The preliminary report is where the driver’s culpability started to surface again in media reports.  At first, it was just a hint buried in the report that the local media outlets – always on the lookout for more details about the Tempe crash – found and ran with.  The report’s suggestion wasn’t anything particularly new, just the first official confirmation that the AV driver was responsible for braking – and thus responsible for anything that happened if the car didn’t brake – in an emergency.  It was the first hint, and the beginning of a turn in the investigation that has given Uber a slim chance of salvaging the tragedy and going on with development.

It was the Tempe police department that uncovered that the driver had been streaming Hulu on her phone at the approximate time of collision.  This discovery completely changed the interpretation of what the internal camera had captured.  Instead of looking at the AV interface as the driver had claimed at the time, it appeared that she was watching a show on her phone.  All of a sudden the blame and media attention focused squarely on the driver, especially since the investigations suggested that had the driver braked when the pedestrian became visible, the pedestrian would have made it to the other side of the street before the car could hit her.

Uber grabbed the opportunity.  They stressed their training program, in which operators of their AVs spend three weeks on a closed course learning how to operate the car safely, as well as their strict policy of firing anyone found to be using a phone while driving the car.  This was their opportunity to improve their training and safety culture while placing blame on an individual – taking action without admitting fault.

Slight problem: the internal camera.  One of Uber’s main ways of determining whether a driver was distracted was random spot checks of the internal camera to see what the driver was doing during a test.  In this crash, the internal camera was not only reviewed by Uber, but by hundreds of media outlets, their readers, and authorities.  At no point before the Hulu stream records were obtained was the driver accused of using her phone.  If one driver could make it look like they were monitoring the AV interface when in actuality watching a video on their phone, then how many other drivers were on the road doing exactly the same thing?  Uber wasn’t out of the woods yet, and they evidently decided that they needed to take more drastic action than a safety review.

Less than a month after the Hulu records were published, Uber fired almost all of its AV operators.  Those drivers were put at the head of the line to re-apply for the job of ‘mission specialist‘ – although there were only 55 mission specialist positions open as opposed to the 100+ AV drivers who’d just been laid off.  The plan appeared to be to scale back testing to the closed tracks again, with the mission specialists being better trained and more technically focused than the drivers.  The change would mean focusing more on testing with specific circumstances and issues rather than sending the AV out into the wild to find problems.  The mission specialists would understand their vehicles a lot better, and the cars themselves would be intensively tested in controlled conditions before a limited revival of testing on public roads.

The clock was ticking, though.  Waymo and GM plan on having their AVs on the road and providing taxi services later this year and in 2019 respectively, while Ford is aiming for a 2021 rollout.  Uber was first to market, but the setback here was threatening to lose them that advantage.  Waymo has the backing of Alphabet, and GM is a well known and trusted brand.  If they hit the road and made a big success before Uber had captured market share, Uber might get pushed out completely.  Ever the company best known for fast and decisive action, Uber had its mission specialists on the roads of Pittsburgh less than two weeks after the restructure.  The closed course shakedown appeared successful; the AVs once again had collision avoidance and emergency braking controlled by the computer, and new driver attentiveness monitoring was reportedly integrated with existing systems.  They’d also decided that it was time to focus their own attention: they shut down their autonomous truck division, likely shifting all technical work in that group to their cars.  As of now, Uber appears determined to regain its competitive advantage of being first on the ground and first into people’s hands.  The Tempe fatality is fading from the public’s interest, which is good for Uber’s executives, given all that they have to deal with.  Just in the last couple of weeks, they’ve been facing regulation in New York City and in Spain that would cap the number of cars they’re allowed to have on the streets.  The changes to their operations resulting from being the first AV company with a pedestrian death attributed to their system are likely to continue.

For now, it appears as if the worst thing the industry as a whole faces is federal gridlock and no surety as to what the Department of Transportation will eventually settle on as the regulatory status quo nationwide.  That could easily change as more companies field much larger fleets with the rollout of AV taxis across the country.  One death is a tragedy and a fluke; two or three could swing the pendulum the other way, and given how favorable regulations have been so far, the pendulum could sweep a lot of companies off the board completely.  A recent survey shows declining trust in AVs among consumers, suggesting that the negative news surrounding AVs has had an effect.  The industry will have to work hard to regain that lost ground, and if Uber wants to be first to market, they’ll have the responsibility of working hardest at proving that their technology is beneficial and dependable.  Otherwise companies with better reputations like Waymo, or more trust like GM, will take away that market share, and Uber’s strategy of being first will have been what ends their ambitions of being at the forefront of public AV use.  If consumers aren’t convinced that AVs are necessary and safe, then regulators may decide to tighten restrictions on them rather than face the same kind of backlash Governor Ducey did.

Autonomous Car Accidents and Policy Implications – Part 5.3

In the last two sections of this article, we saw how an autonomous Uber struck and killed a pedestrian in Tempe, Arizona, and then how the AV industry decided to join the voices condemning Uber rather than go to their aid.  Technology does not develop in a vacuum, however, and as the days passed, the governor of Arizona began to get hit almost as badly as Uber.


By the 26th, eight days after the crash and a week after press coverage began, the staunchly pro-industry governor of Arizona, Doug Ducey, had taken all the beating he could stand. He suspended Uber's permission to test its AVs in his State – a somewhat empty gesture, since they'd already halted testing everywhere. The message was that he was looking out for Arizona's citizens, but investigative reporting suggested that he was bracing for the inevitable revelations about his relationship with Uber. Barely two days after the executive order, The Guardian broke the story, evidence in hand, and the news spread to major outlets.

Doug Ducey had been very friendly to Uber during their time in his State. A month after his swearing-in, Governor Ducey met with members of Uber's leadership, and in April of 2015 he and the State legislature legalized ride-sharing throughout the State, overriding any city or town regulations. During the legalization ceremony, Uber even got the schedule changed so that one of their drivers introduced the governor instead of the other way around, though they failed to convince him to wear one of their shirts for the event. In June, Ducey signed an executive order (with help from Uber) making it legal for them to test their AVs on public roads as long as they had a driver, and on university campuses without one. The EO also provided for an oversight committee. Uber wanted a representative on it, but didn't get one. In any case, the committee was made up of eight members, all appointed by Ducey, only one of whom was an expert – a professor of systems engineering at the University of Arizona. The committee met no more than twice and never took any noticeable action.

In 2016, the governor's office reportedly leaned on the Phoenix city council to allow Uber to pick up riders at Sky Harbor airport. The fact that Uber sent out a tweet thanking the governor for something the Phoenix city council did when the plan was implemented suggests there is some truth to this. The following month, Ducey himself tweeted a commercial written for him that praised Uber Eats and provided a link to the service. Uber then began testing its AVs on Arizona's public streets – something it hadn't actually done until then – but chose to maintain publicity only for Pittsburgh, even going so far as to ask that someone 'discreet' in the Arizona highway patrol be informed. There was no public reporting requirement in Arizona law, so Uber was able to do its AV testing without informing anyone, and chose only to tell the governor's office and let them decide who to pass the message to. They also offered office space to Ducey's deputy chief of staff when he visited California that same month.  The Arizona government disputed The Guardian's assertion that nobody had been informed, pointing to an August 2016 article in the Capital Times stating that both Waymo and Uber were testing on Arizona roads. It is true that it's hard to 'keep it quiet' if it's in a newspaper, but it's also true that the governor apparently made little of the 2016 tests in public statements.

December 2016 was when the California DMV made it clear that Uber was not welcome after it tested in San Francisco without a permit. When Uber chose to move all the AVs it had been testing in CA to AZ, Ducey issued a glowing public statement. He followed this up with a January state of the State address which called CA's regulations 'nutty' and promised to repeal even more regulations to make his State friendlier still to AVs. No mention was made that Uber had been testing in AZ for months before its CA fleet was added to the existing AZ fleet.

All of this was good news for Democrats in Arizona, and bad news for Uber. Little of what Uber did was more than the type of behavior one would expect from a high-powered lobbyist, but it was fodder for opponents and competitors in the wake of this high-profile death. One of the core principles of influencing public policy is that the more you pull the pendulum your way, the further it will swing against you if things go wrong. It's always important to plan your fallback positions to be at least as strong as your main thrust of policy influence, and Uber's fallbacks were proving ineffective – especially since Ducey seemed willing to throw Uber under the bus, and Uber did not appear to have planned a counterthrust should their political allies turn on them. Speaking purely from my own opinion now, I'd say they ought to have had a policy document in their back pocket providing tough regulations and a panopticon of oversight, ready to be proposed the moment something like this happened.

The coverage also prompted the Phoenix New Times – an alternative newspaper with several strong editorial opinions on Arizona politics – to abandon its fairly balanced coverage in favor of a piece slamming the governor. Leading with what appears to be a picture of a car with a goblin head on it, the article harshly criticizes the governor, and some of its zeal spills over onto the AV industry. It points out – somewhat disingenuously – that the fatality rate of AVs is currently higher than that of conventionally piloted vehicles. It is true that, as they quote from a professor in AZ, conventional cars go about 100 million miles per fatality versus the AVs' current rate of roughly 10 million. It is also true that the figure in 1925 was approximately 18 deaths per hundred million vehicle miles traveled (VMT), almost 20 years after the Ford Model T was released. AVs may not be doing as well as we are now, but we took a long time to reach that 1 per 100 million; we didn't even get down to 10 per 100 million until the 40s.
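To make the comparison concrete, here is the arithmetic behind those rates as a short sketch. The inputs are the figures quoted above; the AV number in particular is a crude estimate (one fatality over roughly ten million autonomous test miles), not a measured rate.

```python
# Fatalities per 100 million vehicle miles traveled (VMT), using the
# figures quoted in the article. The AV rate is a rough estimate from
# a single fatality, so it carries enormous statistical uncertainty.
def fatalities_per_100m_vmt(fatalities: int, miles: float) -> float:
    return fatalities / (miles / 100_000_000)

conventional_today = fatalities_per_100m_vmt(1, 100_000_000)  # 1.0
av_rough_estimate  = fatalities_per_100m_vmt(1, 10_000_000)   # 10.0
conventional_1925  = 18.0  # historical rate, per 100M VMT
conventional_1940s = 10.0  # roughly where the AV estimate sits today

print(conventional_today, av_rough_estimate)  # 1.0 10.0
```

On these numbers, the AV estimate sits near where conventional driving was in the 1940s, which is the article's point about how long the improvement took.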

It's fair to say that Uber got hammered far worse than Ducey in the overall press coverage. As the revelations about his communications with Uber came out, so did another round of articles about issues with the AVs and their drivers, and even an attempt to settle with the victim's family backfired. The previously mentioned New York Times piece had been the rumbles; now the avalanche had arrived. Reports came out that drivers had felt deep misgivings about the technology and the testing routines of the company. While it's true that in a company the size of Uber it's not hard to find someone with a negative opinion about something after a disaster occurs, the drivers' allegations were very troubling, though in hindsight they make perfect sense.

It was no secret that Uber had mostly switched from taking fares to simply getting as many miles on the systems as possible. It was a good publicity event when they had a driver and a technician on board to also pick up passengers, but once they went down to one driver, they also decided that gathering data for the algorithm was the highest priority. To keep up public interest, some of the AVs still stopped for passengers, but most were out gathering data for the software – hence why the driver in the crash was on her second loop around a predefined route. The predictability made the machines' jobs easier, but added to the boredom factor for the drivers. They spent hours monitoring the system, alone and with little stimulation. Being on predefined routes, there wasn't much new to see, yet they had to remain alert in case the system encountered something it couldn't deal with. As we saw earlier, visual and auditory warnings were lacking in several areas, so they had to watch the road. It's one thing to do this with someone in the car to talk to, or on unfamiliar roads, but they had neither. The drivers say they were encouraged to get in a certain number of miles per day, though Uber contends there was no quota and that drivers were expected to take breaks.

Many articles also made mention of Uber's difficulties with their AV algorithm. They weren't going nearly as many miles per intervention as Waymo. Waymo's engineers have gone on record saying that the distance an AV travels before a driver has to intervene is an unreliable metric for how well the AV works, and tends to reflect how hard the algorithm is being pushed more than how good it is. That is to say, the lower the number, the more aggressive the development pace of the algorithm.
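Waymo's point about the metric being unreliable is easy to illustrate. In this hypothetical sketch (all numbers invented), the same software produces wildly different miles-per-intervention figures depending only on where and how aggressively it is tested:

```python
# Hypothetical: identical AV software tested two ways. Easy, pre-mapped
# loops inflate miles per intervention; pushing the same stack on hard
# urban routes deflates it. The metric measures the test regime as much
# as the software.
def miles_per_intervention(miles: float, interventions: int) -> float:
    return miles / interventions

easy_loops = miles_per_intervention(5000, 10)   # 500 miles/intervention
hard_urban = miles_per_intervention(5000, 250)  #  20 miles/intervention

print(easy_loops, hard_urban)  # 500.0 20.0
```

A 25x gap between the two numbers, with zero change in the underlying algorithm, is why comparing raw figures across companies says little on its own.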

That said, Uber was also struggling with false positives causing frequent and uncomfortable braking and swerving. This would explain why the emergency brake system had been disabled and why the algorithm took so long to classify the woman with her bike. Uber was testing with a high confidence threshold on its classifier and prioritizing a smooth ride during these tests, requiring the algorithm to be very sure before deciding that something in the road was an obstacle or a person instead of a cloud of steam or a plastic bag – both of which have been cited as difficult for the system to classify. In May, reports suggested that investigators were leaning towards this as the ultimate cause of the crash, though since then the blame has shifted more towards the driver, who appeared distracted.
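The tradeoff described above can be sketched as a simple confidence threshold. This is a toy illustration, not Uber's pipeline; the class names and confidence values are invented:

```python
# Hypothetical classifier output: (label, confidence). A high threshold
# suppresses false positives (no phantom braking for steam or plastic
# bags) but also suppresses real detections the model is only
# middlingly sure about.
detections = [
    ("plastic_bag", 0.55),
    ("steam",       0.60),
    ("pedestrian",  0.70),  # a real obstacle, seen with modest confidence
]

def should_brake(detections, threshold):
    return any(conf >= threshold for _, conf in detections)

print(should_brake(detections, 0.5))  # True  - brakes, ride comfort suffers
print(should_brake(detections, 0.9))  # False - the pedestrian is ignored too
```

Raising the threshold to smooth the ride and lowering the false-positive rate are the same knob; a threshold high enough to ignore steam can also be high enough to ignore a person until very late.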

As with Tesla Autopilot, it was down to the driver to make sure the system was working safely at all times. The element of trust has been picked up in other publications and in interviews with the Uber AV drivers. It's even more dangerous in a system that's pushing into level 3, because the driver can safely tune out most of the time. Uber's miles per intervention must also have been improving, because the drivers described going hours without touching the wheel – though with no mention of the brakes. Uber's AV may or may not have required frequent intervention in the daytime, but at night, with little to see but the well-known route itself and the traffic signals, a driver could go an entire eight-hour night shift without ever hitting a snag. It's as easy to lose track and trust the AV implicitly as it would be if they were being driven by a human.

Uber announced at the beginning of May – shortly after the reports of the investigators' belief that the algorithm played a major role – that they were conducting a safety review. They'd resume testing of AVs on public roads 'in a few months', suggesting that this was when they expected their internal review to end. This did little to turn the media's focus away from the disabling of automatic braking and the algorithm's apparent inability to handle something that happens most nights in any city. The day before the release of the NTSB preliminary report that I used to lay out the events of the collision itself, Uber announced that they would not be doing any testing in Arizona, even after they resumed testing elsewhere. Pittsburgh remained a major hub of development and testing, and they remained hopeful that they could make arrangements with California to return to testing there. Coverage of the preliminary report was similar to coverage so far, pointing out the braking issue, with some articles including a mention of Waymo's plans to have ride-hailing AVs ready in AZ by the end of 2018.


Next time, we wrap up with Uber’s attempts to recover their competitive advantage of being first on the road, including how they found something they’d needed throughout the investigation – a scapegoat.

Autonomous Car Accidents and Policy Implications – Part 5.2

In the last section of this article, we ended with the media subtly but firmly suggesting that the death of a pedestrian hit by an Uber AV was a failure of technology and of policy.  The AV START Act was postponed (something that might have been good for industry overall given the regulatory climate in DC right now), and Uber had suspended testing.

As the days went on, the bad news kept flowing, and two days after the NTSB arrived to begin investigating, a New York Times article went over a myriad of details which had come out before and after the crash. These ranged from how Uber had planned a publicity event in April where their CEO would be taken around Tempe by a fully functional AV (canceled, of course) to reports of Uber drivers falling asleep in their AVs. It also mentioned the difficulties faced by AV operators, like how they had to keep their hands 'hovering' over the wheel at all times and pay attention to road conditions for hours in order to take notes. The notes were supposed to be taken only at stops, but drivers often did it while the vehicle was in motion. The Times contrasted this with Waymo's system of having a button on the steering wheel which let drivers record short audio clips rather than having to navigate graphical interfaces and possibly type out notes. In fact, the article had several comparisons to make, always praising Waymo as doing the better job.

On the same day that this article came out, Velodyne (the company that makes the LIDAR arrays almost all of us use) made a public statement that they were positive their sensors were not to blame. Instead, they said "the problem lies elsewhere", a statement which suggests it was the fault of the AV system's software. They later added that they believed Uber's decision to eliminate side-mounted LIDAR arrays in favor of more radars – keeping the top-mounted 360-degree sensor as the only LIDAR on the vehicle – made it less likely to see pedestrians. Velodyne's representatives said this created a 3 meter blind spot around the lower perimeter of the car, meaning that pedestrians and other objects entering that space from certain angles could go unseen by the LIDAR. It's important to point out that the radars and cameras appear to cover this zone, and it's certainly an edge case to imagine an object that enters the car's visual range without ever passing through the LIDAR's view while also being invisible both optically and to radar returns. It's a tough sensor fusion problem when one sensor returns an object and another doesn't, but if the object is in a space where the LIDAR cannot see at all, then the algorithm would probably be set to take that into account. More sensors are both more expensive and slow the algorithm down, because it needs to process more data. While more LIDAR arrays may have helped in this case, it comes down more to the software than the hardware, in my opinion. Teslas have only radar and cameras, and they show the ability to stop for pedestrians. Additionally, in this specific case in Tempe, the Uber identified the need to stop with enough time to reduce speed to the point where authorities say it's likely the pedestrian would have survived.
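The fusion point can be made concrete with a toy sketch. This is an entirely hypothetical structure – not Uber's or anyone's actual stack – showing the principle that a sensor's silence should only count as evidence of absence inside that sensor's coverage:

```python
# Hypothetical fusion rule: a sensor's "no detection" only counts as a
# vote if the object's position lies within that sensor's field of view.
# A LIDAR blind ring around the car then defers to radar and cameras
# instead of vetoing their detections.
LIDAR_BLIND_RADIUS_M = 3.0  # the blind zone Velodyne described

def fuse(distance_m, radar_hit, camera_hit, lidar_hit):
    votes = [radar_hit, camera_hit]
    if distance_m > LIDAR_BLIND_RADIUS_M:  # LIDAR can actually see here
        votes.append(lidar_hit)
    return sum(votes) >= 2  # simple majority among valid sensors

# Object 2 m away: radar and camera see it; LIDAR physically cannot.
print(fuse(2.0, True, True, False))  # True - LIDAR's silence is ignored
```

This is why a known blind spot is more of a software problem than a hardware one: an algorithm that models its own sensor coverage shouldn't discount radar and camera returns just because the LIDAR stayed quiet.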

The rest of the AV industry was distancing itself from Uber, inviting the inference that Uber was a single bad egg and the rest of the basket was doing just fine. Nvidia – which made some of the hardware that went into Uber's AV – voluntarily suspended its own testing. Boston, which had suspended testing in the city in order to do safety reviews, let nuTonomy and Optimus Ride begin testing again a little more than a week after the crash. Waymo said their car could have handled the situation, and Lyft said the driver should have been able to stop the crash.

Next time we'll see how the regulatory environment evolved in Arizona to make Uber's testing there possible, as well as the fallout for the governor who was instrumental in their dominance in the State.  Governor Ducey and Uber try to throw each other under the autonomous bus in section 3.

Autonomous Car Accidents and Policy Implications – Part 5.1

Wrapping up (for now) the series on autonomous vehicle collisions, we have the incident I’ve spent the most time studying and thinking about because it’s had such far-reaching implications.  It was such a big story with so many twists and turns that I’ve decided it would be better to release in sections, so here’s section 1.

The final post in this series (until something else happens) will come as no surprise to anyone with even a cursory interest in AVs: the pedestrian fatality in Tempe, the only confirmed pedestrian death involving an AV.  I'm going to skip ahead briefly to the NTSB's preliminary report to present the facts as they appeared on the night of the accident, and then go back to the beginning of coverage and discussion.  By going week by week and sometimes day by day, I'll walk through how the tragic death of a pedestrian turned into a whirlwind of change for one of the most prominent AV developers – Uber.

It was about 10 at night in March of this year when a woman walking her bicycle crossed Mill Avenue in Tempe, Arizona.  Ignoring signs to use the crosswalk 100 meters away, she crossed in an area between the streetlights.  An autonomous Uber with a test operator was driving a preplanned test loop along the Tempe streets and had already completed one full circuit.  The car had been in autonomous mode for 19 minutes when it struck the pedestrian, killing her.  There was minimal damage to the Uber and no harm to the test driver.  One of the outward-facing cameras captured the moments before the collision, and an inward-facing camera shows the operator looking down frequently at something before the crash occurred.  When the police arrived, the driver said she was monitoring the AV's interface – a requirement of the job, so that drivers can flag mistakes the car makes and the engineers can improve the algorithm.

Uber AVs have significant sensor coverage, including a top-mounted LIDAR.  The car's algorithm went through several classifications of what it saw in the road, including another car, an unknown object, and finally a bicycle.  This is relatively normal: as the car gets closer, the sensor image becomes clearer, and so classification becomes more accurate.  At 1.9 seconds before collision, the car determined that emergency braking was needed.  But emergency braking had been disabled by Uber as a nuisance in autonomous mode, no alert system had been implemented, so the driver was not warned, and the car did not have time to initiate the evasive maneuvers it does have the authority to engage on its own. The car struck the pedestrian at 39 mph, and the operator braked only after hitting her, although she did try to pull the wheel over less than a second before impact.
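As a rough sanity check on what that 1.9 seconds was worth, here is a back-of-envelope kinematics sketch. The 1.9 s and 39 mph figures come from the account above; the ~0.7 g hard-braking deceleration is my assumption for a passenger car on dry asphalt, not a measured figure for this vehicle:

```python
# Back-of-envelope: if hard braking had begun the moment the system
# flagged the need (1.9 s before impact), how much speed could be shed?
# Assumes ~0.7 g deceleration (an assumption, not a measured value).
MPH_TO_MS = 0.44704
G = 9.81

v0    = 39 * MPH_TO_MS  # ~17.4 m/s, the reported impact speed
decel = 0.7 * G         # ~6.9 m/s^2
t     = 1.9             # seconds of available warning

v_final = max(0.0, v0 - decel * t)
print(round(v_final / MPH_TO_MS, 1))  # -> 9.8 (mph at impact with braking)
```

Even under these crude assumptions, braking at the moment of detection would have cut the impact speed from 39 mph to roughly 10 mph, which is consistent with the authorities' later statement that timely braking likely would have made the collision survivable.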

So, those were the facts as they appeared to be on the day.  As the story unfolded, other developments changed the story, but this is what would have been known if all the data could have been extracted from the car on that night.  Before I go on, however, I should make it clear what kind of system the Uber AV is.  I’ve spent several posts now on Tesla Autopilot which is solidly in level 2 of autonomy where the human driver is still absolutely necessary at all times.  Uber’s system is level 3 – in theory – and so the driver is expected to engage in an emergency and in unexpected situations, but otherwise is not involved in the activity of driving.  As such, they have to pay attention, but rarely actually do anything.  In theory.  However, taking the action of braking (or at least, emergency braking) away from the AV and not having an alert may arguably kick the AV back down to a very high level 2 since it’s a key function that has not been given over to the AV system.

Now to the timeline.

The fatality occurred on the night of March 18th.  By the 19th, the articles were already surprisingly vehement – in contrast to the articles written about Tesla collisions.  Some of this might be because it was a pedestrian that was killed, and some may be because Uber has a rather different reputation in business.  In the last couple of weeks, the shine has been taken off of Elon Musk, but four months ago Tesla was very well thought of and its leadership pretty well respected.  Uber, on the other hand, was respected technically, but in the wider media had taken a beating from reports of harassment and discrimination in the workplace, bans in multiple countries and municipalities over allegations of unfair trade practices, and disputes over whether Uber drivers are given fair compensation for their work.  Uber was a high-tech company with private investment funding and a stupendously high burn rate.  It moved fast, hard, and energetically to try to keep ahead, and it often needed to attract more investment to keep going.  Its highest overhead was its drivers, and it was going into AVs in a big way to try to get ahead of Tesla, Google, and the big automakers.  In the new economy where owning a vehicle was no longer necessary, Uber wanted to be the go-to app to call a car.  That said, Uber had gotten some good press in the past after a collision in Tempe which flipped one of its cars over; in that crash, it was the other vehicle that was at fault.  It spawned at least one opinion piece in Wired praising AVs and opining that we needed them as soon as possible to prevent more accidents like the one the Uber AV had been involved in.

Unlike with Tesla, many of the articles that came out in the aftermath of this fatality focused on regulation as much as on technology and the facts of the case.  Rightly or wrongly, the media narrative around Uber had gone poorly in the past, and being cast as the villain in previous stories meant they were in for mixed press at best, even before much was known about the circumstances of the incident.  The day after the collision, the New York Times published an article which included a section on AZ regulation and the reaction of the Phoenix government.  TechCrunch included a quote from the California DMV – a subtle dig, given their shared history with Uber.  Uber had previously been testing in CA, but had moved to AZ when ordered to stop testing on city streets without a permit.  Including quotes from the regulatory agency which had censured and blocked them in the past sent a message: Uber was going to have to prove it hadn't acted recklessly and that California wasn't right to push them out.

On the same day these articles were being published, a joint letter from a variety of consumer watchdogs, disability advocacy organizations, and prominent advocates was sent to the committees considering the AV START Act.  The Act is a Senate bill that was introduced in September of 2017 and passed unanimously through the Commerce committee in October.  The House version – the SELF DRIVE Act – had already passed when the committee held its hearing.  The letter, sent in March of this year, urged the committees and the Senate in general to delay further consideration until the NTSB had completed its investigation in Tempe.  It went on to ask that they amend the Act to tighten the rules on exemptions and require minimum design and reporting standards.  The letter also brought up the fact that the AV START Act would preempt State regulation upon passage, even though the Act's own regulations would take time to draw up and implement, leaving AVs effectively unregulated for the period it would take the Department of Transportation to promulgate its rules.  The swiftness of this letter might have shown just how big the incident was in the public consciousness.  It wasn't just Uber that was in trouble; it was the entire AV industry.

However, it might also have been a hasty rewrite, since on March 14, four days before the crash, five US Senators led by Dianne Feinstein had written a letter explaining why they were voting against the Act.  This vote meant it would not be fast-tracked through the Senate and would instead need debate before it could be passed, and it likely presaged an attempt to amend the Act to increase oversight and regulatory powers.  It was only to be expected; Senator Feinstein has expressed her conservatism (with respect to AVs) many times over the last couple of years.  It was unlikely that AV START would pass easily, and with the Uber crash and the Takata airbag recall happening at the same time, it became likely that it would not pass for months, if ever.

This might seem like a setback for industry, but given that many States (including Arizona) have quite permissive regulatory structures, it’s hard to say.  The current situation puts the responsibility more in the hands of industry, so if something goes wrong they’re the ones on the hook.  Some States – like California – regulate AVs more than others, but the fact remains that most State governments are hands-off.  The flipside is that should the DOT be given the tools to regulate, it’s unclear what they would do with them.  The current administration is difficult to predict, and industry may be hoping the Act will pass later on when they can be more certain of what degree of regulation they’ll get.

Less than two days after the collision, Stanford posted an interview with a law professor about the liability and regulations surrounding the incident.  He noted that

“…if the safety driver failed to exercise reasonable care in avoiding the accident, Uber would be responsible for the driver’s negligence. If the automated features of the AV failed to note the presence of the victim, the manufacturer of the vehicle, as well as Uber, could be held responsible under product liability principles.”

He went on to point out that the pedestrian might share partial fault if they crossed in an unsafe manner.

The heat was turning up, and Uber did the best thing it could in the situation: it suspended testing on all public roads in the four cities it had rolled out to so far.  This likely helped their image – as well as giving the engineers at Uber Advanced Technology Group (ATG) the chance to go over the incident with a fine-toothed comb without more data piling up.  On the other hand, more data for the algorithmic engineers might not have mattered, since they were unlikely to be the department about to get a lot more work.  It is a well-known property of machine learning systems that the negative set is usually nowhere near as large as the positive set – that is, examples of what to do right are many times more numerous than examples of what to do wrong.  In that way, humans and AIs are very similar.  It is very likely that this was not the first time an AV – or even an Uber – had been in this situation, and given the facts so far, this was already shaping up to be not so much an algorithmic issue as a human-robot interface issue.  After all, the car knew something was up – albeit only a second or so before the collision – but there was no means for it to warn the driver, despite the driver being the one expected to do something about it.

Tune in next time when the AV industry looks at one of their beleaguered colleagues suffering a situation that might shatter trust in everything they’ve worked years to develop … and says “who, us?”

Autonomous Car Accidents and Policy Implications – Part 4

OK, so we can finally stop whaling on Tesla after this post.  The reason they're so prominent is that they've made themselves that way.  The frontrunner gets the glory but also the blame, and I'm sure Elon Musk was very familiar with that by the time he founded Tesla.  The company's ability to remain positive, focused, and well-liked in turbulent times shows that there are some very experienced PR people on their end.  Since their cases are the foundation of what has become policy and practice in the industry, they get a lot more attention than most.  We may be seeing that change as more AVs get on the road, but for now both confirmed AV driver fatalities and the suspected one in China were all in Teslas.  They also have by far the greatest number of publicized road collisions.  Naturally, they'd be the focus of 3 of the 4 parts so far and the special.

Just as a note, however, in California alone there have been 76 AV collision reports, most of them by GM.  CA is the only State with mandatory reporting for AVs, and these include a fair number of cases where the AV was hit while stationary, so there isn’t much to look at in terms of policy implications apart from the very useful data set it makes for those who want to cast AV developers in a favorable light.

Getting on to 2017 and 2018, then.  2017 was a relatively quiet year for Tesla.  Despite the lawsuit, Tesla got good press throughout the year.  NHTSA gave their SUV the maximum score on all tests, including the rollover test, which many SUVs don't do well on.  The 2017 NHTSA guidelines came out – which I dealt with in the previous post – and they were very industry-friendly overall.

So, 2017 was looking pretty good for them.  Then came 2018 and several more high-profile collisions.

The first was in January 2018 in California and resulted in no injuries.  The car, traveling at 65 mph on a freeway, slammed into the back of a fire truck parked in the left-hand shoulder while it attended another accident, damaging the back of the truck, which was fortunately only occupied in front.  The NTSB investigation is reportedly ongoing, though no documents appear in a search on the NTSB site.  As such, it cannot be confirmed which version of Autopilot was in use during this crash.  It appears, however, that whatever version was in use had yet to solve the persistent technical issues behind the 2016 crashes covered in previous posts on this blog.

Next came April.  An Apple engineer named Walter Huang was driving his Tesla down a busy highway in the HOV lane, the second lane from the left.  The car began to follow the car ahead into the far-left exit lane, but stopped in the middle of the gore area dividing the HOV lane from the exit lane, which led into a ramp separated from the highway by a gap and a crash attenuator.  To make it easy to visualize, I've gotten this Street View image from Google.

Mountain View Gore

The viewpoint in this picture is of a car in the exit lane.  What happened was that the Tesla made it as far as the area delineated by the two solid lines and then remained there.  It accelerated from 62 to around 70 mph and then crashed into the attenuator (the yellow object at the end of the concrete divider).  The attenuator had been damaged in a collision the month before – in fact, it was almost completely crushed already – and so provided less cushioning than it was supposed to.  The Tesla was ripped in half and struck two other cars before coming to a halt.  The battery burned for several minutes, and Huang survived long enough to be put in an ambulance, but died at the local hospital he was brought to.

The Autopilot had provided several hands-off warnings during the trip, but none in the 15 minutes prior to the collision.  Huang’s hands were detected on the wheel for 34 of the 60 seconds leading up to the crash.  For 6 seconds, however, just before the collision, his hands were not detected.

Before we continue with the events as they unfolded, I'll note that the preliminary report from the NTSB doesn't show what the Tesla was 'thinking' during the period when it started to accelerate.  I can take a few guesses from my experience with path planning and sensor fusion.  First, as is well documented, Autopilot has difficulty registering stationary objects ahead of the car at highway speed.  We can then assume that the attenuator wasn't even a factor in its decision tree.  It either didn't see it, or assumed it was something the driver would take care of – as noted in earlier posts, when it comes to stationary objects, the manual tells drivers that it's their responsibility.  Fine; then the question becomes: why did the car remain in the gore rather than continuing to follow the lead vehicle?

It probably thought it was where it should be.  It detected an open lane ahead and had been told to move over only one lane.  As soon as I saw that stretch of road, I said to myself, 'if I or most of the people I know had programmed an algorithm for this, it would probably mistake that gore for a lane.'  It's really simple: looking at it, you can see that the righthand white line was heavily weathered until it looked vaguely like a dashed line, while the lefthand line is in relatively good shape.  To a machine algorithm, it looks like a lefthand lane with a shoulder delineated by a solid white line.  The car wouldn't have followed the other vehicle because it had already moved over once, into what it perceived to be a lane.

But what if it did recognize both lines as solid?  Then it might have stayed because it was programmed to treat crossing a solid line as illegal.  It had no option but to stay put, because it believed it was on a stretch of road where it couldn’t move over.  This is similar to the first scenario in that, again, it can’t go left over a solid line; this time it also can’t go right, for the same reason.

Either way, it believed it was in a lane and since the lines on both sides pointed straight ahead, it had no reason to believe that the ‘lane’ terminated in a wall.  That’s just not part of the consideration on a freeway.
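The two scenarios above can be condensed into a toy sketch (entirely hypothetical code of my own, not Autopilot’s implementation).  A vision pipeline typically classifies a painted line as solid or dashed from the fraction of its length that is actually visible, and a planner typically refuses to cross a line classified as solid.  Feed that logic a worn left gore line and a crisp right gore line, and it concludes it is sitting in a legal lane it cannot leave:

```python
# Toy sketch of lane-line classification and a solid-line constraint.
# Hypothetical code for illustration; thresholds are invented.

def classify_line(painted_fraction):
    """A heavily weathered solid line can drop below the visibility
    threshold and be classified as a dashed (crossable) divider."""
    return "solid" if painted_fraction >= 0.7 else "dashed"

def lane_decision(left_painted, right_painted):
    left = classify_line(left_painted)
    if left == "dashed":
        # Worn left line reads as a normal lane divider: this looks
        # like an ordinary lane, so keep centering between the lines.
        return "stay centered in perceived lane"
    # Both lines solid: crossing either would be 'illegal', so the
    # planner's only option is to hold position between them.
    return "hold position; crossing a solid line is prohibited"

# The gore at the crash site: weathered left line, crisp right line.
print(lane_decision(left_painted=0.4, right_painted=0.95))
# Nothing in this logic ever asks whether the 'lane' ends in a wall.
```

Either branch keeps the car in the gore; the difference is only in why it believes it belongs there.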

Moving on to what happened next.  There was the usual burst of news coverage, which likely would have died down if not for two big developments.  First, Mr. Huang’s family stepped forward to say that he’d had issues with that stretch of road in the past.  They said he’d been to Tesla’s service center several times to report that the car would try to drive towards the barrier 70% of the time he used Autopilot there.  Tesla responded that they had no record of the complaint and that they average 200 successful Autopilot trips on that stretch of road every day.  Tesla owners online seemed mostly to side with the company, wondering why he’d even use Autopilot there if he had such persistent issues.

The news stories in major outlets were usually either neutral or positive with respect to Tesla and Autopilot, though even the positive ones sometimes noted that Tesla’s stock had fallen 15% after the crash.  It has more than recovered in the intervening months.

However, many of the news stories a week after the crash were written because Tesla had released data on the crash, including the facts that the car had been in Autopilot (at the time, the media had only conjectured that this was the case), that Huang had been given several warnings earlier in the trip, and that he had not been holding the wheel.  These are facts which I cite above from the NTSB report rather than from Tesla’s releases, and which rightly should have come from the NTSB.  It’s understandable that Tesla would wish to mitigate the damage as consumer confidence was shaken, but they made the issue bigger by going further than their own data: they suggested that the crash was Huang’s fault for not paying attention, and that its severity was partially due to the damaged crash attenuator.  Preempting the NTSB by publicly releasing crash data is a big problem for investigators, and releasing conclusions is almost unheard-of.  It undermines public trust in the investigation, because some people will wonder why the NTSB takes so long, and in Tesla, because they’re an interested party.  A conclusion suggesting no wrongdoing by the company was likely anyway, so they only look like they’re trying to cover for something by getting in before the Feds.  This is borne out by the preliminary report from the NTSB which I cited earlier.  Yes, Huang was given warnings, and yes, his hands weren’t on the wheel 6 seconds before the collision.  However, those warnings came 15 minutes before the crash, not in the seconds immediately preceding it.

The NTSB explained all of this themselves two weeks later when they revoked Tesla’s party status on the investigation.  Tesla, they said, had violated the party agreement by releasing the data.  They also noted that they expected Tesla to continue assisting them by providing data on request (only Tesla can pull the log files off their hardware) and that Tesla still had party status on the other open collision investigations (they mention the January collision, though there appears to be no record of it on their site).  Tesla, however, claimed that they had withdrawn from the agreement.  The revocation of party status is a big thing, though, because it’s a rift that may pose difficulties in the next collision.  What if Tesla refuses to provide the information?  Unlikely, but possible.  Also, with Tesla out of the picture, the NTSB may release a more negative final report, since the company won’t be able to mitigate by providing details during the process.

Not long after Tesla published its blog post claiming that that stretch of road was safe for Autopilot, other Tesla drivers filmed their own cars drifting towards medians like the one Huang had hit, in one case towards the very same crash attenuator.  There are some good discussions of radar in the reddit comments for the second video.  The behavior is consistent with what I’d expect from modern path planning algorithms.  The car sees the lefthand line disappear and believes it has drifted too far right or that the lane is curving.  It tries to find a lefthand line, does so, and that places it right inside a crash zone.  Or it tries to cross a zone like that, finds itself between two solid lines, and believes it is required to stay there until the lines go away.  It doesn’t detect the barrier because the relative velocities are wrong for anything it’s programmed to notice, or the sensors just don’t see the barrier for a similar reason.  Between the issues with AP1 and AP2, I’d say your best bet is to remain in center lanes whenever possible.  Tesla may have tried to paint an unrealistically positive picture here, but they’re right that all present AVs require supervision.  They’re not very open with examples of why, which might be the wrong approach.  Seatbelt adoption was often achieved with PR campaigns showing what happens when you don’t buckle up.  It’s a narrow path.  Tesla doesn’t want to prime their customers with the belief that Autopilot can’t do something now, because it will almost certainly get past that issue later.

Believing that the Autopilot was at least partly at fault, based on Huang’s (alleged, since I’ve not seen confirmation) reports to the service center, Huang’s family filed a wrongful death lawsuit.  It is still ongoing and has not developed significantly, presumably because all parties are waiting for the final NTSB report.

In May, a Tesla accelerated into a firetruck while its owner was looking at her phone and not holding the wheel.  The owner broke her foot; there were no other injuries.

The Huang fatality was different, though.  It was different from all the other crashes that have made headlines, not just because other owners showed that their own Autopilots might do the same, but mainly because of who was at the wheel.  The ‘operator error’ label wears thin when the operator is a software engineer at Apple who had previously been a Senior Software Engineer at EA.  This was someone who, according to his family, knew the limitations of Autopilot in the specific location where it crashed, and who had experience with software design.  Though he may not have known the technical ins and outs of machine learning and path planning, he likely knew enough about software in general to know what kind of system Autopilot was.  The fact that he appeared complacent enough to continue using it the way he did suggests that something else was at work.  It is very hard to say ‘he didn’t understand’ or ‘he should have known to pay attention’ under these circumstances.  We may never know why, but he trusted his Autopilot enough to leave it on and let go of the wheel in a location he knew was dangerous.

His trust, and the fact that Autopilot’s safeguards appear too easy to circumvent or ignore, is an ongoing issue for Tesla.  The company’s aim is to introduce as few intrusive measures as possible, keeping the Autopilot system flexible on the assumption that the driver is paying attention.  By making it possible to use Autopilot anywhere, and by actively opposing additional sensors to track driver engagement, Tesla is also following its customers’ desires.  The ‘nag factor’ is low in Teslas because their customer base agrees with management – even to the point of trying to get past the nags already in place.  Tesla drivers are ready to take the risk.  If they weren’t, they’d be driving Cadillacs.

Speaking of which, we’ll be covering Cadillac’s system in the next post, and then move on to part 5 of this series, where we look at another high-profile collision of 2018 (there sure have been a lot!).  This one is unique for being the only recorded incident of a pedestrian fatality involving an AV rather than a driver.

But before I go, one interesting note.  The NHTSA put out a cease-and-desist order on the Autopilot Buddy.  This is interesting because the DOT in 2016 – as I covered – claimed not to have the regulatory tools to cover AVs.  In this case, they justify their authority by calling the device after-market hardware, and justify their decision by saying it disables a safety feature.  A safety feature that they don’t have the authority to regulate.  I’m sure the company that makes these items doesn’t have the pockets for a court case, which is a shame, because I’d love to see how that argument played out in an extended battle.  Can NHTSA regulate hardware?  Yes.  Can they regulate hardware whose sole intent is altering the functionality of non-covered software?  I don’t know.  After all, if Tesla itself disabled the safety features and went back to hands-free, attention-free Autopilot, could NHTSA or NTSB order them to put the features back?  If not, can they really claim to have authority here?  As long as Congress remains gridlocked on giving them the tools – and the administration remains disinclined to ask for them – it’s a distinctly gray area.

Autonomous Vehicle Regulations 2016-2017

I talked some about the 2016 NHTSA policy overview in part 3 of the collision series.  It was positive towards the industry, but also covered all the regulatory tools that NHTSA had, and would need going forward, to keep the roads safe as cars transition from human-controlled to fully automated driving.  In 2017 the updated NHTSA guidelines came out, and they looked pretty good for all AV makers, the smaller ones especially.  To start with, the changes were minimal: there were a few more paragraphs in the guidance section and the phrasing was marginally more industry-friendly, but ultimately very little changed year-to-year in the industry guidance section.

The important changes came in other sections.

In the model State policy, the change between 2016 and 2017 was quite noticeable.  Instead of being about ‘consistency’ of regulations across multiple States, there was a much more AV-friendly approach.  The 2017 document has phrases like “States should not place unnecessary burdens on competition and innovation…” and a very interesting sentence:

For example, no data suggests that experience in vehicle manufacturing is an indicator of the ability to safely test or deploy vehicle technology. All entities that meet Federal and State law prerequisites for testing or deployment should have the ability to operate in the State

2017 was indeed quite a good year for the startups working on AV technology.  There was also a subtle removal of sections suggesting NHTSA would help States with their legislative policymaking, though since the entire document was streamlined, that may have been cut simply to simplify things.  The removal of almost all of the ‘law enforcement’ section, with its mentions of distracted driving and of how States should consider regulations for inattentive drivers of level 3 and below vehicles, is a strong indication of the new Department of Transportation’s priorities.

The biggest difference year-to-year was the document’s handling of regulatory tools.  In 2016 there were 20 pages about NHTSA procedures and how they affected AVs, and another 14 pages about what new tools the agency would need from Congress in order to regulate effectively.  In 2017, there was only a single block at the beginning (less than half a page) stating that NHTSA has ‘broad enforcement authority.’  The absence of any mention of current regulatory tools, or of what tools might be necessary to regulate AVs effectively, suggests that the new administration does not feel that enforcement and regulation are necessary.

Stateside, most States were maintaining either a friendly or hands-off approach.  California required that the legislature be informed if manufacturers requested a license to test a level 4 or above AV, but also allocated funds for connected infrastructure.  Connecticut actually mandated that at least 4 municipalities start testing AVs.  Illinois headed off attempts by cities and towns to regulate AVs.  Nevada did the same and even allowed testing of level 4+ AVs.  North Carolina abolished the requirement for a driver’s license in level 4+ AVs.  Texas abolished the requirement that an operator be present inside a level 4+ AV.

New York was one of the only places where things didn’t go so well.  Testing was allowed, but all tests and demonstrations were required to be conducted under the supervision of the State Police.  Luckily, no one has yet defined ‘test’ to include the data collection done by Tesla vehicles.


Autonomous Car Accidents and Policy Implications – Special Lawsuit Edition

Welcome to Autonomous Car Accidents and Policy Implications.  Today we look less at a collision and more at the fallout of some of the decisions that have likely caused them.  Pausing for a moment between the best-known Tesla collision of 2016 and the collisions of 2017, we examine an attempted class-action lawsuit which did a bit of financial damage but barely scratched Tesla’s publicity machine.

When we left off, there had been a Tesla fatality which was ruled driver error.  Investigations had been completed, and Tesla had rolled out an update requiring drivers to keep their hands on the wheel more consistently; if they didn’t, the car would have to come to a complete stop before it would allow the driver to turn Autopilot back on.  So far, so good.  The problem for Tesla was that while regulators were effectively unable to act (lacking the authority to command) and investigators always returned ‘driver at fault’ (since all the warnings were in the Tesla manual), the customers were beginning to get restive.

During the interim period between the problems being identified and new software and hardware being introduced to fix them, Tesla had been forced to reduce Autopilot’s maximum speed to 45 mph in January of 2017, a limit that was raised incrementally back to 80 mph by March.  These tweaks seemed to be the last straw for Tesla owners, who were starting to feel less like owners of cars and more like unpaid experimental testers – which is pretty much what they were.  Tesla runs its software division like any Agile software company: it tests new software on the users’ machines, sees how it works, and tweaks based on feedback.  The price of being an early adopter is that tweaks are usually more extensive, and the system often goes into a degraded state while safety-critical updates are being tested.  More on that idea when we discuss Uber’s AV and its ‘free’ rides.

The lawsuit was an attempt to form a class action alleging that owners and lessees of Tesla cars had paid a premium price for a car with AV capabilities, and that the capabilities in question were either nonfunctional or dangerous.  It should be noted that the advertisement they used as an example was for HW2, meaning that their complaint concerned cars manufactured after the one involved in the fatalities explored in my previous post.  The lawsuit focused on the $5000 cost of unlocking the Autopilot software pre-order, or $6000 post-order.

The gist was that they accused Tesla of deceptive advertising: Tesla had claimed it would have version 2 of Autopilot (AP2) ready by the end of 2016 when it knew it couldn’t.  Further allegations were that the software Tesla pushed was dangerous and far less capable than version 1, and that the company covered for this by having customers test-drive version 1 Autopilot cars and then telling them that AP2 would be much better than the experience they’d just had.

Let’s go allegation by allegation now.

First allegation: AP2 at the time of rollout was dangerous.  The plaintiffs cited several tech journalists who owned AP2 Teslas which swerved unexpectedly, merged across yellow lines, braked hard at bridges after mistaking them for obstacles, and didn’t brake at all at red lights.  It’s unclear whether these examples were cherry-picked or indicative of the full experience (let’s face it, an article showing how bad a Tesla is will get more views than one showing a good experience).  The allegation that there were difficulties with the AP2 software was supported in the lawsuit by the facts that only 2 of the 8 cameras were being used by the software at the time, and that AP2 had been started from scratch because Tesla had lost the rights to AP1.  Quoting from the lawsuit, which quoted from an article at the time:

Mobileye, the Israeli company that supplied the original camera and
software for Autopilot, cited safety concerns when it pulled out of its
partnership with Tesla. The company’s chief technology officer told
Reuters that Tesla was “pushing the envelope in terms of safety …
[Autopilot] is not designed to cover all possible crash situations in a
safe manner.” Tesla says the collaboration ended for commercial reasons.

They also cited numerous incidents and issues that the plaintiffs themselves had experienced, including lack of collision warnings, veering into the wrong lane even at low speeds, and braking with no warning or apparent cause.

Second allegation: the marketing materials and practices were intentionally deceptive.  This was harder to prove, and relied on the plaintiffs’ own interpretation of their experiences.  The lawsuit, then, would have to demonstrate that any reasonable person would believe as the plaintiffs did:

  1. That the AP2 update would come as one rollout in December 2016 with full functionality in a single package.
  2. That AP2 was built on AP1’s software rather than being a replacement written from scratch.
  3. That when activated, AP2 would be safe to use in the intended environment.
  4. That (in the case of one plaintiff) this functionality would amount to a fully functional AV (level 3 or above).

Third allegation: Tesla violated the Motor Vehicle Safety Act.  The concept was that an AV system with demonstrated and consistent faults constitutes a safety defect under the Act.  Tesla was never charged under the Act, and I believe this is because, at the time of the lawsuit, there was no regulatory body empowered to oversee the software portion of AVs.  Defects had to impact a car being driven by a human, or else they were outside the remit of the regulators.  That’s why all the ‘driver error’ findings from investigations have effectively cleared automakers of liability.

Tesla, as might be expected, denied all the claims at the time.  Their defense was, in brief, that they had not promised the full package by December 2016, only that updates would start then.  They claimed that the safety concerns were sensationalized, and that several of the functions described as missing had in fact been provided.  They said that the systems were a beta test, and that drivers should not ‘abdicate responsibility’ to the Autopilot.  This year Tesla settled for $5.4M, paid to all owners of AP2 cars, with payouts ranging from $20 to $280 depending on how long the drivers had owned their vehicles.  The settlement was based on how long the owners had waited for functions that were delayed, and the judge and attorneys said they were confident that by September 2017 all of the main functions had been implemented.  Tesla continues to claim that there was no wrongdoing in its practices, and the settlement effectively ends any private investigation into the matter.

Here’s my take: this wasn’t about deceptive advertising; it was about the boundaries of live testing of software functions.  As mentioned earlier in this post, it’s common practice to release software incrementally and to let live user experience contribute to testing its effectiveness.  Machine learning algorithms require enormous data sets to function well, so there’s an added incentive to roll out Autopilot patches and functions even if they’re not quite ready, because doing so feeds the algorithm the data it needs to work well in the future.  What’s happening here, however, is that this practice isn’t being used in a web app or a program on a personal computer, but in a safety-critical system.  If software on your computer has a bug or something isn’t working quite right, you send a complaint or a log to the developer, and they use that information in their next patch to fix it.

This lawsuit was an attempt to say ‘not here’: that live testing to fuel incremental and algorithmic software growth won’t work, because cars are too dangerous.  The lawsuit was a cry to the authorities to do something, because the average consumer wasn’t technically aware enough to understand what Tesla was saying, and because buying software that will be beta-tested by the early adopters was not the right model for this industry.  Consumers who buy alphas and betas of games on Steam expect issues and are there to provide feedback on them, but there is no such expectation in the car industry.  That was what it was really about, and it failed.

This is, long term, probably a good thing.  It’s also very much a short-term bad thing.  That’s technology, though.  The problem is that it’s expensive to exhaustively test software this complex, especially software heavily based on current ML algorithms.  Tesla hasn’t got the deep pockets of Google (which has been testing its AVs since before Tesla released Autopilot version 1, and won’t be starting its AV taxi service until 2019) or the established consumer base of a big automaker (I’ll be covering Cadillac’s new AV in a future post).  That’s why Tesla and Uber have to use the classical Agile incremental approach to software updates.  Both rely on automotive innovation to function, and can neither afford to look stale by letting another company (GM for Tesla, Google for Uber) introduce their product first, nor sink millions into comprehensive testing and data collection that could take years and still leave them with fewer functions.

The message from this settlement is clear but impermanent: you’re buying your car from a Silicon Valley-style innovation engine.  Expect them to act like one.  All new functions are beta; all users are testers.  You’re getting it early, and that prestige comes with a price: it’s your data that will be used to build the next version and the one after that, with older functions becoming more stable and newer functions becoming possible.  Eyes forward and hands on the wheel, because if something happens, it’s only a level 2 AV and you’d better be ready to take over.

Speaking of expectations and knowledge, join me next time for a collision which strained the argument of ‘unsophisticated and ignorant’ driver error further than ever.