Autonomous Car Accidents and Policy Implications – Part 3

Welcome back to the series on how accidents shape the environment around autonomous vehicles and how that has changed over time.  I’ve been walking through these examples in order of how severe the reaction was, so now it’s time to look at fatalities and injuries – specifically Tesla Autopilot fatalities and injuries.  The first fatality in the US is likely why the NTSB and NHTSA take autonomous collisions so seriously that they opened an investigation into that bus from example 2 – the one guilty only of being too cautious to back up when a truck hit it while it was stationary.  The next couple of accidents are, unfortunately, the opposite.

The Tesla Autopilot has had an outsize share of the injuries and deaths in the industry, but that’s because it has the largest share of use as well.  I should also note that this series is specifically looking at the reactions to and policy implications of collisions.  Even as reports of collisions both fatal and non-fatal were being published, so too were articles about Autopilot stopping one collision from happening (illustrating the safety of AVs perfectly by showing that the AV was able to stop hitting the car in front of it, but the car behind the AV wasn’t able to stop in time) and about an Autopilot system preventing the death of a pedestrian.  I include these to show that AVs do get good press even during emergencies, and that in the same period in which AVs were causing harm, they were also doing good and have continued to do so in subsequent years.

Before we get to May 2016 and the first US fatality, let’s go back even further, to January of 2016, and look at the first known fatality involving an AV – also a Tesla.

The situation was this: A young man, who reportedly had read the Autopilot manual multiple times, is driving down the highway in northeastern Hebei in the left lane.  A street sweeper truck is moving in the opposite direction – I’m not familiar with how they operate, but it appears as if the sweepers are there to clean the shoulder, which requires them to drive in the opposite direction of the lane’s traffic and be about a third of the way into the left lane of the highway.  As such, drivers in the left lane have to watch for them and move into the right lane when they see them coming.  The car in front of the Tesla does this, but the Tesla doesn’t, and strikes the sweeper – smashing the car and killing the driver.  All of this is captured on a dashcam.

There is circumstantial evidence that the car was in Autopilot, but Tesla said that there was too much damage to retrieve the logs.  The driver’s father, the owner of the car, has been in litigation ever since.  Multiple third parties support the assertion that the car was in Autopilot, and from the perspective of how this plays out legally and in the media, let’s assume it was.  The manual did state at the time that Autopilot struggles with stationary objects when traveling at speed (and we can assume it would have the same trouble with an object traveling toward the car).  In fact, the two non-fatal collisions mentioned in the second paragraph both occurred when a Tesla hit a car that was stopped on the left shoulder but protruding into the left driving lane.

As a programmer, my guess is that – given that the Tesla in question was a Hardware 1 (HW1) model – an object like a car a third of the way into the left lane would not be picked up by the forward camera and radar.  It’s hard to work out the precise angles without doing a lot of geometry, but eyeballing it, the range of the forward sensors looks almost good enough to pick up these protrusions; more likely they would clip the very edge of the field of view briefly and be classed as a glitch rather than an object at the far periphery.  And by the time the ultrasonic sensors saw what was going on, it would be far too late at 50-70 mph.  If you look at what the cameras and sensors on the Hardware 2 (HW2) version can see and assume that HW1 was no better, then it is quite likely that objects on the side of the road were simply not in range of any sensor until the car was very close.  HW2 has improved this greatly, but blind spots and errors remain, as we’ll see in later examples.
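
To put rough numbers on why the ultrasonics can’t save the day here, consider a back-of-the-envelope calculation.  The sensor range, reaction time, and braking figures below are assumptions I’ve picked for illustration, not published specifications:

```python
# Back-of-the-envelope check: can a short-range ultrasonic detection help at
# highway speed?  The 5 m range, 0.5 s system reaction time, and 0.8 g braking
# figure are illustrative assumptions, not published Tesla specifications.

ULTRASONIC_RANGE_M = 5.0      # assumed detection range of the ultrasonic sensors
REACTION_TIME_S = 0.5         # assumed delay before the system starts braking
BRAKING_DECEL_MS2 = 8.0       # hard braking, roughly 0.8 g

def mph_to_ms(mph: float) -> float:
    return mph * 0.44704

for speed_mph in (50, 60, 70):
    v = mph_to_ms(speed_mph)
    time_to_obstacle = ULTRASONIC_RANGE_M / v              # time until impact at constant speed
    stopping_distance = v * REACTION_TIME_S + v ** 2 / (2 * BRAKING_DECEL_MS2)
    print(f"{speed_mph} mph: object reached in {time_to_obstacle:.2f} s, "
          f"stopping needs ~{stopping_distance:.0f} m vs {ULTRASONIC_RANGE_M:.0f} m of warning")
```

Even with generous assumptions, the object is reached in a fraction of a second while dozens of meters would be needed to stop, which is why the long-range forward sensors have to carry the load.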

This means that Tesla is correct that in most of these cases the system is operating exactly as designed.  As mentioned in example 2, that’s enough to cover liability, but it isn’t enough to satisfy the public if things go wrong – and fatalities are at the extreme end of the wrongness spectrum.  The only thing worse than a driver fatality, from the public’s perspective, is a pedestrian fatality.  But let’s leave the sole known incident of that for a later example.  In China, the lawsuit continues, as do the investigations.  We may never know exactly what happened, and I admit to being unfamiliar with the policy intricacies of the country.  Let’s instead look at the first US fatality in an AV – and the first confirmed case of a fatality in an AV operating as such, at least until the Chinese investigation is complete.

Williston, Florida.  A former Naval ordnance disposal technician, who had previously published videos of himself with his hands off the wheel while in Autopilot and who had reported that Autopilot saved him from accidents, is driving at a reported 74 mph down US-27A (for the record, 9 mph above the speed limit – not a lot, but worth mentioning).  A tractor-trailer truck traveling in the opposite direction makes a left turn across the lane the Tesla is in.  The Tesla does not stop; it goes under the truck, shearing off most of the roof of the car, then hits a lamp post.

From a technical perspective, this was a totally different case.  Instead of an obstacle being outside the sensors’ range or the code’s ability to handle, the crash – from a purely technical standpoint – was the result of a false negative and a situation outside the parameters of the system.  It works like this: most systems have multiple sensors that can observe the same area.  In the case of a Hardware 1 Tesla, those were the camera and the radar.  In other cars – including HW2 Teslas – the mix is different, but we’ll stick to the hardware relevant to this example.

The difficulty was that the radar got a return off of the trailer – that is, the signal bounced back and told the radar an object was there.  The camera, however, saw nothing.  The truck was white and there was a lot of glare, causing the trailer to blend into the background.  Any chunks that weren’t white were too small to be considered obstacles and were discounted by the system as dirt, small objects, or nothing at all.  Since the camera reported no object and the radar did, the system erred in favor of the camera.  The radar’s return, moreover, showed an object higher up than most cars, so the system at most read it as a low highway sign.  Thus the system continued as if nothing was there, because the sensors’ combined result appeared to be something too high up to trigger collision-avoidance behavior.  In the aftermath, Tesla announced that it would rely more on radar returns than the camera for collision avoidance, and would re-tune the algorithm to make it less likely to mistake high trailers for overhead signs.
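
To make the failure mode concrete, here is a minimal sketch of the kind of fusion rule described above.  This is my illustration, not Tesla’s code; the height threshold, class names, and data structures are all invented for the example:

```python
# Illustrative camera-plus-radar fusion rule that reproduces the false negative
# described above.  Thresholds and structures are invented; this is not Tesla's
# actual implementation.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float
    height_m: float        # estimated height of the reflecting surface above the road

@dataclass
class CameraDetection:
    confirmed: bool        # did the vision system classify an obstacle here?

OVERHEAD_SIGN_HEIGHT_M = 1.2   # assumed cutoff above which a lone return looks like a sign

def should_brake(radar: RadarReturn, camera: CameraDetection) -> bool:
    """Decide whether to trigger collision avoidance from one radar/camera pair."""
    if camera.confirmed:
        return True                               # vision-confirmed obstacle: brake
    if radar.height_m > OVERHEAD_SIGN_HEIGHT_M:
        return False                              # high, unconfirmed return: read as an overhead sign
    # Low, unconfirmed radar return: the pre-update tuning favored the camera
    # to avoid phantom braking, so no action is taken here either.
    return False

# The Williston scenario: the radar sees something tall, the camera sees nothing.
print(should_brake(RadarReturn(range_m=60.0, height_m=1.5),
                   CameraDetection(confirmed=False)))     # -> False: no braking
```

Tesla’s announced fix amounts to trusting the radar more in those unconfirmed branches, at the cost of working harder to filter out signs, bridges, and other harmless high returns.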

There were, of course, headlines and news reports with the expected degree of condescending skepticism that greets all new technology.  We can take that as read whenever something goes wrong.  For the most part, the media’s general approval of anything Elon Musk does has protected him, and this situation was no different.  The modern heroic tech billionaire is an entirely different article, and one I might write in the future.  Despite speculation that the fatal collision might derail progress, Tesla itself maintained its image and released an update that September, a few months after the crash, which made the Autopilot system lock out a driver who wasn’t paying enough attention to the road.  This update was almost certainly a reaction to what Tesla already knew from its logs and what regulators were no doubt going to comment on.

What was happening was this: you’re not supposed to let go of the wheel while Autopilot is engaged.  Especially in 2016, Autopilot was very experimental, and even now drivers are warned in the manual to keep their eyes on the road and hands on the wheel.  The car is programmed to check this periodically and tell the driver to put their hands back on the wheel.  In v7 – what the driver in this collision was using – all that happened was that a warning came up on the screen and then the speakers let out a low beep until the driver put their hands back on the wheel.  There was no real penalty for ignoring it, only a minor annoyance that could be dismissed by jiggling the wheel every so often.  It’s debatable whether the v8 update was enough, given that jiggling the wheel was still how the vehicle registered engagement, but we’ll cover driver-engagement tech later.  This won’t be the last time a driver’s lack of attention contributed to a collision.
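
Here is a toy model of why wheel-torque sensing is such a weak attention proxy – a brief jiggle resets the nag timer and the driver is back to hands-off with no penalty.  The torque threshold and timing values are invented for illustration; they are not Tesla’s parameters:

```python
# Toy model of a torque-based "hands on wheel" nag, showing how a brief jiggle
# defeats it.  All torque and timing values are invented for illustration.

TORQUE_THRESHOLD_NM = 0.3   # any torque above this counts as "hands on wheel"
VISUAL_NAG_AFTER_S = 60.0   # assumed hands-off interval before the on-screen warning
CHIME_AFTER_S = 75.0        # assumed interval before the audible beep

def warnings_issued(torque_samples, dt=1.0):
    """Walk through torque samples (one per second) and collect the warnings raised."""
    hands_off_timer = 0.0
    raised = set()
    for torque in torque_samples:
        if abs(torque) > TORQUE_THRESHOLD_NM:
            hands_off_timer = 0.0          # a single jiggle resets the timer completely
        else:
            hands_off_timer += dt
        if hands_off_timer >= CHIME_AFTER_S:
            raised.add("beep")
        elif hands_off_timer >= VISUAL_NAG_AFTER_S:
            raised.add("visual warning")
    return raised

# A driver who nudges the wheel once a minute never triggers anything,
# whether or not they are actually watching the road.
one_jiggle_per_minute = ([0.0] * 59 + [0.5]) * 10
print(warnings_issued(one_jiggle_per_minute))   # -> set()
```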

Coincidentally, that September the NHTSA and DOT issued new policy guidance with respect to AVs.  It’s important to note that guidance is about all they can do without federal legislative action, which is unlikely in the current political climate.  However, the guidance included model bills for States, suggesting legislative actions each State could take while federal action is pending.  It also noted where current regulatory tools could be applied and what new regulatory powers NHTSA would need from Congress to most effectively monitor and regulate AVs.  This document was likely already in draft, but was expedited by the furor over the fatality in Florida.  It is unlikely, however, that the content was materially affected, given the timeframe.  I say this because driver engagement monitoring was mentioned in only two places, and in both cases the guidance advised automakers to consider whether such monitoring was appropriate rather than stating outright that it was.  The overall theme of the document was recommendations, voluntary information sharing, and visions of how future regulation would look.

It should be noted that most State laws as of 2016 fell into three broad categories: encouraging development, defining and investigating regulatory needs, and truck-platooning legislation.  The only State with serious regulation in 2016 was California, which had enacted its AV regulation requirements in 2012 and had only tweaked the laws since then.  Most laws were favorable, many allowed AVs to operate without a driver, and Pennsylvania allocated money to fund AV development.  There was no spate of increased regulatory burden after the collision, and whatever pressure there was to impose one paled in comparison to the incentives to leave AVs alone or even encourage them.  As for the model bill guidance from NHTSA, the introduction includes this:

DOT strongly encourages States to allow DOT alone to regulate the performance of HAV technology and vehicles. If a State does pursue HAV performance-related regulations, that State should consult with NHTSA and base its efforts on the Vehicle Performance Guidance provided in this Policy.

In other words, “We’re going to get the regulatory tools eventually, so you’re better off regulating the way we’re going to.”  Which is sensible, since federal regulations usually override State ones, so it only creates instability and expense for States to diverge and then have to comply with the new rules once they’re in force.  I’ll cover regulation itself in more depth later, but I can’t help noting that one of the regulatory tools NHTSA asked Congress for was the ability to increase the number of exemption vehicles, currently capped by statute at 2,500 per automaker per year.  This matters because the algorithms we currently use for machine learning require a colossal number of examples to function well, so the more vehicles out there collecting data, the better they all run.  We’ll see how some consumers were less than pleased about this in the next example.

In January of 2017, NHTSA put out a report.  Despite the headline “Tesla’s Self-Driving System Cleared in Deadly Crash”, the report was an investigation of Tesla’s Autopilot as a whole.  While the 2016 crash was included in their data, the NHTSA was looking specifically for defects in the Autopilot system.  They found none; in every case, even those resulting in collisions, the system functioned as designed.  NHTSA even noted that Autopilot had decreased collisions in Tesla cars by 40%.  As we’ve seen above, NHTSA is very limited in what it can do right now because it doesn’t have the mandate to oversee the specifics of AV technology, only to make sure it works as intended.  Their conclusion was that Tesla was in the clear, though they did note that improved consumer awareness of how the system worked was always helpful.

In September of 2017 – a year after the update was rolled out and well over a year after the crash – the NTSB issued a report and suggestions for regulatory guidance.  The report essentially found that the driver was at fault.  Records indicated that the driver held the steering wheel for approximately 25 seconds, over 7 separate intervals, during the 40 minutes before the crash.  The earlier NHTSA study found that the driver should have had at least 7 seconds to take action before the crash occurred, but the brake was never pressed.  NTSB also noted that Autopilot at that speed is only meant for highways with ramps, not cross streets.  Again, the same refrain: the system worked as designed.  No defect.  The subtext: operator error.  A distracted driver using a system that wasn’t meant for the highway he was on, and a truck driver who failed to yield right of way.
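
Those figures are worth putting into perspective with a quick bit of arithmetic (the report numbers are as cited above; the unit conversions are standard):

```python
# Putting the NTSB/NHTSA figures into perspective.

hands_on_s = 25.0                       # reported time with hands on the wheel
window_s = 40 * 60                      # during the 40 minutes before the crash
print(f"Hands on the wheel about {100 * hands_on_s / window_s:.1f}% of the time")

speed_mph = 74.0
speed_fps = speed_mph * 5280 / 3600     # miles per hour to feet per second
warning_window_s = 7.0                  # time the driver reportedly had to react
print(f"At {speed_mph:.0f} mph the car covers roughly "
      f"{speed_fps * warning_window_s:.0f} feet in those {warning_window_s:.0f} seconds")
```

Roughly one percent of the drive with hands on the wheel, and something like 760 feet of closing distance in which the brake was never touched.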

That’s what the conclusions seemed to say.  There were some interesting sentences in the analysis as well.

“monitoring steering wheel torque provides a poor surrogate means of determining the automated vehicle driver’s degree of engagement with the driving task.”

In other words: Tesla’s method of telling whether a person was paying attention to the road by making them jiggle the steering wheel ought to be addressed.

Toyota has deployed an eye-tracking system on its Lexus brand vehicles, and Volvo has announced plans to use eye-tracking technology in its Driver State Estimation system.  The Driver Attention System on the 2018 Cadillac CT6 Super Cruise vehicle uses a small camera located at the top of the steering column; the camera focuses exclusively on the driver and uses infrared light to track the driver’s head position.

The message: Other carmakers have done it, so why haven’t you?

The hands-off warning interval has been under consideration elsewhere in the world, particularly in Europe. The United Nations Economic Commission for Europe adopted a new regulation pertaining to hands-off warning time in lane-keeping systems. This regulation, which is expected to go into effect in the first quarter of 2018, would require lane-keeping systems to provide an initial visual warning after 15 seconds of hands-off driving and deactivate in a controlled manner after 1 minute of hands-off driving.

Tesla did fix this one voluntarily before the report came out, so this sentence is aimed more at lawmakers and NHTSA than at Tesla.
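
As a sketch of what the UNECE rule would look like inside a lane-keeping controller: only the 15-second and 1-minute thresholds come from the regulation quoted above; the state machine around them is my own guess at an implementation:

```python
# Sketch of the UNECE-style hands-off escalation described in the quote above.
# Only the 15 s and 60 s thresholds come from the regulation; everything else
# is an illustrative guess.

VISUAL_WARNING_S = 15.0
CONTROLLED_DEACTIVATION_S = 60.0

def lane_keep_action(hands_on: bool, hands_off_timer_s: float, dt_s: float):
    """Advance the hands-off timer one control tick and return (action, new_timer)."""
    if hands_on:
        return "assist active", 0.0
    hands_off_timer_s += dt_s
    if hands_off_timer_s >= CONTROLLED_DEACTIVATION_S:
        # e.g. slow down, alert the driver, and hand back control gracefully
        return "begin controlled deactivation", hands_off_timer_s
    if hands_off_timer_s >= VISUAL_WARNING_S:
        return "visual warning", hands_off_timer_s
    return "assist active", hands_off_timer_s

# After a minute of hands-off driving, the assist winds itself down instead of
# nagging indefinitely the way the v7 Autopilot did.
timer, action = 0.0, "assist active"
for _ in range(61):
    action, timer = lane_keep_action(hands_on=False, hands_off_timer_s=timer, dt_s=1.0)
print(action)   # -> "begin controlled deactivation"
```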

If automated vehicle control systems do not automatically restrict their own operation to those conditions for which they were designed and are appropriate, the risk of driver misuse remains.

A shot across the bow to everyone: create better ways to make sure AV systems are only used where appropriate.  Autopilot, for example, wasn’t supposed to be used on a highway with crossroads.  The simple fix: have Autopilot check against the GPS and map data to make sure the car is on a road where it can safely operate.  If there is anything a State or local government might do, it will be to institute ‘no-AV’ roads – or ‘AV only’ lanes once these cars become widespread.  Like speed limits and limits on where trucks can use their engine brakes, municipalities will identify stretches of road where it is infeasible to use an AV below a certain SAE level.  (For those of you who don’t recall the very brief mention in an earlier article, the SAE levels of autonomy are what NHTSA uses to class the capabilities of AV systems.  Most consumer vehicles, including Teslas, are Level 2: limited steering and throttle control, with the driver required to be engaged at all times.  Level 5 is where the car is assumed to be at least as capable of driving itself as a human driver.)
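
That “simple fix” could be as blunt as an operational-design-domain (ODD) gate that consults map data before allowing engagement.  The road classes, the SAE-level cutoff, and the function names below are hypothetical, made up for illustration – Autopilot’s real engagement logic isn’t public:

```python
# Hypothetical operational-design-domain (ODD) gate: only allow engagement on
# road types the system was designed for.  Road classes, the allowed set, and
# the SAE-level cutoff are all invented for illustration.

ALLOWED_ROAD_TYPES = {"limited_access_highway"}   # e.g. divided highways with ramps only
MIN_SAE_LEVEL_FOR_CROSS_TRAFFIC = 4               # assumed requirement for roads with crossroads

def can_engage(road_type: str, sae_level: int) -> bool:
    """Gate engagement on what the map says about the current stretch of road."""
    if road_type in ALLOWED_ROAD_TYPES:
        return True
    # Roads with cross traffic (like US-27A) would need a far more capable system.
    return sae_level >= MIN_SAE_LEVEL_FOR_CROSS_TRAFFIC

# A Level 2 system on a highway with crossroads simply refuses to engage.
print(can_engage("highway_with_cross_traffic", sae_level=2))   # -> False
print(can_engage("limited_access_highway", sae_level=2))       # -> True
```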

Then came the recommendations.

To DOT and NHTSA, they said that the parameters for data collection needed to be set.  The upshot is that, at the moment, if regulators want the data from an AV, they have to get it from the manufacturer.  Most AVs keep a form of black-box log of what the vehicle was doing, but only the company that made the vehicle can access it.

To NHTSA in particular, they recommended guidelines on making the systems more restrictive about when they can be used: “…incorporate system safeguards that limit the use of automated vehicle control systems to those conditions for which they were designed.”  In other words, rather than relying on each owner to know the limits of their vehicle’s ability to operate, the vehicle should only operate where and when it’s supposed to.

Then to automakers and their representatives, they recommended robust driver-engagement monitoring, the system safeguards mentioned to NHTSA, and industry-wide advocacy for those safeguards.

While human-computer interaction standards have been evolving for decades, there is no industry-wide practice or regulation for how an AV system is supposed to interact with riders, and in any situation where an AV system is used in a context it isn’t meant for – like using a driving system designed only for interstates on a highway with crossroads – it is currently the driver’s responsibility to know the limitations of the vehicle.  How much of this knowledge burden is acceptable is still up for debate.  According to an internal study, 20% of the questions asked on the Subaru help line are answered in the manual.  It’s a constant headache for engineers to make systems that are understandable and robust against abuse, and even more so for safety-critical systems like cars.

Even if the legislation giving regulators the authority came through tomorrow, it would take years to find the balance.  Too many messages are distracting and annoying.  Having the car refuse to go into autonomous mode could cause the very distraction it’s trying to avoid.  Filling a car with cameras to make sure people are watching the road is a big privacy risk, and the NHTSA guidelines even acknowledge this.  On the flip side, not everyone is going to pay attention to the instructions.  People will drive badly even when they aren’t driving!  Until AVs can take over completely, there will be risks to mitigate, and how we do that is going to be a very long and difficult negotiation, both within the industry and between carmakers and regulators.  A fatality always brings attention, and at least in this case there was no direct contraction or expansion of progress on AVs – just some analysis suggesting a way forward while regulators wait for legislatures to catch up.

Join me next time for consumer blowback and the answer to a very important question: Have improved software and interfaces helped combat distracted Autopilot drivers?  If you’ve been watching the news lately, you know the answer already!

 

Autonomous Car Accidents and Policy Implications – Part 2

Welcome back to the “driverless vehicle gets into an accident, then what happens?” discussion.  For our second example, let’s look at how even when an autonomous car is completely stopped, it’s news when it’s in an accident.

Google/Waymo cars have been hit at intersections often enough, but that hasn’t made the news in quite a long time.  Instead, we’ll be looking at the Hop On shuttle, a Navya bus sponsored by AAA, the Regional Transportation Commission of Southern Nevada (RTC), and Keolis North America, a company that provides rail and bus services in the US.  Rides are free and the route is a small loop: three blocks one way, turn, one block, three blocks the other way, turn, and then back to the start.  The vehicle is an autonomous bus with no pedals or steering wheel, but it does have an emergency stop and a joystick inside, as well as interior cameras for remote operation.  One of the fun features is that the bus is symmetrical, so it doesn’t ‘back up’ so much as reverse which direction it considers ‘forward.’

A fun, free demo with minimal risk, although there is no actual ‘driver,’ only an attendant who can stop the vehicle in emergencies and do limited piloting.  The attendant is mainly there for those emergencies and for cases where the bus hits a situation its software hasn’t been set up to deal with, like a broken traffic light.  I mentioned the issue of deploying a completely driverless vehicle in the previous post, though in the context of unrestrained city and highway driving.  In the case of an 8-intersection loop running only during the daytime, it’s unlikely that there would be any serious issues, not least because the algorithms are mainly optimized for highly controlled and limited situations like this one.  I believe most of these buses are running as shuttles on private property.

What occurred was that on the first day the shuttle ran (11/8/17), a truck was backing up and didn’t see the bus.  There was traffic behind the bus, so it couldn’t back up to get out of the truck’s way.  (I should mention, however, that witnesses say there was more than 5 meters of space behind the bus, so it likely could have backed up but was prevented from doing so by a safety precaution that limits how close it can get to other cars.  These kinds of ‘boxes’ are quite common, and are a result of understandable caution on the part of engineers, stemming from the fact that sensors don’t always get distances quite right.)  The bus even blew its horn and got no response!  The truck hit the bus, denting the bus’s fender, but there were no injuries or other harm.  At no point did the bus move during any of this.  Thus it was ruled the fault of the truck driver (who was even cited with a misdemeanor), and if this were not an AV, the collision would have been just another day in the city.
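
The ‘box’ in question probably boils down to a conservative minimum-clearance check along these lines.  The clearance requirement and sensor-error padding are numbers I’ve invented for the sketch, not Navya’s parameters:

```python
# Toy version of the safety "box" described above: the shuttle refuses to
# reverse unless there is a comfortable margin behind it even in the worst
# case.  The margin and padding values are invented, not Navya's.

MIN_REVERSE_CLEARANCE_M = 3.0    # required free space before reversing
SENSOR_ERROR_MARGIN_M = 3.0      # assumed worst-case error in the rear range estimate

def may_reverse(measured_gap_m: float) -> bool:
    """Reverse only if the gap is safe even when the sensors are pessimistically wrong."""
    worst_case_gap_m = measured_gap_m - SENSOR_ERROR_MARGIN_M
    return worst_case_gap_m >= MIN_REVERSE_CLEARANCE_M

# Witnesses reported a bit over 5 m (about 20 ft) of space behind the bus.
print(may_reverse(5.5))   # -> False: legally blameless, but arguably not the safest choice
```

Loosen the padding and the bus backs away from the truck; keep it and the bus sits there and takes the hit.  That trade-off is exactly the standards question raised below.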

It was an AV, however, and that led to journalists and major news outlets like the Washington Post producing headlines like “A driverless bus got into a crash during its first day on the job” and, at the Guardian, “Self-driving bus involved in crash less than two hours after Las Vegas launch.”  In both cases the articles themselves were entirely factual, but the comments on the articles suggested that quite a few people read no further than the headline.  It also resulted in an NTSB investigation.  I have tried to find out what the result was, but have been unable to find a reference to it on the NTSB web page.

While for the most part there was no result apart from outrage-inducing headlines, a good question was brought up during this process and then lost in the usual fight over whether AVs are good things or not.  It’s that – as I have noted before – there are no well-defined and widely adopted standards in the industry.  Is 20 feet (the distance between the AV and the car behind it) plenty of room to back up, or little enough that the bus was right to decide it couldn’t?  Was endangering the occupants by inaction the right thing, or should it have taken the small risk of action and backed up, accepting the possibility that its sensors were wrong and the car behind it was closer?  Jim McPherson, a lawyer who has set up a firm to deal with AV law, makes the excellent point that the bus avoided liability by not moving, but in doing the perfectly legal thing it may not have done the safe thing.

The difficulty for AV companies is real; the Google car from example 1 got into an accident shortly after the company implemented a function that mimics human drivers but has nothing to do with classical collision avoidance or rules of the road.  The current wisdom is to protect the occupant of the vehicle and to follow the law.  The first is because everything outside the vehicle is seen through fallible sensors, so the only human the car knows for certain exists is the one inside – given a choice, it will always act to preserve that human.  Following the law perfectly, meanwhile, may not save the most lives and money, but it means the makers can never be held legally liable.  It may not seem very nice, but they’d argue that they have to protect themselves, and if lawful behavior isn’t for the best, then change the law; don’t expect an axiomatic system like a machine intelligence to bend the law, because if it does it once, it’ll do it again, perhaps in less beneficial circumstances.  Better to err on the side of over-caution and have larger safe-distance boxes than to possibly run into something or someone.  Sensor ghosts are real problems, and sometimes they aren’t even ghosts, as we’ll see in the next two or three examples.

Autonomous Car Accidents and Policy Implications – Part 1

Autonomous vehicles are a hot item these days.  When I started graduate school, examples using autonomous cars were always the go-to thing in business coursework and in a lot of robotics courses.  It makes sense.  Up until very recently, robots were indoor and industrial, or else low-autonomy pets.  Autonomy and the algorithms that make it possible just weren’t ready for the hectic, difficult environments of field robotics (field robotics, per the Field Robotics Center at Carnegie Mellon, is “… the use of mobile robots in field environments such as work sites and natural terrain, where the robots must safeguard themselves while performing non-repetitive tasks and objective sensing as well as self-navigation in random or dynamic environments.”).

Autonomous vehicles are by no means new to the robotics industry; we’ve been putting them out there for the military for over a decade.  What has changed in the last five years is sensors, algorithms, and processing power.  It’s now feasible to fuse multiple sensors’ data into a single environmental view, then process that view to classify features and vectors and say ‘this is a stop sign’ or ‘this is a pedestrian.’

Now every startup and their brother wants in because we reckon every car will be autonomous in a few years and that’s a BIG market to capture.  Not just a big market, but a lot of work!  It may seem as if having tens of startups and twenty or thirty major companies all competing for the space would mean few winners and a lot of losers, but you only need to come up with one novel piece, one good solution to a single previously intractable problem.  Then you sell it on and make a lot of money from the big players.  This isn’t the dream, but it’s a good goal.  The dream, of course, is to find a general solution and beat the big players by having a system that works well and works all the time.

But I digress.

That’s a quick look at the business and tech side – more on that as we continue – so let’s have a glance at policy.  To do so, it’s useful to go through examples of news items from the last year or two involving autonomous vehicles and how they’ve been misbehaving.  Good press is important, but bad press is what can turn a utopian dream of the car of the future into another Dymaxion.  I’ll be doing the examples in order of severity (which, funnily enough, is almost chronological from earliest to latest) and talking about what went wrong and where each item lies on the range from the unstoppable progress of an oil rig to the complete obliteration of an industry, Hindenburg-style.

Example 1: Google Car Fender Bender with a City Bus

In 2016, a public bus collided (video) with a self-driving Google car (this was before the unit was renamed Waymo).  As you can see from the bus dashcam, the bus was driving along and the AV (autonomous vehicle) was hugging the right-hand side of its lane.  The car moves back out into the driving lane and the bus hits it.  The bus was going about 15 mph and the AV about 2 mph, so the damage to the car was minimal.  This was hailed in the tech press as a crash with “serious implications,” but the company and experts felt it would barely hinder development at all.  Both views were ultimately right, and with good reason.

The “implications” part was true.  It was the first time an AV could be considered to have caused an accident by its own direct action.  Up until then, most accidents involving AVs were rear-end collisions caused by the AV stopping at a light when the person behind either wasn’t paying enough attention or expected the AV to keep going.  This is because AVs are trained to follow the law very carefully.  There’s a lot of fudge factor in how humans deal with stop signs and traffic lights, but an AV will (barring the inevitable edge cases) always come to a complete stop at stop signs and stop on yellow at traffic lights.

How, then, did this car end up doing what it did?  Fudge factors baked in.  Google had just implemented a new feature.  They had noticed that when making a right turn, humans often pull to the right side of the driving lane to give as much room as possible to people going straight, which prevents pileups at turns where there isn’t a turn lane.  All well and good so far, and something a lot of us do.  In this case, the AV moved over to the right in preparation, and then its sensors caught sight of an obstruction: sandbags around a storm drain at the corner.  The AV is now stopped while traffic goes around it.  There’s a gap; a bus is coming.  The car assumes it has right-of-way because it’s in front of the bus.  The test driver – all AVs have to have a human in the loop, since even the most optimistic company is full of engineers who know what level of autonomy we’re actually working at – agreed.  There were no distracted humans in this case; the driver saw the bus and reckoned it would slow down to let the AV go around the sandbags.  The bus didn’t, and hit the car.  The car was at fault, but in a very murky situation.

The policy implications and public outcry were minimal, possibly because Google admitted “some responsibility.”  Consumer Watchdog called for there to be a police report every time an AV got into an accident.  They wanted the technical data and videos to be released every time there was an accident, and regulations requiring human drivers behind the wheel.  All reasonable demands from a regulatory standpoint, and the industry would have been wise to accede.

To use this as a segue into a quick point before going on to example 2: this is an example of a company doing a good job on PR while the industry falls down.  Regulation isn’t just about protecting citizens from companies (although that’s the primary goal).  It’s also about protecting companies from their own mistakes.  Standards and regulations are vitally important to build alongside technology, since they give industry and engineers something to refer to both as a precaution and in retrospect.  Take the requests above: release of data about crashes and a driver behind the wheel.  Both are good ideas that most companies already follow voluntarily.  The problem: if it’s not required, what happens when a company decides not to have a human driver to supervise and intercede during road testing?  The current state of the technology makes that seem highly unlikely, but it only takes one bad actor.

An axiom of regulation is that proactive regulation is always less onerous than reactive regulation.  Before a chemical plant has a spill, the industry can put in place standards that increase reliability and decrease risk, and lobby for regulations that work with those standards – so that if something does happen, there’s a process, and the company can say it complied with this and that regulation and standard.  If there is none, then the public calls their representatives, and the government puts in place the strongest regulatory burden it can to show the public it’s doing something.  Contrast the fracking industry in New York, which tried to fight regulation and is now effectively banned statewide, with the industry in California, which wrote its own regulation – tough rules that not only insulated the companies by showing they were complying with strict regulation, but also reduced competition by making it more expensive to enter the market.  Buying all that safety gear is a barrier, but in this case a necessary one.

There are regulations in play – and some of the State regs are why Uber is testing in Arizona and Pennsylvania instead of California.  Without a coherent policy on a national level, however, the industry remains vulnerable.  The National Highway Traffic Safety Administration (NHTSA) offers guidelines and technical help to States, but they have no backing in law.  Bills in the House and Senate are stalled and unlikely to pass soon.  Standards exist, but remain in an early phase.  As long as there are no comprehensive and/or binding rules, there is little protection should the public turn on the AV industry, and little that industry or regulators could do proactively if an AV company in a State with minimal regulation fields a dangerous vehicle.

Next post, I’ll go on to example 2 – the first Tesla fatality, and onwards to further examples and a discussion of what regulations are currently in place, regulations being proposed, and how they all are affected by technical progress and capabilities.

 

Obligatory Introductions

Introductory blog post, here.  Blah blah blah and all that.  I need to keep the saw sharpened and I have what I think is a unique perspective.  If you’re here, you’ve probably seen the portfolio, but let’s do introductions and then we can get on with a really fun second post – jumping into the deep end with autonomous car fatalities and how each one matters to the industry and society.

I would like to begin by stating why I haven’t done this before.  It’s not because I can’t, but because I always felt I wasn’t the best qualified to speak.  Well, it’s time I got over that.  I hold a Master of Science in Robotic Systems Development, and I’m a past and future government employee, an amateur political analyst, and a newbie entrepreneur with a business degree alongside his Bachelor of Engineering.  I’m sure there are many analysts out there, but there’s only one other autonomous-robotic-systems professional I know personally who is as deeply interested in the policy and industry of robotics, and that’s Professor Bourne of my alma mater, Carnegie Mellon.  He tends to be more cautious about offering opinions on technology, while I’m usually more circumspect in my political analysis.  It’s a difference of approach that’s probably down to our backgrounds.  I’ve worked in government in the Midwest, where the prevailing opinions of my co-workers were quite different from my own, and we were all subject to the Hatch Act, so we dealt in issues rather than personalities.  I rarely meet a policy wonk who cares to learn how autonomous systems really work, and on the flip side, it’s vanishingly rare to meet a roboticist (especially a robotics STUDENT) who cares much about policy unless it hits them directly (immigration policy is an especially looming issue when more than half your Master’s class is from overseas).

So, great.  I’ve got experience in several fields that touch each other, I’ve been around the block in government, and I have an affinity for both the tech side and the policy side.  That’s what this blog will mostly be about.  I’ll be trying to explain why the tech is important to the policy people and how policy impacts the tech people, and to find a level of abstraction where the two can meet with as much understanding as possible.  I’ll try to cite a few sources as I go and – weather permitting – put out a piece regularly.  I may occasionally do a post that isn’t in the core theme of tech-plus-policy news and how one impacts the other, but that’s where this will mostly be going.

Illumination!

PMC