Autonomous Delivery Crawlers: A Policy Perspective

Much has been written about the possibilities of full-sized autonomous vehicles (AVs), including by me. There’s a lot of material covering delivery drones, or small unmanned aerial vehicles (sUAVs) – which I haven’t covered here but have spoken about in presentations to engineers at CMU. The thing is that most packages, takeout, or other deliverable items are much too small to waste an entire car on, and a delivery drone is sharply limited by its energy requirements and capacity. AVs are better suited to big things, and sUAVs to rural and suburban deliveries where the origin or destination is relatively distant from a main road.

Let’s say you’re ordering takeout from a restaurant that’s within 5 miles over easily traversed roads or sidewalks, or getting a package from a depot within 10 miles of where you live. An AV is an extravagance, and if everyone used sUAVs, the air would be full of them. Additionally, if there were large buildings around and the sUAV traffic were limited to flying between them, the airborne traffic jams could get severe. If you’re using a delivery robot, it’s going to be a crawler. I use the term not because they’re all on treads – most of them are wheeled – but because of their relative speed and location. It’s a good word to differentiate autonomous delivery robots that are smaller than a car and travel on land from their airborne and human-occupiable counterparts. They’re little critters that run around with pizzas and tacos inside, ferrying small packages from one place to another. Some, like the Robby 2, are quite large with a range of 20+ miles; others, like the Starship, are shorter-haul robots that can fit a couple of pizzas.

Images courtesy of Robby and Starship

Starship Technologies has been especially quick off the mark when it comes to regulation. By July of 2017, Starship had succeeded in getting five states to pass legislation allowing delivery crawlers to operate on sidewalks. Not only that, but they took things a step further – getting the legislation to include a maximum weight limit that froze out one of their competitors. Usually an industry has to be much more mature before companies start weaponizing regulation to stop competitors, though it may be a coincidence in this case. Then again, it may not. Starship seems leaps and bounds ahead of anyone else in the media and in the testing sphere, despite not being the first company to test on public sidewalks.

On the flipside, San Francisco almost completely banned crawlers, relegating them to industrial districts, for testing only, at no more than 3 mph, and only on sidewalks at least six feet wide. The regulation also required operators to be within 30 feet of the robot, which all companies already did. This appeared to be spearheaded by Norman Yee, supervisor for SF’s 7th District. Supervisor Yee has clashed with tech companies in the past and has expressed concerns about traffic, job loss, and wealth inequality with regard to autonomous machines. This didn’t stop Starship, which pivoted in SF from delivering from businesses to private homes and offices to making deliveries within office parks. That includes traveling unsupervised on the 4.3-acre Intuit campus in Mountain View, where the robot delivers food and office supplies. Starship had good reason to be unruffled by the ban in one city given how welcoming other states had been, and given their reception in DC, where they’d been testing since 2017. The delivery robot industry recently got legislation passed in DC to permanently allow them to operate.

In fact, ‘unruffled’ is Starship’s attitude towards everything in the public space. While other companies tend towards quietly testing and making deals with service providers to deliver their food, Starship’s leadership enjoys showing how chill they are about everything. When reports surfaced about people kicking the robot, they responded that if that was how people needed to handle their anger management, then fine. The robot was tougher than a foot, and if things got really rough, its cameras, siren, and GPS meant that no one was getting away with beating up the robot and not paying for it.

Though San Francisco’s reaction was precipitous, it may not have been unjustified. The public has shown very positive attitudes towards delivery robots, especially ones that assist human carriers. If we use helper robots instead of fully autonomous delivery crawlers, that will certainly add to sidewalk congestion. If we go with mostly crawlers, there will still be an uptick in sidewalk usage, since many delivery services currently use bicycles, and there will likely be a general increase in the use of delivery services since they will probably become cheaper and more convenient. It’s a pretty clear case of the Tragedy of the Commons. Absent a clear-cut reason not to – like a law or obvious liability issues – the sidewalk is taken up by more people and robots over time and eventually becomes difficult to navigate. At the moment, it seems very likely that humans will win this fight wherever it crops up, because sidewalks were obviously meant for pedestrian traffic, and small, slow-moving robots won’t get priority over that.

The question is: where do the robots go? Like the shared electric scooters now proliferating in cities, delivery robots are often too fast for pedestrian paths (or take up too much sidewalk space) and too slow for car lanes or even bike lanes. Bike lanes themselves have had a very rocky climb towards prevalence, remaining sparse, dangerous, or nonexistent in many municipalities. I’ve lived in two suburbs that didn’t even have sidewalks. How, then, can robots expect to get their own artery when human-piloted vehicles are still waiting for their bike and scooter paths?

The answer may lie with another set of robots. The specter of autonomous vehicles looms in almost any discussion of future transit, and here they are again. We’ll assume that roads won’t get wider, nor will people stop walking on sidewalks. That leaves reallocating street space.

The promise – and threat – of AVs is that they will reduce or entirely remove the need for on-street parking, since they can drop people off and then go on to their next task or a remote parking location. Let’s assume this is true. In that case, all the on-street parking will be up for grabs. I expect that the immediate demand will be for bike lanes, but with the loss of parking revenue, cities may have other ideas. They might require AVs to report how much they drive and then pay a toll based on their city street usage. However, that might not work out to be enough. Cities might also put in a toll for small delivery robots and let them have the on-street parking lane instead of cyclists and scooters. It’s not likely, but it’s possible in some places. Maybe AVs will somehow reduce traffic so much that multiple lanes will be opened up for bikes and robots. A lot of things can happen.

The truth is that there are many possible scenarios, but no likely ones. The Boring Company wants to put cars on skates underground and skate them from garages to hyperloop arteries. I was going to observe that since robots don’t need air, sunlight, or as much overhead space, they’d go well in the tunnels, which would also help protect them and their cargo from the weather. However, it seems like Elon Musk wants to put us in the tunnels and the robots up on street level. Personally, I think that sounds like we’re training the robots to seek a dystopian future where they get to have parks and we’re stuck living underground, but that’s just me.

I remember a brief span after wheeled bookbags first came out. Everyone started wheeling their bags. Then it turned out that this put way too much extra stuff on the sidewalk, annoyed people, and generally got in the way so much that everyone went back to backpacks on backs. Sidewalks are congested in a lot of places, and unless/until these robots become quite inexpensive, they’re going to have trouble operating in the kinds of places where businesses can realistically buy one. Giving them their own space when bikes and scooters can’t get it in a lot of places will only increase friction, and the hypothetical space savings from AVs will probably go to bikes and parkland rather than convenience droids. They make a lot of sense for indoor use, but I think the cities – like rural areas – belong to the sUAV. In well-run suburbs, there might be a cost-effective use case: low foot traffic combined with a highly centralized market district could make delivery robots worth deploying. I lived in Bloomington, Indiana for a few years, and given the quality of the food on the main street and the relatively compact residential section with well-laid-out, underutilized sidewalks, a handful of delivery robots shared by the various restaurants might do well. I’m not expecting to see them out in Odon or New York except as novelties, though. They’d run out of sidewalk in Odon and be run off the sidewalk in New York. Then again, New Yorkers might look at robots moving consistently with traffic, then at the tourists blocking everyone’s path, and decide they like the robots better.

Robots and Boundaries Part 3: Abuse

In the prior two sections, we looked at how robots transgress human boundaries. Today, how about turning that around? It’s a mainstay in human-robot interaction (HRI) that robots get bullied. Sometimes it’s a humorous way of testing a robot’s capabilities, as with Boston Dynamics’ designated ‘robot bully.’ Sometimes it’s out in the field, as with the famous case of the mall patrol robot deployed for an HRI paper on how people treat robots that are acting autonomously. That paper spawned further algorithmic work on finding paths that would avoid bullies. Further studies were conducted on the bullying of robots, which suggested that robots were bullied because they were human-like or perceived to be affected by the bullying. At the risk of rehashing the end of the previous part, this raises the question of whether a robot can be considered to be abused, and what the cutoff is for causing physical harm to an entity. We condemn animal abuse because animals can feel pain, but as with simulated intelligence, when are simulated feelings effectively the same as organic ones?
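
To make the bully-avoidance idea concrete, here’s a minimal sketch – my own illustration, not the published algorithm – of how a planner might route around likely bullies: treat the space as a grid, attach an estimated “abuse risk” to each cell, and let the path cost penalize risky cells so the robot detours when it can. The grid size, risk values, and weighting below are invented for the example.

```python
# Minimal sketch (not the published algorithm): Dijkstra over a 4-connected grid
# where cells with a high estimated "abuse risk" (e.g., clusters of unsupervised
# children) are penalized, so the planner detours around likely bullies.
import heapq

def plan_path(grid_size, start, goal, risk, risk_weight=5.0):
    """Return a low-cost path where cost = distance + risk penalty."""
    rows, cols = grid_size
    frontier = [(0.0, start, [start])]
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                step = 1.0 + risk_weight * risk.get((nr, nc), 0.0)
                heapq.heappush(frontier, (cost + step, (nr, nc), path + [(nr, nc)]))
    return None  # no route exists at any cost

# Example: a 5x5 plaza with one high-risk cell the robot should skirt around.
route = plan_path((5, 5), (0, 0), (4, 4), risk={(2, 2): 1.0})
print(route)
```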

Image courtesy of ATR Intelligent Robotics and Communication Laboratories

For the time being, the better policy question is: should it be – or is it already – an offense to stop a robot from doing its job? Let’s take a delivery crawler, one of those little critters that have taken to roaming the streets of San Francisco bringing people pizzas, or one of those upright bins that trundle down the hallways of particularly swanky hotels carrying room service and extra pillows. What if someone won’t let the delivery bot go by? Eventually the food will get cold and the robot will be missed – presuming it has no way to alert its owner that something has gone wrong in the interim. I think it can be taken as given that stealing from a delivery crawler is theft from whoever bought the contents, just like stealing from a delivery truck, so we can skip that. Besides, the only speculation about theft so far has come from media outlets and the people that made the robots. There have been no known cases of a delivery crawler being robbed by an actual malefactor. In all these cases, interfering with the robot materially injures both the buyer of the food and the owner of the robot, who has probably lost business because the robot was late or the food was missing.

Delivery robot in a hotel (Mel Melcon / Los Angeles Times). Interesting sidenote: there is a documented case of one of these being accidentally delayed when it was knocked sideways by a passing guest and got confused.

That’s too obvious. What about slowing it down? Just delay it two minutes. Or, to take a different type of robot, what about stopping an assembly robot for a couple of minutes, or getting in the way of a robot that patrols building lobbies offering to help visitors find a specific room? I suppose it comes down – for now – to whether the decrease in efficiency can be proven to have cost something. Eventually, harassing a robot will probably be some kind of crime given how many will be out in the world doing useful tasks, but for the moment – especially if the robot is traversing a public space – I’m not aware of anything that can or will be done to curb such behavior. It all comes down to property damage or some provable loss.

Jump ahead a few years. Delivery crawlers are relatively commonplace and have been programmed to understand that their cargo is usually less valuable than they are. Thus, if they are threatened with the possibility of theft, they will release the hatch rather than be harmed by attempts at forced entry. This adds a shade to the theft concept, I think. The robots may also start being taught how much damage they can expect to take from various implements, making them more likely to surrender to someone with a weapon. At that point, can we distinguish between armed and unarmed robbery of an autonomous machine? Before, it was clear-cut: more harm is done if the robot is damaged in the theft. Now the threat of violence is weighed into the robot’s behavior, and so that threat may have to be considered part of the crime. This loops us back to part 1 of this series. In some municipalities, it’s legal to use force to defend and retrieve property. I will be very interested to see the first case of someone claiming that this right extends to delivery robots and arming their robots so they can defend themselves against detected attempts at theft.
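
Here’s a hedged sketch of what that surrender logic might look like, assuming the robot can classify a threat and estimate repair costs; the threat categories and dollar figures below are invented for illustration, not taken from any real product.

```python
# Hypothetical surrender policy: weigh the value of the cargo against the damage
# the robot expects from a detected threat, and release the hatch when resisting
# would cost more than losing the cargo. All numbers are illustrative assumptions.

EXPECTED_DAMAGE = {        # estimated repair cost if the robot resists, by threat
    "hands": 50.0,
    "blunt_object": 400.0,
    "power_tool": 1500.0,
    "firearm": 3000.0,
}

def should_release_hatch(cargo_value: float, threat: str,
                         robot_value: float = 2500.0) -> bool:
    """Return True if surrendering the cargo is cheaper than resisting."""
    damage = min(EXPECTED_DAMAGE.get(threat, 100.0), robot_value)
    return damage > cargo_value

# A $20 pizza isn't worth a cracked shell from a crowbar:
print(should_release_hatch(cargo_value=20.0, threat="blunt_object"))  # True
# But holding out against bare hands over a $200 package may be rational:
print(should_release_hatch(cargo_value=200.0, threat="hands"))        # False
```

Note how a policy like this is exactly what makes the armed/unarmed distinction matter: the robot’s behavior, and therefore the outcome of the theft, changes based on the weapon it perceives.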

This series has been a bit more pie-in-the-sky than my usual, thinking 10+ years ahead rather than 2-3. In my next piece, I’ll get back to the present and near future with a look at delivery crawlers and how they’re being treated by a present-day government.

Robots and Boundaries Part 2: Theft

In the last section of this series, I looked at security robots that might be allowed to physically restrain and assault humans. This crossed the boundary of personal space and personal safety in the name of enforcing the rules of a facility. Today, I’ll look at a different boundary: property. It seems a given that robots shouldn’t commit theft, and yet there are many cases in which they might. Whether by malicious use, poor programming, or misunderstood conditions, a robot could steal anything from fruit to itself – assuming a robot can steal itself.

The situation that is obvious both for its inevitability and its result is someone telling a robot ‘fetch that’ and indicating something that doesn’t belong to them. That’s not much different from using a long pole to snag the same item and will no doubt be treated as such legally – though of course the news will glom on because of the novelty, much as if someone had trained their dog to shoplift. Upping the stakes, let’s say the robot in question is built along the same lines as an autonomous security robot. We saw that – in simulation – one of the functions of these robots is to confiscate dangerous or stolen property. Let’s say someone tells this robot ‘that object is stolen, retrieve it’ when in fact the object is the lawful property of the person who has it, and the person commanding the robot wishes to steal it. The robot will at first demand the object and, when it isn’t forthcoming, escalate per its table of prescribed behaviors until it uses force. Now, it is predictable that this could occur, so the robot’s operator is still responsible for everything it does. I think it can be assumed that any unlawful behavior a robot is ordered to commit will be charged to whoever gives it the command, though the manufacturer will no doubt also take a lot of heat until they find a way to make the robot able to distinguish between a lawful and an unlawful order.
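
For illustration only, here is one way an escalation table like that might be encoded; the stages, wait times, and the idea that forceful confiscation is the final rung are my assumptions, not drawn from any real security robot.

```python
# Illustrative sketch of a prescribed-behavior escalation table. Nothing in this
# table checks whether the original "that object is stolen" claim was true before
# the robot reaches the forceful step -- which is the legal problem in the scenario.
from dataclasses import dataclass

@dataclass
class EscalationStep:
    action: str          # what the robot does at this stage
    wait_seconds: int    # how long it waits for compliance before moving on

ESCALATION_TABLE = [
    EscalationStep("verbal_request", 15),
    EscalationStep("verbal_warning", 10),
    EscalationStep("alert_human_operator", 30),
    EscalationStep("attempt_confiscation", 0),   # the step where force enters
]

def next_step(current_index: int, complied: bool):
    """Return the next step, or None once the subject complies or the table ends."""
    if complied or current_index + 1 >= len(ESCALATION_TABLE):
        return None
    return ESCALATION_TABLE[current_index + 1]

print(next_step(0, complied=False).action)  # "verbal_warning"
```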

Moving along to inadvertent theft, then. An agricultural robot is picking fruit. It fails to mark the boundary between one farmer’s field and another, and it picks the neighboring farm’s fruit. The owner is puzzled at how much the robot brings back, but doesn’t connect the dots until after the fruit has gone to market and the neighbor calls the police upon finding all their fruit missing. (If it’s a particularly intelligent robot – in some ways – it goes so far as to harvest two different kinds of produce, which would alert its owner much sooner.) When the owner either confesses or is confronted, they refuse to pay damages because ‘the robot did it.’ A business is responsible for the actions of its employees or equipment while they’re doing their job, so the farmer is forced to pay. However, the farmer turns around and sues the company that made the robot, alleging that the robot was defective. My guess is that the farmer would win that. So far it’s all civil, but the thing is that the robot committed a crime as well. An employer may be financially on the hook, but they can’t be criminally charged unless they ordered the employee to commit the crime, which didn’t happen in this case. A pet owner can be held liable for the actions of their animal, but this is a purposeful thinking machine. Much like in the case of the guard robot committing assault, there’s no clear answer here. It was a mistake/malfunction, but ignorance of the law is often dismissed as a mitigating factor.
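
As a sketch of the boundary check this hypothetical picker skipped – assuming the owner’s field is registered as a simple polygon of survey coordinates, with the numbers below made up – a point-in-polygon test before each pick would have kept the robot on the right side of the property line.

```python
# Hedged sketch of a geofence check: before harvesting, test whether each fruit's
# position falls inside the owner's field polygon (ray-casting point-in-polygon).
# Field coordinates are hypothetical.

def inside_field(point, field):
    """Ray-casting test: is (x, y) inside the polygon given as a vertex list?"""
    x, y = point
    inside = False
    n = len(field)
    for i in range(n):
        x1, y1 = field[i]
        x2, y2 = field[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

OWNER_FIELD = [(0, 0), (100, 0), (100, 50), (0, 50)]  # hypothetical survey coords

def can_harvest(fruit_position):
    # Refuse to pick anything outside the owner's registered boundary.
    return inside_field(fruit_position, OWNER_FIELD)

print(can_harvest((40, 25)))   # True: inside the owner's field
print(can_harvest((40, 75)))   # False: the neighbor's row, leave it alone
```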

When can a robot be considered responsible for its own actions?

Up until now, machines have been seen to malfunction, be misused, or be defective. The maker of the machine is sometimes at fault, and sometimes the owner. We’re swiftly reaching a point, however, where robots will have a form of intelligence that is opaque to us in much the same way our own intelligence is. We will be able to make educated inferences based on behavior, and we’ll run tests, but we won’t be able to look into the robot’s mind the way we can with other programmed machines of today. Papers are being written on how to reverse-engineer neural networks, and for the most part we can still say ‘this algorithm is doing this with this training set.’ It’s going to get harder as AIs become more complex, and eventually we’ll reach a point where our AI systems become irreversibly black-box. There are already articles out there suggesting that some algorithms that live entirely in software have reached this point, citing examples like the algorithms that perform high-frequency trading. If all we can do is identify a motivation, and we are unable to map out the programmatic sequence that led to a robot taking an action, then the robot may as well be alive.

I’m drifting somewhat from the topic, but that’s what any personal, legal, or moral boundary transgression comes down to: was it intentional, and can the transgressor be considered able to understand the concept of transgression? A cat understands that we don’t want it to knock the glass off the table, but it does it anyway. The concept of negative reinforcement for a robot is a fascinating one and something I may revisit in another article on the crossing of a different boundary – personhood.

Bonus example: let’s say your autonomous car drives off without you and crosses state lines with your recently purchased liquor in the trunk – which is illegal if it goes to, for example, Pennsylvania. Did your car steal itself and the booze, and did it transport alcohol without a permit, or are you liable for the misbehavior of your vehicle?

Science fiction concept: picture, if you will, a fully automated still where robots gather the ingredients from a small orchard and vegetable garden, put them in the fermenter, and gather up the bottles. Maybe the entire concern is extremely green and is powered using byproducts of the fermentation, so there is no artificial power source – just sunlight and rain. Prohibition is reinstated and the still is shut down. A new owner takes over, and at some point the system reboots without them knowing about it. Is it bootlegging if the liquor literally makes itself?