Robots and Boundaries Part 1: Security

Most of my posts up until now have been about autonomous vehicles (AVs) because they have been at the forefront of the news, and thus in the minds of regulators. However, there are a variety of other autonomous systems operating out there, and some of them are doing things that will almost certainly require more oversight than they currently have.

I’m going to focus in this series of articles on how a robot can cross a legal or ethical boundary, whether by accident or by design. Let’s start with a type of robot that was in the news last December – security robots.

Last December, the news broke that a Knightscope security robot was being rented and used by the SPCA in San Francisco to deter homeless people from settling in the area. Knightscope disputed the allegation, claiming that their robot was only being used to help secure the SPCA employees and environment. This didn’t prevent the SF government from threatening a $1000 per day fine if the robot continued to operate on the sidewalk without a permit, nor did it stop headlines like “Robots Are Being Used Against San Francisco’s Homeless.” The vox pop comments on the issue were very strongly against the use of a robot to chase away homeless people even if it helped the employees feel safer and lowered the rate of car break-ins and vandalism. With the backlash growing, and the robot being attacked by people on the street, the SPCA decided to return their K5 unit.

More thoughtful articles about the ethics of surveillance and security robots followed in the wake of the events. Their points are well taken. The Knightscope robot was unarmed and unable to take any physical action against a person. It wasn’t even allowed to get closer than a few feet without triggering proximity alerts that made it steer away to avoid a collision. So why was it such a controversial tool, if all it did was trundle around sending video back to the security staff at the SPCA?

Well, for one thing it had recognition algorithms. The main use case Knightscope presents is detecting cars that have been on a property for too long, or comparing the people it sees against a list of those either allowed on or barred from the property being patrolled. This means it was ‘remembering’ who it saw wherever it went, creating a database of individuals doing business with, passing by, or otherwise using the space around the SPCA building. Creating a visual library of who goes where is always a tricky thing to do, especially when the SPCA said they were sharing their data with the police department – albeit only when a crime was committed.
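To make that use case concrete, here is a minimal sketch of what a watchlist check of this kind might look like. This is my own illustration in Python, not Knightscope’s software – the names and data structures are assumptions, and it presumes some upstream recognizer has already turned a face or license plate into an identifier.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Hypothetical sketch of a watchlist check -- not Knightscope's actual software.
# Assumes an upstream recognizer has already produced an identifier
# (a face-embedding ID, a license plate, etc.).

@dataclass
class Sighting:
    identifier: str       # e.g. a face ID or license plate
    location: str
    timestamp: datetime

@dataclass
class Watchlists:
    allowed: set[str] = field(default_factory=set)   # staff, regular visitors
    barred: set[str] = field(default_factory=set)    # people barred from the property

def classify_sighting(sighting: Sighting, lists: Watchlists,
                      sighting_log: list[Sighting]) -> str:
    """Log every sighting, then decide whether to raise an alert."""
    sighting_log.append(sighting)        # this is the part that builds the database
    if sighting.identifier in lists.barred:
        return "alert_security"
    if sighting.identifier in lists.allowed:
        return "no_action"
    return "record_only"                 # strangers are still remembered
```

Note that even the “no_action” path still appends to the log: the robot remembers everyone it sees, which is exactly the ‘visual library of who goes where’ concern.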

Second, it was a symbol of surveillance. Similar to a marked car patrolling the street or a camera on every corner, these robots were flat-out showing that everyone and everything near the building was being watched. It wasn’t intimidating just because it was a robot – although its size and shape have been compared to a smartphone crossed with a Dalek – but because it was a reminder that someone was watching. Not just watching, but recording and taking notes.

Image Courtesy of Knightscope Gallery

Ultimately, this situation was less about the robot and more about the people. For example, some of the algorithms onboard the robot are also supposed to ‘predict’ when crimes will occur. There’s a wealth of studies and literature showing that criminal justice AIs are biased: because they are trained by humans, on human-generated data, they absorb the same biases humans have. A criminal justice AI will therefore be about as accurate as a human doing the same work, because the human judgments baked into its decision making lead it to similar conclusions in the future. This has been seen in the use of AIs to predict recidivism in recently released prisoners. If you tell the AI to prioritize preventing recidivism, you increase the number of wrong decisions, because it will bias toward recommending stricter parole for people who would never have re-offended. Tell it to decrease false positives, and more people slip through the net and re-offend. These robots and AIs, then, just scale up an existing system, flaws and all.
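The trade-off in that last point is easy to see with a toy example. The sketch below is purely my own illustration – a handful of made-up risk scores, not any real recidivism tool – and it shows how moving a single decision threshold trades wrongly flagged people against re-offenders who slip through.

```python
# Toy illustration of the threshold trade-off -- not any real recidivism model.
# Each person has a made-up risk score in [0, 1] and a ground-truth outcome.

people = [
    # (risk_score, actually_reoffended)
    (0.2, False), (0.3, False), (0.4, True), (0.5, False),
    (0.6, True), (0.7, False), (0.8, True), (0.9, True),
]

def evaluate(threshold: float) -> tuple[int, int]:
    """Return (false_positives, false_negatives) at a given threshold."""
    false_positives = sum(1 for score, reoffended in people
                          if score >= threshold and not reoffended)
    false_negatives = sum(1 for score, reoffended in people
                          if score < threshold and reoffended)
    return false_positives, false_negatives

for threshold in (0.3, 0.5, 0.7):
    fp, fn = evaluate(threshold)
    print(f"threshold {threshold}: {fp} wrongly flagged, {fn} slip through")
```

At the low threshold nobody slips through but three people are wrongly flagged; at the high threshold only one is wrongly flagged but two re-offenders are missed. The model never gets more accurate – the error just moves onto whichever group the chosen objective tells it to ignore.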

What these examples show us is that people don’t trust robots to enforce the law and provide security in public spaces. Being watched by one makes them feel monitored, distrusted, and afraid of what the robot might do. The flipside is that robots patrolling private spaces seem to be better regarded – possibly because people expect more surveillance indoors, and possibly because the robots seem to be under control when they’re not roaming the streets. One Knightscope robot that fell into a fountain even got a memorial put up by the people who worked nearby.

Let’s take this a step further: robots that actively enforce rather than watch and catalog. An ongoing study puts people in situations where an armed robot asks them to hand over an object, such as a weapon, in order to help maintain peace in a simulated environment. So far it has shown that the more human-like the robot, the more likely someone is to comply with it – which suggests to me that people don’t trust robots that remind them they’re robots.

Looking at an existing robot, there’s the Chinese AnBot, a machine similar to those fielded by Knightscope. It can identify fugitives, decide where to patrol using data gathered on prior patrols, and offer a button people can press to signal an emergency – and it is armed. Its weapon is an electroshock probe of the kind used in riot control, and a human in the loop is currently required to authorize its use. Still, it’s only a minor software change away from being able to enforce laws autonomously.
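To see why ‘a minor software change’ isn’t an exaggeration, here is a hypothetical sketch of what a human-in-the-loop gate often looks like in practice. This is not based on AnBot’s actual software – the flag, function names, and threshold are all my own assumptions – but it illustrates how thin the line between supervised and autonomous use of force can be.

```python
# Hypothetical sketch of a human-in-the-loop gate -- not AnBot's actual software.
# The point: the human check can live behind a single configuration flag.

REQUIRE_HUMAN_AUTHORIZATION = True  # flip to False and the human leaves the loop

def request_operator_approval(incident_id: str) -> bool:
    """Stand-in for paging a human operator; a real system would wait for their answer."""
    print(f"Paging operator for incident {incident_id}...")
    return False  # this sketch defaults to 'deny'

def authorize_electroshock(incident_id: str, threat_score: float) -> bool:
    """Decide whether the probe may be used for a given incident."""
    if threat_score < 0.9:
        return False  # the algorithm itself declines low-confidence threats
    if REQUIRE_HUMAN_AUTHORIZATION:
        return request_operator_approval(incident_id)
    return True  # fully autonomous use of force -- one flag away
```

Nothing about the perception, navigation, or weapon hardware has to change; the difference between ‘a human authorizes’ and ‘the robot decides’ is a single branch in the control logic.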

So far, security robots haven’t crossed any major boundaries, and when they crossed the minor boundary of being too intrusive they were sharply curtailed. They’ve acted as extensions of a human security guard and fulfilled the same function that a multiplicity of highly visible cameras could. A robot that’s armed and programmed to use force when its internal algorithm predicts it’s needed, however, would cross several. Setting aside the ethics of a ‘justice machine’, there’s the big question of legality. In many places, security guards are required to pass certifications and be licensed to use force – and then only on the premises they’ve been hired to guard. Even restricting ourselves to jurisdictions where security guards are authorized to use force – since if a human can’t, we can assume a machine can’t either – it’s still a big question. The first robot that physically assaults someone with the intent of upholding civil security will face immense scrutiny. What does it count as?

I should note that I’m relying mostly on US case law because it’s what I’m most familiar with.  Countries with significantly different cultures may have different mores and legal structures.  Still, I think that we can consider the USA to be middle of the road or even somewhat favorable when it comes to methods by which private property is defended.

The Supreme Court of California has ruled against the use of ‘a device that is without discretion’ in home defense.  A section of the ruling reads:

“Allowing persons, at their own risk, to employ deadly mechanical devices imperils the lives of children, firemen and policemen acting within the scope of their employment, and others. Where the actor is present, there is always the possibility he will realize that deadly force is not necessary, but deadly mechanical devices are without mercy or discretion. Such devices ‘are silent instrumentalities of death. They deal death and destruction to the innocent as well as the criminal intruder without the slightest warning. The taking of human life (or infliction of great bodily injury) by such means is brutally savage and inhuman.’”

But these robots do have discretion.  Their algorithms are sophisticated enough to tell one human from another and to know who is a visitor and who has unrestricted access to a property.  So an autonomous guard is not a booby trap.

Is the robot equivalent to a guard dog?  Like a dog, the robot may use force and can distinguish between individuals, though arguably it’s better than a dog at recognizing specific people and at acting predictably in repeated scenarios.  In the UK, a guard dog must be tied up or under the control of a handler, while in the US every state has its own law.  The general expectation, however, is that a dog can attack trespassers, but the trespasser may be able to recover damages depending on the municipality and whether signs were posted warning that the dog was there.  I don’t think a robot is analogous enough to a dog, since a robot is capable of clearer communication and more predictable behavior.

So the answer is what you might expect: a robot is a robot.  It might sometimes seem like a booby trap, since it’s a mechanical contrivance, and sometimes like a guard dog, because it roams around and can take independent action.  But it’s something unique because of the form its intelligence takes.  It certainly isn’t a human security guard, because it lacks flexible discretion.  It does resemble all three, however, in that it only operates in a set space and (presumably) cannot exercise its authority outside the premises it guards.  Even without specific laws regarding security robots, the case of the SPCA robot shows that governments won’t simply allow robots to leave their designated zones.

A situation: someone is in a place of business, looking at a sign on the wall. They’re holding a water bottle in one hand and, feeling a cramp, they stretch their arm. A security robot sees this and correlates the action and object with previous situations in which vandals spray-painted the wall. It doesn’t contextualize; it just sees a human with a cylindrical object reaching up while looking at a wall. It goes over and tells the human to move along. The human is confused and says they’re there for legitimate business. The robot is no longer paying attention to the object, just to the human not following its command. A human security guard would remain contextually aware even while approaching and would have realized by now that the visitor wasn’t doing anything wrong. The robot follows a pre-planned script and threatens force. The human might leave, angry and intending to complain or withdraw their business. Or they might stand fast, and the robot – still following the script – makes good on the threat, pushing the human or using some other means of restraint or forced removal.

It’s a simplified case, but a possible one. The robot would only be looking for behaviors that correlate with criminal activity. In this case it notes pose, object in hand, location, and that the human is not on its list of permanent building occupants. It wrongly concludes criminal intent and works through a pre-planned list of actions based on the human’s responses. Who, then, is liable? The manufacturer, the person who trained the robot, the owner, or someone else? The robot has assaulted someone without reasonable cause. It has crossed a legal and social boundary based on a misclassification in its algorithm-based decision tree. A human security guard would be fired or disciplined, and the company might have to pay damages. Would the robot face the same, or just be reprogrammed? A robot cannot be penalized the same way a human can. In the case of a human guard, punishment is shared between the guard and the company. Is the guard’s portion simply ignored, or does the company face greater punishment since the offender is company-owned equipment?
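A crude sketch of that kind of context-free decision logic – entirely my own invention, not any vendor’s code – shows how the misclassification happens: every feature the robot checks is consistent with an innocent visitor, and once the script starts, the robot only measures compliance, not the original evidence.

```python
# Invented sketch of a context-free rule set -- not any vendor's actual code.

from dataclasses import dataclass

@dataclass
class Observation:
    holding_cylinder: bool   # a water bottle and a spray can look alike to the classifier
    arm_raised: bool
    facing_wall: bool
    on_occupant_list: bool

ESCALATION_SCRIPT = ["verbal_warning", "threaten_force", "use_force"]

def suspect_vandalism(obs: Observation) -> bool:
    # Pure correlation with past vandalism incidents; no notion of intent.
    return (obs.holding_cylinder and obs.arm_raised
            and obs.facing_wall and not obs.on_occupant_list)

def respond(obs: Observation, refusals: int) -> str:
    if not suspect_vandalism(obs):
        return "no_action"
    # Once the script starts, only compliance is measured, never the evidence.
    step = min(refusals, len(ESCALATION_SCRIPT) - 1)
    return ESCALATION_SCRIPT[step]

visitor = Observation(holding_cylinder=True, arm_raised=True,
                      facing_wall=True, on_occupant_list=False)
print([respond(visitor, refusals) for refusals in range(3)])
# ['verbal_warning', 'threaten_force', 'use_force'] -- for a person stretching with a water bottle
```

The failure isn’t in any individual rule – each is a plausible correlate of vandalism – but in the absence of any step that re-evaluates the evidence once the escalation script has begun.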

It is likely that many governments will simply ban security robots from using force as soon as the issue comes up. There’s little demonstrated need, and legislation defining the situations in which force is allowed looks mechanistic on paper but often relies on human ‘sense’ to prevail. A robot would almost certainly be programmed to do precisely what is legal, and that often fails to be humane. Additionally, while a robot might technically be ‘well trained’ from a certification standpoint (and once one robot can pass the training material, they all can), and even if it could demonstrate that it would only use force when appropriate, that probably won’t be enough. The examples above made it clear that robots in the real world aren’t trusted to patrol and deter crime. Governments won’t be willing to allow armed robots in civil situations when there’s no demonstrated need and a high chance of public outcry. Unlike with AVs, the public doesn’t want a robot that tells them what to do and where they can go.

In the public consciousness, robots exist to serve. When they tell us what to do, we get annoyed, and when they use force to make us comply, we’re angry and afraid. Our stories are filled with examples of the creation destroying or enslaving its creator, and that’s what most people will think of when they see an automaton enforcing the law. They’ll feel as if they are being dominated by the robot, even if that robot is doing exactly what another human has instructed it to do. Until people are used to seeing passive security robots, they’re unlikely to accept active ones. Even then, the boundary being crossed – that of a non-human having physical authority over a human – may engender resentment even after we’ve gotten used to seeing surveillance bots patrolling our buildings and streets.
