In the last section of this series, I looked at security robots that might be allowed to physically restrain and assault humans. This crossed the boundary of personal space and personal safety in the name of enforcing the rules of a facility. Today, I’ll look at a different boundary: property. It seems a given that robots shouldn’t commit theft, and yet there are many cases in which they might. Whether by malicious use, poor programming, or misunderstood conditions, a robot could steal anything from fruit to itself – assuming a robot can steal itself.
The situation that is obvious both for its inevitability and its outcome is someone telling a robot ‘fetch that’ and indicating something that doesn’t belong to them. That’s not much different from using a long pole to snag the same item, and it will no doubt be treated as such legally – though of course the news will glom onto it because of the novelty, much as if someone had trained their dog to shoplift. Upping the stakes, let’s say the robot in question is built along the same lines as an autonomous security robot. We saw that – in simulation – one of the functions of these robots is to confiscate dangerous or stolen property. Let’s say someone tells this robot ‘that object is stolen, retrieve it’ when in fact the object is the lawful property of the person who has it, and the person commanding the robot wishes to steal it. The robot will at first demand the object and, when it isn’t forthcoming, escalate per its table of prescribed behaviors until it uses force. That this could happen is entirely predictable, so the robot’s operator is still responsible for everything it does. I think it can be assumed that any unlawful behavior a robot is ordered to commit will be charged to whoever gives it the command, though the manufacturer will no doubt also take a lot of heat until they find a way to make the robot distinguish between a lawful and an unlawful order.
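As a thought experiment, here is a minimal Python sketch of what such an order-validation gate might look like, assuming a hypothetical command format and a stolen-property registry the robot could query; every name in it is invented for illustration, not drawn from any real system.

```python
from dataclasses import dataclass


@dataclass
class RetrieveCommand:
    """A hypothetical 'retrieve this object' order given to the robot."""
    object_id: str        # identifier of the object the operator points at
    claimed_stolen: bool  # the operator's assertion, not a verified fact
    operator_id: str      # who gave the order, kept for accountability


def is_order_lawful(cmd: RetrieveCommand, stolen_registry: set[str]) -> bool:
    """Treat an object as confiscatable only if the operator's claim is
    corroborated by an independent record, not by the claim alone."""
    return cmd.claimed_stolen and cmd.object_id in stolen_registry


def handle_retrieve(cmd: RetrieveCommand, stolen_registry: set[str]) -> str:
    if is_order_lawful(cmd, stolen_registry):
        # Verified: demand the object and escalate per the prescribed table.
        return "escalate_per_policy"
    # Unverified claim: refuse, and log who gave the order so that any
    # liability is traceable back to the operator.
    return f"refuse_and_log:{cmd.operator_id}"


registry = {"bike-9921"}  # assumed feed of reported-stolen IDs
print(handle_retrieve(RetrieveCommand("bike-9921", True, "op-17"), registry))
print(handle_retrieve(RetrieveCommand("phone-0042", True, "op-17"), registry))
```

The point of the sketch is only that the robot treats the operator’s claim as an assertion to be corroborated, and refuses (while logging who asked) when it can’t be.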
Moving along to inadvertent theft, then. An agricultural robot is picking fruit. It fails to mark the boundary between one farmer’s field and another, and it picks the neighboring farm’s fruit. The owner is puzzled by how much the robot brings back, but doesn’t connect the dots until after the fruit has gone to market and the neighbor calls the police upon finding all their fruit missing. If it’s a particularly intelligent robot (in some ways), it might go as far as to harvest two different kinds of produce, which would alert its owner much sooner. When the farmer either confesses or is confronted, they refuse to pay damages because ‘the robot did it.’ A business is responsible for the actions of its employees and equipment while they’re doing their job, so the farmer is forced to pay. However, the farmer turns around and sues the company that made the robot, alleging that the robot was defective. My guess is that the farmer would win that. So far it’s all civil, but the thing is that the robot did commit a crime as well. An employer may be financially on the hook, but they can’t be criminally charged unless they ordered the employee to commit the crime, which didn’t happen in this case. A pet owner can be held liable for the actions of their animal, but this is a purposeful thinking machine. Much as in the case of the guard robot committing assault, there’s no clear answer here. It was a mistake or malfunction, but ignorance of the law is rarely accepted as a mitigating factor.
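For a sense of how small the underlying failure is, here is a minimal geofence sketch in Python, with assumed survey coordinates and an invented waypoint check; a boundary polygon that is missing or miscalibrated is all it would take for the robot to harvest the neighbor’s rows.

```python
# The robot only picks at waypoints that fall inside the polygon it was given
# for its owner's field. If that polygon is wrong, it harvests the neighbor's.

def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: count how many polygon edges a horizontal ray from
    (x, y) crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside


# Assumed corner coordinates of the owner's field, in local survey units.
owner_field = [(0.0, 0.0), (100.0, 0.0), (100.0, 50.0), (0.0, 50.0)]

def should_pick(x: float, y: float) -> bool:
    return point_in_polygon(x, y, owner_field)

print(should_pick(40.0, 25.0))   # True:  inside the owner's field
print(should_pick(140.0, 25.0))  # False: the neighbor's field, skip it
```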
When can a robot be considered responsible for its own actions?
Up until now, machines have been seen to malfunction, be misused, or be defective. The maker of the machine is sometimes at fault, and sometimes the owner. We’re swiftly reaching a point, however, where robots will have a form of intelligence that is opaque to us in much the same way our own intelligence is. We’ll be able to make educated inferences based on behavior, and we’ll run tests, but we won’t be able to look into the robot’s mind the way we can with other programmed machines today. Papers are being written on how to reverse engineer neural networks, and for the most part we can still say ‘this algorithm is doing this with this training set.’ It’s going to get harder as AIs become more complex, and eventually we’ll reach a point where our AI systems become irreversibly black-box. There are already articles suggesting that some algorithms that live entirely in software have reached this point, citing examples like the algorithms that perform high-frequency trading. If all we can do is identify a motivation, and we are unable to map out the programmatic sequence that led to a robot taking an action, then the robot may as well be alive.
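A toy sketch of that kind of behavioral inference, with an invented stand-in for the opaque model: we can’t read its internals, so we jitter one input at a time and watch how the decision moves.

```python
import random

def black_box_decision(features: dict[str, float]) -> float:
    """Opaque scoring function standing in for a model we cannot inspect."""
    return 0.7 * features["ripeness"] + 0.2 * features["size"] + 0.1 * features["color"]

def sensitivity_probe(features: dict[str, float], trials: int = 200) -> dict[str, float]:
    """Estimate how much each feature moves the output by perturbing it randomly,
    which is inference from behavior, not a look inside the model."""
    baseline = black_box_decision(features)
    influence = {}
    for name in features:
        deltas = []
        for _ in range(trials):
            perturbed = dict(features)
            perturbed[name] += random.uniform(-0.5, 0.5)
            deltas.append(abs(black_box_decision(perturbed) - baseline))
        influence[name] = sum(deltas) / trials
    return influence

print(sensitivity_probe({"ripeness": 0.8, "size": 0.5, "color": 0.3}))
```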
I’m drifting somewhat from the topic, but that’s what any personal, legal, or moral boundary transgression comes down to: was it intentional, and can the transgressor be considered able to understand the concept of transgression? A cat understands that we don’t want it to knock the glass off the table, but it does it anyway. The concept of applying negative reinforcement to a robot is a fascinating one and something I may revisit in another article on the crossing of a different boundary – personhood.
Bonus example: Let’s say your autonomous car drives off without you and crosses state lines with your recently purchased liquor in the trunk – which is illegal if it goes to, for example, Pennsylvania. Did your car steal itself and the booze, and did it transport alcohol without a permit, or are you liable for the misbehavior of your vehicle?
Science fiction concept: Picture, if you will, a fully automated still where robots gather the ingredients from a small orchard and vegetable garden, put them in the fermenter, and gather up the bottles. Maybe the entire operation is extremely green and is powered using byproducts of the fermentation, so there is no artificial power source – just sunlight and rain. Prohibition is reinstated and the still is shut down. A new owner takes over, and at some point the system reboots without them knowing about it. Is it bootlegging if the liquor literally makes itself?