Silly question: what do we do when AI breaks the law?
Is it as simple as finding the human user who prompted it in a criminal direction? What if there is no discernible human cause? How does the legal system prepare for criminal defendants who are, by definition, not human?
A few easy answers come to mind immediately. But peel back the layers, and you discover those easy answers are just as easy to pick apart. Take, for example, a self-driving car. The car has one occupant, seated in the passenger seat. It is controlled by an Artificial Intelligence model capable of performing complex spatial calculations five thousand times faster than a human being. This AI model was created, trained, and deployed by humans. But it is not human. It is more comparable to a service dog: capable, impressive, but ultimately still bounded by the mysteries of cognition and self-awareness.
Now imagine the car suddenly hops the curb and plows into a schoolyard.
The investigation traces the cause to an AI decision. For lack of a more profound analogy, let’s say that what should have been a one was interpreted as a zero. A ‘yes’ misread as a ‘no’. Computer error. A glitch. An accident of programming.
Who would stand trial? Who would the families of the victims face, eyes full of justice and hearts full of vengeance? Would it be the human occupant? The owner of the vehicle? The person who signed the safety certificate? The company that built the car? The company that created the AI? They all have one thing in common: someone else to point to. The blame can travel in circles and never find a home.
Or do we start building closed-circuit prisons for rogue AI?
