In the not-too-distant future, you may be riding in an autonomous vehicle that is forced to decide between running over a pedestrian or a bicyclist, or whether to crash into a tree or another automobile. It may make a decision you are not particularly happy with. Those decisions will rely on “artificial intelligence” built into these cars, and they are being programmed right now by the developers of autonomous vehicles. Want to know under what conditions you might be converted into road kill? Read on.
In his 1950 science fiction collection “I, Robot,” Isaac Asimov introduced the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
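The three laws form a strict priority ordering: violating an earlier law is always worse than violating a later one. As a purely illustrative sketch (all names and the dictionary-based action encoding are hypothetical; no real autonomous-vehicle system works this way), the ordering could be expressed like this:

```python
# Hypothetical sketch of Asimov's Three Laws as an ordered list of
# constraints. Every function and field name here is invented for
# illustration only.

def violates_first_law(action):
    # First Law: harming a human (by act or inaction) is forbidden.
    return action.get("harms_human", False)

def violates_second_law(action):
    # Second Law: disobeying a human order is forbidden,
    # unless obeying would break the First Law.
    return action.get("disobeys_order", False)

def violates_third_law(action):
    # Third Law: failing to preserve itself is forbidden,
    # unless self-preservation conflicts with the first two laws.
    return action.get("endangers_self", False)

def choose_action(candidate_actions):
    """Pick the action whose worst violation is least severe."""
    laws = [violates_first_law, violates_second_law, violates_third_law]

    def severity_rank(action):
        # Lower rank = more severe: breaking an earlier law
        # always outweighs breaking a later one.
        for rank, law in enumerate(laws):
            if law(action):
                return rank
        return len(laws)  # no law violated at all

    return max(candidate_actions, key=severity_rank)

actions = [
    {"name": "swerve", "endangers_self": True},   # breaks Third Law only
    {"name": "brake", "disobeys_order": True},    # breaks Second Law
    {"name": "continue", "harms_human": True},    # breaks First Law
]
best = choose_action(actions)  # "swerve": the least severe violation
```

Even this toy version shows why the laws are only a starting point: real trolley-problem scenarios force a choice among actions that all violate the First Law, and the laws give no way to rank harms against each other.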
These are a pretty good start, but we all know how it turned out for Will Smith in the 2004 movie adaptation. We also know how it turned out for 49-year-old Elaine Herzberg, who has the unfortunate distinction of being the first pedestrian killed by an autonomous vehicle.
The German government has already developed 20 rules for autonomous vehicles. Here in the US, MIT’s Moral Machine experiment explored what actual humans would choose under certain circumstances, and those choices are being used to develop rules for cars. You can still click through and participate in the study if you wish. I clicked over to the survey and found the selections were neither easy nor particularly attractive. What we are learning, though, is that different cultures have different values when it comes to saving human lives. Coming up with a “universal” solution will therefore be difficult, and these issues and many others will need to be resolved before autonomous travel can enter the mainstream.