Imagine you are driving down a two-lane road at about 45 miles per hour, cruising home. You see a group of kids walking home from school about 100 yards ahead. Just as you’re about to pass them, an oncoming 18-wheeler swerves out of its lane and is about to hit you head-on. You have seconds, tops, to decide: Sacrifice yourself, or swerve into the children to avoid the truck.
I like to think that, if asked in advance, most people would choose not to plow into the kids. As the automation of driving advances, there’s a way to “hard-code” that decision into vehicles. Many cars can already detect when a toddler in a driveway, hidden in a driver’s blind spot, is about to be run over. They even beep when nearby vehicles are in danger of being bumped. Transitioning from an alert system to a hard-wired hard stop is technically possible. And if that’s possible, so is an automatic brake that would prevent a driver from swerving to save herself at the expense of many others.
But the decision can also be coded the other way—to put the car occupants’ interests above all others. Christoph von Hugo, Mercedes’ manager of driver assistance systems, active safety, and ratings, appeared to push this vision for more fully autonomous vehicles in a recent article in Car and Driver. “You could sacrifice the car, but then the people you’ve saved, you don’t know what happens to them after that in situations that are often very complex, so you save the ones you know you can save,” he said. “If you know you can save at least one person, at least save that one. Save the one in the car.” (Mercedes has since said that von Hugo was “quoted incorrectly” and that “[f]or Daimler it is clear that neither programmers nor automated systems are entitled to weigh the value of human lives. Our development work focuses on completely avoiding dilemma situations by, for example, implementing a risk-avoiding operating strategy in our vehicles.”)
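To make “coding the decision one way or the other” concrete, here is a minimal, purely illustrative sketch. The function names, risk numbers, and maneuvers are hypothetical; nothing here reflects any manufacturer’s actual system. The point is only that the ethical choice reduces to which priority rule gets compiled into the car.

```python
# Toy sketch of a collision-avoidance decision rule. All names and numbers
# are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    occupant_risk: float    # estimated chance of serious harm to the car's occupants
    bystander_risk: float   # estimated chance of serious harm to people outside the car

def choose_maneuver(options: list[Maneuver], protect_occupants_first: bool) -> Maneuver:
    """Pick a maneuver according to whichever priority has been coded in."""
    if protect_occupants_first:
        # "Save the one in the car": minimize occupant risk first, bystander risk second.
        key = lambda m: (m.occupant_risk, m.bystander_risk)
    else:
        # The opposite rule: minimize harm to bystanders, even at the occupants' expense.
        key = lambda m: (m.bystander_risk, m.occupant_risk)
    return min(options, key=key)

options = [
    Maneuver("brake in lane", occupant_risk=0.9, bystander_risk=0.0),
    Maneuver("swerve toward sidewalk", occupant_risk=0.1, bystander_risk=0.8),
]

print(choose_maneuver(options, protect_occupants_first=False).name)  # brake in lane
print(choose_maneuver(options, protect_occupants_first=True).name)   # swerve toward sidewalk
```

The same sensor data and the same two maneuvers yield opposite choices; everything turns on the priority rule someone decided to write down in advance.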