The gearheads in Detroit, Tokyo and Stuttgart have mostly figured out how to build driverless vehicles. Even the Google guys seem to have solved the riddle. Now comes the hard part: deciding whether these machines should have power over who lives or dies in an accident.
The industry is promising a glittering future of autonomous vehicles moving in harmony like schools of fish. That can’t happen, however, until carmakers answer the kinds of thorny philosophical questions explored in science fiction since Isaac Asimov wrote his robot series last century. For example, should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?
Auto executives, finding themselves in unfamiliar territory, have enlisted ethicists and philosophers to help them navigate the shades of gray. Ford, General Motors, Audi, Renault and Toyota are all beating a path to Stanford University's Center for Automotive Research, which is programming cars to make ethical decisions and observing what happens.