Samuel English Anthony:

On Feb. 14, a Google self-driving car attempted to pass a municipal bus in Mountain View, California. The bus did not behave as the autonomous car predicted, and the self-driving car crashed into it while attempting to move back into its lane. The Google car was traveling at the stately speed of 2 mph, and there were no injuries. Google released a statement accepting fault and announcing that it was tweaking its software to avoid this type of collision in the future.
 
There is good reason to believe, though, that tweaks to the software might not be enough. What led the Google car astray was its inability to correctly guess what the bus driver was thinking and then react accordingly. Google said in its statement:
 
 Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop. And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time. This type of misunderstanding happens between human drivers on the road every day.
 
 Yes, people sometimes misunderstand one another’s intentions on the road. Still, people have an intuitive fluency with this kind of social negotiation. Self-driving cars lack that fluency, and achieving it will be incredibly difficult.
 
For the past five years, my collaborators and I in the Vision Sciences Lab at Harvard University have been exploring the differences in capabilities between people and today’s best AIs. My studies have focused on simple tasks, like detecting a face in a still image, where AIs have become reasonably skilled. But our research has left me increasingly unsettled about the prospects for harder AI tasks, and especially about one of the hardest of all: driving a car. Self-driving cars have enormous promise. The improvements to traffic, safety, and mobility for the elderly could be dramatic. But no matter how capable the AI, humans just behave differently.