Gerald Sussman, Hal Abelson, Lalana Kagal, Daniel Weitzner:

When humans and autonomous systems share control of a vehicle, there will be some explaining to do. When the autonomous system takes over suddenly, the driver will ask why. When an accident happens in a car that is co-driven by a person and a machine, police officials, insurance companies, and the people who are harmed will want to know who or what is accountable for the accident. Control systems in the vehicle should be able to give an accurate, unambiguous accounting of the events. Explanations will have to be simple enough for users to understand, even when they are subject to cognitive distractions. At the same time, given the need for legal accountability and technical integrity, these systems will have to support their basic explanations with rigorous and reliable detail. In the case of hybrid human-machine systems, we will want to know how the human and mechanical parts contributed to final results such as accidents or other unwanted behaviors.
 
The ability to provide coherent explanations of complex behavior is also important in the design and debugging of such systems, and it is essential for giving us all confidence in the competence and integrity of our automatic helpers. But the mechanisms for merging measurements with qualitative models can also enable more sophisticated control strategies than are currently feasible.
 
Our research develops methodology and supporting technology for combining qualitative and semi-quantitative models with measured data to produce concise, understandable symbolic explanations of the actions taken by a system built out of many, possibly autonomous, parts (including the human operator).
 
We have developed a two-step process that explains what happened during a particular car trip, as recorded in a vehicle CAN bus log, and why those events happened. In the first step, we take the CAN bus log as input and analyze what happened during the trip. This analysis includes smoothing noisy data, performing edge detection to find when particular events occurred (e.g., when the operator applied the brakes), and interval analysis to see how particular intervals relate to each other (e.g., did the car slow after the brakes were applied?). Using this analysis, we can construct a story of what happened during the trip and detect particular events of interest (e.g., abrupt changes in speed and braking, and dangerous maneuvers such as skids).
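To make the first step concrete, here is a minimal sketch in Python of the smoothing, edge-detection, and interval-analysis pipeline. It assumes the CAN bus log has already been decoded into per-signal time series; the signal names, thresholds, and synthetic data are illustrative assumptions, not the actual log format or the project's implementation.

    # Sketch of step 1: smooth signals, detect event edges, relate intervals.
    import numpy as np

    def smooth(samples, window=5):
        """Suppress sensor noise with a simple moving average."""
        return np.convolve(samples, np.ones(window) / window, mode="same")

    def rising_edges(mask):
        """Edge detection: indices where a boolean condition turns on,
        e.g. the moments the operator applies the brakes."""
        return np.flatnonzero(mask[1:] & ~mask[:-1]) + 1

    def intervals_where(mask):
        """(start, end) index pairs over which a condition holds."""
        padded = np.concatenate(([False], mask, [False]))
        starts = np.flatnonzero(padded[1:] & ~padded[:-1])
        ends = np.flatnonzero(~padded[1:] & padded[:-1])
        return list(zip(starts, ends))

    def slowed_during(speed, interval):
        """Interval analysis: did the car slow over this interval?"""
        start, end = interval
        return speed[end - 1] < speed[start]

    # Synthetic trip: the driver brakes at t = 4 s and the car slows.
    t = np.linspace(0.0, 10.0, 1000)
    rng = np.random.default_rng(0)
    speed = smooth(30.0 - 2.0 * np.clip(t - 4.0, 0.0, None) + rng.normal(0, 0.2, t.size))
    brake = smooth(np.where(t > 4.0, 50.0, 0.0) + rng.normal(0, 1.0, t.size))

    braking = brake > 25.0
    print("brake onset at t =", t[rising_edges(braking)[0]])
    for iv in intervals_where(braking):
        print("braking interval", iv, "-> car slowed:", slowed_during(speed, iv))

The final check is a simple instance of the kind of temporal relation (a braking interval followed by a slowing interval) from which the story of the trip is assembled.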
 
In the second step, we take a particular event of interest (identified in step 1) and explain why it happened. We have developed three different models that explain vehicle physics in a human-readable form. We have constructed a model of the car internals, which explains the process by which individual components of the car affect other components. We have also constructed a purely qualitative physical model of the car, which explains vehicle actions using qualitative terms like increasing, decreasing, no change, and unknown change. While this model is easy for humans to understand, it lacks the level of detail needed to explain more sophisticated actions like skids, so we have also developed a semi-qualitative model of car physics using geometry. This model infers the overall effect on the normal and frictional forces at the wheels from the reported lateral and longitudinal acceleration during a particular interval. These effects and their consequences are then explained qualitatively to the user.
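To illustrate the flavor of the purely qualitative model, here is a minimal sketch, assuming a simple sign algebra over trends and a small influence graph; the influences listed are illustrative placeholders, not the project's actual car model.

    # Sketch of qualitative reasoning: propagate trends through influences.
    INC, DEC, STEADY, UNKNOWN = "increasing", "decreasing", "no change", "unknown"

    def negate(q):
        """A negative influence flips the direction of change."""
        return {INC: DEC, DEC: INC, STEADY: STEADY, UNKNOWN: UNKNOWN}[q]

    def combine(a, b):
        """Sum of two influences; opposing trends yield an unknown change."""
        if a == STEADY:
            return b
        if b == STEADY or a == b:
            return a
        return UNKNOWN

    # (source, target, sign): positive sign propagates the trend,
    # negative sign inverts it (e.g. more brake pressure -> less speed).
    # Assumed to be listed in topological order for a single pass.
    INFLUENCES = [
        ("throttle", "engine_torque", +1),
        ("engine_torque", "speed", +1),
        ("brake_pressure", "speed", -1),
    ]

    def propagate(observations):
        """Derive qualitative trends for downstream quantities."""
        trends = dict(observations)
        for source, target, sign in INFLUENCES:
            q = trends.get(source, STEADY)
            effect = q if sign > 0 else negate(q)
            trends[target] = combine(trends.get(target, STEADY), effect)
        return trends

    print(propagate({"brake_pressure": INC, "throttle": STEADY}))
    # -> speed is "decreasing", which reads directly as an explanation.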
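And here is a sketch of the semi-qualitative geometric step, assuming a simple rigid-body load-transfer model; the vehicle parameters (mass, geometry, friction coefficient) and the equal split of demanded friction force across wheels are placeholder assumptions, not measured values from the project.

    # Sketch of inferring wheel forces from reported accelerations.
    G = 9.81           # gravity, m/s^2
    MASS = 1500.0      # vehicle mass, kg (assumed)
    CG_HEIGHT = 0.55   # height of center of gravity, m (assumed)
    WHEELBASE = 2.7    # m (assumed)
    TRACK = 1.6        # m (assumed)
    MU = 0.9           # tire-road friction coefficient (assumed)

    def wheel_normal_forces(a_long, a_lat):
        """Normal force on each wheel after longitudinal and lateral load
        transfer, from reported accelerations in m/s^2 (negative a_long is
        braking; positive a_lat loads the right-side wheels)."""
        static = MASS * G / 4.0
        d_long = MASS * a_long * CG_HEIGHT / (2.0 * WHEELBASE)
        d_lat = MASS * a_lat * CG_HEIGHT / (2.0 * TRACK)
        return {
            "front_left":  static - d_long - d_lat,
            "front_right": static - d_long + d_lat,
            "rear_left":   static + d_long - d_lat,
            "rear_right":  static + d_long + d_lat,
        }

    def explain_skid(a_long, a_lat):
        """Qualitative consequence: a wheel whose demanded friction force
        exceeds mu * normal force is likely to slip."""
        demanded = MASS * (a_long**2 + a_lat**2) ** 0.5 / 4.0
        for wheel, n in wheel_normal_forces(a_long, a_lat).items():
            status = "may slip" if demanded > MU * n else "grips"
            print(f"{wheel}: normal force {n:,.0f} N, {status}")

    # Hard braking while turning: the most unloaded rear wheel
    # exceeds its friction limit first.
    explain_skid(a_long=-7.0, a_lat=3.0)

Each line of output pairs a quantitative intermediate (the normal force) with a qualitative verdict, which is the form in which the effects and their consequences can be narrated to the user.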