There are many technical and social barriers, objective and subjective, to autonomous systems, but “certifiable trust” has been identified as the biggest challenge. “When we certify avionics, we test every input, every path. When we certify pilots, we decide if they will probably do the right thing, but we do not test every response. It is more about behavior and probability,” says Allen. For humans, interpersonal trust is based on “information, integrity, intelligence, interaction, intent and intuition,” he says, arguing this will be difficult to establish with a machine. “We will need new methods of verification and validation.”
A recent NASA-commissioned National Research Council report on autonomy research for civil aviation highlighted a cross-cutting challenge to increasing autonomy in aircraft: ensuring that adaptive systems enhance safety and efficiency. “How do we achieve trust in non-deterministic systems?” asks Yuri Gawdiak of NASA’s aeronautics strategy, architecture and analysis office. With humans, he notes, that trust is built incrementally: “Humans are tested every step of the way.”
“Autonomy is growing with computing power and bringing a whole host of new issues,” says Mike Francis, chief of advanced programs at United Technologies Research Center. As machines begin to make decisions, the inadequacy of the current regulatory approach becomes apparent. “Certification has its roots about 110 years ago. It is based in physics and derives trust from science. It involves the testing of inputs and outputs and is a pass/fail mentality,” he says.