Shelley M. Cazares
The Department of Defense and Department of Homeland Security use many threat detection systems, such as air cargo screeners and counter-improvised-explosive-device systems. Threat detection systems that perform well during testing are not always well received by the system operators, however. Some systems may frequently “cry wolf,” generating false alarms when true threats are not present. As a result, operators lose faith in the systems—ignoring them or even turning them off and taking the chance that a true threat will not appear. This article reviews statistical concepts to reconcile the performance metrics that summarize a developer’s view of a system during testing with the metrics that describe an operator’s view of the system during real-world missions. Program managers can still make use of systems that “cry wolf” by arranging them into a tiered system that, overall, exhibits better performance than each individual system alone.
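The gap between the developer's and the operator's views, and the benefit of tiering, can be sketched with Bayes' rule. The numbers below are illustrative assumptions, not figures from the article: a detector with strong test-bench sensitivity and specificity can still produce mostly false alarms when true threats are rare, while requiring two independent detectors to agree before alarming trades a little sensitivity for a large gain in alarm credibility.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a raised alarm corresponds to a true threat (Bayes' rule)."""
    true_alarms = sensitivity * prevalence
    false_alarms = (1 - specificity) * (1 - prevalence)
    return true_alarms / (true_alarms + false_alarms)

# Assumed single-system test metrics (hypothetical numbers).
sens, spec = 0.95, 0.95

# Developer's view: threats present in half of all test trials.
ppv_test = positive_predictive_value(sens, spec, prevalence=0.5)

# Operator's view: true threats are rare in the field (assumed 1 in 1,000).
ppv_field = positive_predictive_value(sens, spec, prevalence=0.001)

# Tiered system: a second, independent detector must confirm each alarm.
# Sensitivities multiply; so do false-alarm rates.
tiered_sens = sens * sens                  # 0.9025
tiered_spec = 1 - (1 - spec) * (1 - spec)  # 0.9975
ppv_tiered = positive_predictive_value(tiered_sens, tiered_spec, prevalence=0.001)

print(f"PPV during testing (50% prevalence):   {ppv_test:.3f}")   # 0.950
print(f"PPV in the field (0.1% prevalence):    {ppv_field:.3f}")  # ~0.019
print(f"PPV of tiered system in the field:     {ppv_tiered:.3f}") # ~0.265
```

Under these assumed numbers, roughly 98 of every 100 field alarms from the single system are false, which is why operators lose faith, yet the same hardware arranged in a confirmatory tier raises the share of credible alarms more than tenfold.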