Presentation
When to retrain? assessing production ML model performance using uncertainty, out-of-distribution detection, and concept drift detection
30 May 2022
Abstract
The Laboratory for Physical Sciences has recently been conducting research in ML model uncertainty and confidence, out-of-distribution data detection, and concept drift detection. As we deploy ML models into operations, we must continually assess whether those models remain effective and perform as expected in the current data environment. This is relevant in all domains, but it is especially critical in cybersecurity applications, where the data, technology, actors, and behaviors all evolve rapidly. This talk reviews several algorithmic techniques developed to address this problem.
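The abstract does not specify the talk's algorithms, but one common baseline for the "is the model still performing as expected?" question is to compare the distribution of model scores in production against a reference window captured at deployment. The following is a minimal illustrative sketch (the function names, the two-sample Kolmogorov-Smirnov test, and the threshold are assumptions for illustration, not methods from the talk):

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference_scores, production_scores, alpha=0.01):
    """Flag possible drift when the production score distribution
    diverges from the reference distribution.

    Uses a two-sample Kolmogorov-Smirnov test; a small p-value means
    the two samples are unlikely to come from the same distribution.
    """
    _statistic, p_value = ks_2samp(reference_scores, production_scores)
    return p_value < alpha  # True -> consider retraining

# Synthetic example: reference scores captured at deployment time,
# and a production window whose distribution has shifted.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=500)
shifted = rng.normal(loc=1.5, scale=1.0, size=500)

print(detect_drift(reference, reference))  # identical data: no drift flagged
print(detect_drift(reference, shifted))    # shifted data: drift flagged
```

In practice a single statistical test on one score stream is only a starting point; per-feature tests, uncertainty estimates, and dedicated out-of-distribution detectors each catch failure modes that score drift alone can miss.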
Conference Presentation
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
James Holt "When to retrain? assessing production ML model performance using uncertainty, out-of-distribution detection, and concept drift detection", Proc. SPIE PC12117, Disruptive Technologies in Information Sciences VI, PC1211702 (30 May 2022); https://doi.org/10.1117/12.2618321
KEYWORDS
Data modeling
Performance modeling
Algorithm development
Physical sciences