Submitting this abstract for an invited talk.
The Laboratory for Physical Sciences has recently been conducting research in ML model uncertainty and confidence, out-of-distribution detection, and concept drift detection. As we deploy ML models into operations, we must continually assess whether those models remain effective and perform as expected in the current data environment. This is relevant in every domain, but it is especially critical in cybersecurity applications, where the data, technology, actors, and behaviors all evolve rapidly. This talk will review several algorithmic techniques developed to address this problem.