Anomaly detection aims to identify patterns that differ from those seen previously. It is usually regarded as a one-class classification problem, where abnormal samples are scarce or not well defined while the target class (the training objects) is abundant. Recently, several methods have achieved excellent performance through an auxiliary multi-class task (such as rotation prediction) used in self-supervised learning. However, these classification-based approaches, which adopt the cross-entropy loss, have an inherent defect in anomaly detection: because cross-entropy is a relative measure, a normal sample with a low activation may be misclassified as an anomaly. To solve this problem, we propose Absolute Measurement Anomaly Detection (AMAD), which constrains the distribution of activations for each input in the classification network. Specifically, this technique encourages the output of the ground-truth class to be high and the outputs of unrelated classes to be low. Furthermore, unlike previous evaluation methods that take the log-softmax activation of the model as the normality score, we discard the log-softmax, since that score is heavily distorted when more misclassifications occur. We present experiments on both image datasets (CIFAR-10, Fashion-MNIST) and tabular datasets (e.g., KDDCUP), which show that our technique outperforms previous similar methods in terms of AUROC and F1 score.
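To make the idea concrete, below is a minimal PyTorch sketch of an "absolute" training loss and a softmax-free normality score consistent with the description above. The function names, the per-class sigmoid/binary-cross-entropy formulation, and the exact scoring rule are illustrative assumptions on our part, not the paper's verbatim definitions.

```python
import torch
import torch.nn.functional as F

def absolute_measurement_loss(logits, targets):
    """Hypothetical absolute-measure loss: unlike softmax cross-entropy,
    which only compares classes relative to one another, each class
    activation is constrained independently -- pushed toward 1 for the
    ground-truth class and toward 0 for every unrelated class."""
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    # Per-logit binary cross-entropy imposes an absolute constraint on
    # each activation instead of cross-entropy's normalized, relative one.
    return F.binary_cross_entropy_with_logits(logits, one_hot)

def normality_score(logits, task_labels):
    """Hypothetical normality score: the raw (sigmoid) activation of the
    ground-truth auxiliary class (e.g., the rotation actually applied),
    averaged over the transformations of one sample. No log-softmax is
    used, so misclassified unrelated classes do not distort the score."""
    probs = torch.sigmoid(logits)
    return probs[torch.arange(len(task_labels)), task_labels].mean()
```

At test time, under these assumptions, an input would be scored by averaging the ground-truth-class activations over all auxiliary transformations; a low score flags the input as anomalous.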