Skin cancer is the most common type of cancer in the United States, with 9,500 new cases diagnosed daily. It is one of the deadliest forms of cancer; however, early detection and treatment can lead to recovery. Increasingly, modern medical systems employ deep learning (DL) vision models as an assistive secondary diagnostic tool. This progress is driven by the superior performance of convolutional neural networks (CNNs) across a wide range of medical applications. However, recent work has revealed that adding small, faint noise to images can cause these models to make classification errors. These adversarial attacks can undermine defense measures and hamper the operation of DL models in real-world settings. The objective of this paper is to explore the effects of image degradation on popular off-the-shelf DL vision models. First, we evaluate the effects of adversarial attacks on image classification accuracy, sensitivity, and specificity. We then introduce pepper noise as an adversarial attack, an extension of the one-pixel attack on DL models. Second, we propose a novel texture descriptor, Ordered Statistics Local Binary Patterns (OS-LBP), for recognizing potential skin cancer areas. Third, we demonstrate that OS-LBP mitigates some of the effects of image degradation caused by adversarial attacks.
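To illustrate the kind of pepper-noise degradation referred to above, the following is a minimal sketch, assuming an 8-bit RGB image array and an illustrative pixel budget; it shows only random pepper corruption, not the optimized attack the paper develops, and the function name and `num_pixels` parameter are hypothetical.

```python
import numpy as np

def pepper_noise_attack(image: np.ndarray, num_pixels: int = 100, seed: int = 0) -> np.ndarray:
    """Set a small random subset of pixels to black (pepper noise).

    A multi-pixel analogue of the one-pixel attack: instead of optimizing a
    single pixel, a fixed budget of randomly chosen pixels is zeroed out.
    `num_pixels` is an illustrative choice, not the paper's setting.
    """
    rng = np.random.default_rng(seed)
    perturbed = image.copy()
    h, w = image.shape[:2]
    # Sample pixel coordinates without replacement across the whole image.
    flat_idx = rng.choice(h * w, size=min(num_pixels, h * w), replace=False)
    rows, cols = np.unravel_index(flat_idx, (h, w))
    perturbed[rows, cols] = 0  # pepper: minimum intensity in every channel
    return perturbed

# Example: degrade a synthetic 224x224 RGB image with a 100-pixel budget.
clean = np.full((224, 224, 3), 128, dtype=np.uint8)
attacked = pepper_noise_attack(clean, num_pixels=100)
print((attacked != clean).any(axis=-1).sum())  # number of altered pixels
```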