Computer-aided diagnosis (CAD), in which machine learning discriminates the presence or absence of disease and thereby supports physicians' diagnoses, has been an active research area. However, training machine learning models requires a large amount of annotated data. Because radiologists must produce these annotations manually, annotating hundreds to thousands of images is extremely laborious. This study proposes convolutional neural network (CNN) classifiers with transfer learning for efficient opacity classification of diffuse lung diseases and analyzes the effects of transfer learning under various conditions. Specifically, classifiers trained under nine different transfer learning conditions and without transfer learning are compared to identify the best conditions.
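The abstract does not specify how the transfer learning conditions were implemented. As a minimal, framework-free sketch of the core idea (a pretrained network reused as a frozen feature extractor, with only a new classification head trained on the small annotated target set), the following uses random stand-in weights in place of a real pretrained CNN, a ridge-regression head in place of fine-tuning, and illustrative names throughout:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" first layer: in practice these weights would come from a network
# trained on a large source dataset; here they are random stand-ins.
W_pre = rng.normal(size=(16, 8))

def features(x):
    """Frozen feature extractor: one affine layer followed by ReLU."""
    return np.maximum(x @ W_pre, 0.0)

# Small annotated target dataset (illustrative: 40 ROIs, 16-dim inputs,
# binary labels encoded as -1/+1).
X = rng.normal(size=(40, 16))
y = rng.integers(0, 2, size=40) * 2 - 1

# Train only the head on the frozen features (closed-form ridge regression
# keeps the sketch dependency-free; real transfer learning would instead
# fine-tune the last CNN layers by gradient descent).
F = features(X)
lam = 1e-2
W_head = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)

pred = np.sign(features(X) @ W_head)
```

Varying which layers are frozen versus retrained, and which source dataset supplies `W_pre`, is the kind of axis along which the nine conditions in the study could differ.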
Research on computer-aided diagnosis (CAD) of medical images using machine learning has been actively conducted. However, machine learning, especially deep learning, requires a large amount of annotated training data. Deep learning often needs thousands of training samples, yet it is demanding work for radiologists to assign normal and abnormal labels to so many images. This research introduces unsupervised and semi-supervised annotation algorithms aimed at efficient opacity annotation of diffuse lung diseases. Unsupervised learning clusters opacities based on image features without using any opacity labels, while semi-supervised learning makes efficient use of a small amount of annotated training data to train classifiers. Performance is evaluated on the classification of six kinds of opacities of diffuse lung diseases: consolidation, ground-glass opacity, honeycombing, emphysema, nodular, and normal, and the effectiveness of the methods is demonstrated.
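The abstract does not state which clustering algorithm is used. As one plain illustration of the unsupervised step (grouping ROI feature vectors into opacity clusters without labels), here is a minimal k-means implemented in numpy; the feature vectors and the choice of k = 6 (one cluster per opacity class) are assumptions for the sketch:

```python
import numpy as np

def kmeans(features, k, n_iter=100, seed=0):
    """Cluster feature vectors into k groups with plain Lloyd-style k-means."""
    rng = np.random.default_rng(seed)
    # Initialize centroids from k distinct samples.
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each vector to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned vectors;
        # keep the old centroid if a cluster went empty.
        new_centroids = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Toy run: 120 ROIs described by 16-dim texture features (synthetic here).
feats = np.random.default_rng(1).normal(size=(120, 16))
labels, cents = kmeans(feats, k=6)
```

In the semi-supervised setting, the small annotated subset could then be used to assign an opacity name to each discovered cluster.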
This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in computed tomography (CT) images. Because CT images are grayscale, a DCNN usually takes image data through a single input channel. In contrast, this research uses a multi-channel DCNN in which each channel corresponds to either the original raw image or a version transformed by a preprocessing technique. The information obtainable from raw images alone is limited, and prior work suggests that preprocessing contributes to improved classification accuracy; combining the original and preprocessed images is therefore expected to yield higher accuracy. The proposed method performs region of interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images. The ROIs contain six kinds of opacities: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal, and the aim is to classify each ROI into one of these six classes. The DCNN structure is based on the VGG network, which secured first and second places in ImageNet ILSVRC-2014. Experimental results show that the classification accuracy of the proposed method was better than that of the conventional single-channel method, and the difference was statistically significant.
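The abstract names the multi-channel input scheme but not the specific preprocessing transforms. As a sketch of how a 32 × 32 grayscale ROI could be expanded into a multi-channel network input, the following stacks the raw image with two assumed example transforms (contrast stretching and a Sobel gradient-magnitude map); the function names and the choice of transforms are illustrative, not taken from the paper:

```python
import numpy as np

def contrast_stretch(roi):
    """Linearly rescale intensities to [0, 1] (one simple preprocessing choice)."""
    lo, hi = roi.min(), roi.max()
    return (roi - lo) / (hi - lo) if hi > lo else np.zeros_like(roi)

def sobel_magnitude(roi):
    """Gradient-magnitude map via a hand-rolled 3x3 Sobel filter."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    padded = np.pad(roi, 1, mode="edge")
    gx = np.zeros_like(roi, dtype=float)
    gy = np.zeros_like(roi, dtype=float)
    for i in range(roi.shape[0]):
        for j in range(roi.shape[1]):
            win = padded[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)

def to_multichannel(roi):
    """Stack raw + preprocessed views into a (channels, H, W) network input."""
    return np.stack([roi.astype(float), contrast_stretch(roi), sobel_magnitude(roi)])

# Synthetic 12-bit CT ROI standing in for a real 32 x 32 patch.
roi = np.random.default_rng(0).integers(0, 4096, size=(32, 32)).astype(float)
x = to_multichannel(roi)  # shape (3, 32, 32), ready for a multi-channel DCNN
```

Each channel then feeds the first convolutional layer exactly as the R/G/B channels of a color image would in a standard VGG-style network.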