We propose an unsupervised method for detecting lung lesions in FDG-PET/CT images based on deep image anomaly detection with 2.5-dimensional (2.5D) image processing. The 2.5D processing is applied to preprocessed FDG-PET/CT images from which all image patterns other than the lung fields have been removed. It enhances lung lesions by analyzing axial, coronal, and sagittal FDG-PET/CT slice images in parallel with multiple 2D U-Nets. All the U-Nets are pretrained on 95 normal FDG-PET/CT cases with no lung lesions and are used to transform CT slice images into normal FDG-PET slice images free of lesion-like SUV patterns. A lesion-enhanced image is obtained by merging the subtractions of the three transformed normal FDG-PET images from the input FDG-PET image. Lesion detection is then performed by simple binarization of the lesion-enhanced image. The threshold varies from case to case and is set to the 30th-percentile voxel value of the target lesion-enhanced image. For each extracted region, the average of the intra-regional voxel values of the enhanced image is computed and assigned as a lesion-likeness score. We evaluated the proposed method on FDG-PET/CT images of 27 patients with 41 lung lesions in total. The proposed method achieved a lesion detection sensitivity of 82.9% at five false positives per case. This result was significantly superior to the detection performance of simple FDG-PET thresholding and indicates that the proposed method may be helpful for effective lung lesion detection. Future work includes expanding the detectable range of lesions beyond the lungs, for example to the mediastinum and axillae.
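As an illustration of the detection stage described above, the following is a minimal Python sketch of the subtraction merging, case-adaptive percentile thresholding, and region scoring. The names (`enhance_and_detect`, `pseudo_normal_pets`) are ours, the voxel-wise mean as the merge operator is an assumption (the abstract does not specify how the three subtractions are merged), and in practice the percentile would presumably be taken over the lung-field voxels only.

```python
import numpy as np
from scipy import ndimage

def enhance_and_detect(pet, pseudo_normal_pets, percentile=30.0):
    """Hypothetical sketch of the lesion-enhancement and detection step.

    pet                : input FDG-PET volume (SUV), np.ndarray
    pseudo_normal_pets : three normal FDG-PET volumes predicted from CT by
                         the axial, coronal, and sagittal 2D U-Nets
    """
    # Subtract each CT-derived pseudo-normal PET from the input PET and
    # merge the three subtraction volumes (assumption: voxel-wise mean).
    subtractions = [pet - p for p in pseudo_normal_pets]
    enhanced = np.mean(subtractions, axis=0)

    # Case-adaptive threshold: the 30th-percentile voxel value of the
    # lesion-enhanced volume itself.
    threshold = np.percentile(enhanced, percentile)
    binary = enhanced > threshold

    # Label connected regions and assign each the mean intra-regional
    # enhanced value as its lesion-likeness score.
    labels, n = ndimage.label(binary)
    scores = ndimage.mean(enhanced, labels=labels, index=np.arange(1, n + 1))
    return labels, scores
```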
Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, and deep learning has only recently been applied to this task. Thus, the robustness of deep-learning models to different image processing types has not yet been reported. We investigated the segmentation accuracy of the U-net for mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground truths of the mammary gland region prepared by mammary experts were used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare the breast density of the segmented mammary gland regions with that of the ground truths. The shape agreement between the ground truth and the mammary gland region segmented by the U-net was assessed for Types 1–4 using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. By Bland-Altman analysis, the equivalence of breast density between the ground truth and the U-net was confirmed for Types 1 and 2, and compatibility was confirmed for Types 3 and 4. We conclude that the U-net segments the mammary gland region robustly across different image processing types.
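For reference, the Dice coefficient used above for shape agreement between two binary masks can be computed as in this generic sketch (not the authors' implementation):

```python
import numpy as np

def dice_coefficient(gt_mask: np.ndarray, pred_mask: np.ndarray) -> float:
    """Dice coefficient between ground-truth and predicted binary masks."""
    gt = gt_mask.astype(bool)
    pred = pred_mask.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    denom = gt.sum() + pred.sum()
    # Convention: two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0
```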
In individualized screening mammography, breast density is important for predicting the risk of breast cancer incidence and of missed lesions in mammographic diagnosis. Segmentation of the mammary gland region is required when focusing on missed lesions. A deep-learning method was recently developed to segment the mammary gland region. Highly accurate deep learning requires a large amount of ground truth prepared by mammary experts; however, this work is time- and labor-intensive. To streamline the preparation of ground truth, we investigated the differences in segmented mammary gland regions among multiple radiological technologists with various levels of experience and image-reading skill who shared the same segmentation criteria. If the reader's skill level can be ignored, the number of usable training images increases. Three certified radiological technologists segmented the mammary gland region in 195 mammograms. The degree of coincidence among them was assessed with respect to seven factors characterizing the segmented regions, including breast density and mean glandular dose, using Student's t-test and Bland-Altman analysis. The assessments made by the three radiological technologists were consistent for all factors except the mean pixel value. We therefore conclude that ground truths prepared by multiple practitioners with different levels of experience are acceptable for segmentation of the mammary gland region and are applicable as training images, provided the practitioners stringently share the segmentation criteria.
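The Bland-Altman analysis used in the two studies above compares paired measurements of the same cases via their differences; a minimal generic sketch follows (the bias and 95% limits of agreement are the standard quantities, not the authors' exact protocol):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman statistics for paired measurements a and b."""
    mean = (a + b) / 2.0                  # per-case means (x-axis)
    diff = a - b                          # per-case differences (y-axis)
    bias = diff.mean()                    # systematic bias between raters
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return mean, diff, bias, loa
```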
We propose an automatic feature generation method based on a deep convolutional autoencoder (deep CAE) that requires no lesion data. The main idea of the proposed method is anomaly detection: the deep CAE is trained only on normal volume patches. The trained deep CAE computes low-dimensional features and a reconstruction error from each 2.5-dimensional (2.5D) volume patch. The proposed method was evaluated experimentally with 150 chest CT cases. Using both the previous features and the deep CAE based features improved classification performance, yielding AUC = 0.989 and an ANODE score of 0.339.
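A minimal PyTorch sketch of such a deep CAE is shown below. Treating the 2.5D patch as a 3-channel 2D image (axial, coronal, and sagittal slices) and the 32x32 patch size are our assumptions, not necessarily the authors' architecture; the encoder output serves as the low-dimensional features and the per-patch mean squared error as the reconstruction error.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Minimal deep CAE; a 2.5D patch is modeled here as a 3-channel
    32x32 image with intensities normalized to [0, 1] (assumptions)."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, latent_dim),   # low-dimensional features
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 8 -> 16
            nn.ConvTranspose2d(16, 3, 2, stride=2), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, x):
        z = self.encoder(x)                  # CAE-based features
        recon = self.decoder(z)
        err = ((x - recon) ** 2).mean(dim=(1, 2, 3))  # reconstruction error
        return z, err
```

Training on normal patches only means lesion-containing patches tend to reconstruct poorly, so both `z` and `err` carry anomaly-relevant information for the downstream classifier.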
In this study, we propose a novel method of lung lesion detection in FDG-PET/CT volumes without labeling lesions. In our method, the probability distribution over normal standardized uptake values (SUVs) is estimated from the features extracted from the corresponding volume of interest (VOI) in the CT volume, which include gradient-based and texture-based features. To estimate the distribution, we use Gaussian process regression with an automatic relevance determination kernel, which provides the relevance of feature values to estimation. Our model was trained using FDG-PET/CT volumes of 121 normal cases. In the lesion detection phase, the actual SUV is judged as normal or abnormal by comparison with the estimated SUV distribution. According to the validation using 28 FDG-PET/CT volumes with 34 lung lesions, the sensitivity of the proposed method at 5.0 false positives per case was 81.9%.
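A hedged sketch of the regression step, using scikit-learn's Gaussian process regressor with an anisotropic RBF kernel (one length scale per feature, which realizes automatic relevance determination), follows. The feature values and the abnormality threshold are synthetic stand-ins for illustration, not the authors' data or settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
n_cases, n_features = 200, 10

# Synthetic stand-ins: per-VOI CT features (gradient- and texture-based)
# and the corresponding normal SUVs.
X_train = rng.normal(size=(n_cases, n_features))
y_train = 0.5 * X_train[:, 0] + rng.normal(scale=0.1, size=n_cases)

# Anisotropic RBF = ARD: irrelevant features get large fitted length scales.
kernel = RBF(length_scale=np.ones(n_features)) + WhiteKernel()
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(X_train, y_train)

# The predictive mean/std define the estimated normal-SUV distribution; an
# observed SUV far above it is judged abnormal (threshold illustrative).
X_test = rng.normal(size=(5, n_features))
mu, std = gpr.predict(X_test, return_std=True)
observed_suv = mu + np.array([0.0, 0.1, 2.0, 0.05, 5.0]) * std
is_abnormal = (observed_suv - mu) / std > 3.0
print(is_abnormal)
```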
The purpose of this study is to evaluate the feasibility of a novel feature generation method, based on multiple deep neural networks (DNNs) with boosting, for computer-assisted detection (CADe). Optimizing the hyperparameters of DNNs such as the stacked denoising autoencoder (SdA) is hard and time-consuming. The proposed method allows SdA based features to be used without the burden of hyperparameter tuning. It was evaluated in an application for detecting cerebral aneurysms on magnetic resonance angiograms (MRA). The baseline CADe process comprises four components: scaling, candidate area limitation, candidate detection, and candidate classification. The proposed feature generation method was applied to extract optimal features for candidate classification and requires only the ranges of the SdA hyperparameters to be specified. The optimal feature set was selected from a large pool of SdA based features produced by multiple SdAs, each trained with a different hyperparameter set; the selection was performed with the AdaBoost ensemble learning method. The baseline CADe process and the proposed feature generation were trained with 200 MRA cases, and the evaluation was performed with 100 MRA cases. The proposed method successfully provided SdA based features given only the ranges of a few SdA hyperparameters. The CADe process using both the previous voxel features and the SdA based features had the best performance, with an area under the ROC curve of 0.838 and an ANODE score of 0.312. These results show that the proposed method is effective for detecting cerebral aneurysms on MRA.
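The boosting-based selection over a pooled feature set can be sketched as follows. The abstract does not specify the weak learner, so decision stumps are assumed here; with stumps, each boosting round splits on one feature, and the union of split features forms the selected subset. The data are synthetic stand-ins for features pooled from multiple SdAs trained with different hyperparameter sets:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n_samples, n_features = 500, 200

# Stand-in for features pooled from multiple SdAs, each trained with a
# different hyperparameter set sampled from the user-specified ranges.
X = rng.normal(size=(n_samples, n_features))
y = (X[:, 3] + X[:, 17] > 0).astype(int)   # synthetic labels

# Boosting with decision stumps: each round effectively picks one feature,
# so the stumps' split features form the selected feature subset.
ada = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=1),
                         n_estimators=50).fit(X, y)
selected = sorted({int(t.tree_.feature[0]) for t in ada.estimators_})
print(selected)
```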
The detection of anatomical landmarks (LMs) often plays a key role in medical image analysis. In our previous study, we reported an automatic LM detection method for CT images. Despite its high detection sensitivity, the distance errors for some LMs were relatively large, sometimes exceeding 10 mm. Naturally, it is desirable to minimize LM detection error, especially when the LM detection results are used in image analysis tasks such as image segmentation. In this study, we introduce a novel coarse-to-fine localization method that increases accuracy by refining the LM positions detected by our previous method. The proposed LM localization is performed by both multiscale local image pattern recognition and likelihood estimation from prior knowledge of the spatial distribution of multiple LMs. Classifier ensembles for recognizing local image patterns are trained by cost-sensitive MadaBoost, where the cost of each sample is altered depending on its distance from the ground truth LM position. The spatial LM distribution likelihood, calculated from a statistical model of inter-landmark distances between all LM pairs, is also used in the localization. The evaluation experiment was performed with 15 LMs in 39 CT images. The proposed localization method reduced the average distance error of the pre-detected LM positions by 2.05 mm, showing that it is effective for reducing LM detection error.
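The abstract does not give the exact cost function used in the cost-sensitive MadaBoost training; one plausible form, shown purely as an illustration, makes misclassification near the ground-truth LM cheap and misclassification far from it expensive, with a Gaussian falloff whose scale `sigma` is hypothetical:

```python
import numpy as np

def distance_based_costs(voxel_positions, gt_position, sigma=5.0):
    """Hypothetical cost assignment for cost-sensitive boosting: samples
    near the ground-truth landmark receive low cost, distant samples high
    cost (Gaussian falloff; sigma in mm is illustrative)."""
    d = np.linalg.norm(voxel_positions - gt_position, axis=1)  # mm
    return 1.0 - np.exp(-(d ** 2) / (2.0 * sigma ** 2))
```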
KEYWORDS: Medical imaging, Principal component analysis, Sensors, Range imaging, Computed tomography, Tissues, Statistical analysis, 3D image processing, Lung, Image processing
Anatomical landmarks are useful as primitive anatomical knowledge for medical image understanding. In this study, we construct a unified framework for automated detection of anatomical landmarks distributed throughout the human body. Our framework comprises three elements: (1) initial candidate detection by a local appearance matching technique based on appearance models built with PCA and generative learning, (2) false positive elimination using classifier ensembles trained by MadaBoost, and (3) final landmark set determination by a combination optimization method using Gibbs sampling with a priori knowledge of inter-landmark distances. In an evaluation of our method with 50 body-trunk CT datasets, the average sensitivity in detecting candidates for 165 landmarks was 0.948 ± 0.084, and 55 landmarks were detected with 100% sensitivity. The number of false positives per landmark was initially 462.2 ± 865.1 per case on average and was reduced to 152.8 ± 363.9 per case by the MadaBoost classifier ensembles without eliminating any true landmarks. Finally, 89.1% of landmarks were correctly selected by the final combination optimization. These results show that our framework is a promising first step toward subsequent anatomical structure recognition.
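A hedged sketch of element (3), the Gibbs-sampling combination optimization, follows. Each landmark holds a list of candidate positions; one landmark's choice is resampled at a time from its conditional distribution, scoring configurations by how well pairwise distances match a statistical model. The Gaussian per-pair distance likelihood and all names are our assumptions for illustration:

```python
import numpy as np

def gibbs_select(candidates, dist_mean, dist_sd, n_iter=500, seed=0):
    """Select one candidate per landmark via Gibbs sampling.

    candidates : list of (n_i, 3) arrays, candidate positions per landmark
    dist_mean, dist_sd : (L, L) arrays modeling inter-landmark distances
    """
    rng = np.random.default_rng(seed)
    L = len(candidates)
    state = [rng.integers(len(c)) for c in candidates]  # random init

    def log_lik(i, k):
        # Log-likelihood of candidate k for landmark i given the others,
        # under a Gaussian model of each pairwise distance (assumption).
        p = candidates[i][k]
        ll = 0.0
        for j in range(L):
            if j == i:
                continue
            d = np.linalg.norm(p - candidates[j][state[j]])
            ll += -0.5 * ((d - dist_mean[i, j]) / dist_sd[i, j]) ** 2
        return ll

    for _ in range(n_iter):
        i = int(rng.integers(L))                 # resample one landmark
        ll = np.array([log_lik(i, k) for k in range(len(candidates[i]))])
        p = np.exp(ll - ll.max())
        p /= p.sum()                             # conditional distribution
        state[i] = int(rng.choice(len(candidates[i]), p=p))
    return state
```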