We propose an unsupervised method to detect lung lesions on FDG-PET/CT images based on deep image anomaly detection using 2.5-dimensional (2.5D) image processing. The 2.5D processing is applied to preprocessed FDG-PET/CT images from which all image patterns other than the lung fields have been removed. It enhances lung lesions through parallel analysis of axial, coronal, and sagittal FDG-PET/CT slice images using multiple 2D U-Nets. All of the U-Nets are pretrained on 95 cases of normal FDG-PET/CT images containing no lung lesions and are used to transform CT slice images into normal FDG-PET slice images free of lesion-like SUV patterns. A lesion-enhanced image is obtained by merging the subtractions of the three transformed normal FDG-PET images from the input FDG-PET image. Lesion detection is performed by simple binarization of the lesion-enhanced images. The threshold value varies from case to case and is set to the 30th-percentile voxel value of the target lesion-enhanced image. For each extracted region, the average of the intra-regional voxel values of the enhanced image is computed and assigned as a lesion-like score. We evaluated the proposed method on FDG-PET/CT images of 27 patients with 41 lung lesions. The proposed method achieved a lesion detection sensitivity of 82.9% with five false positives per case. This result was significantly superior to the detection performance of simple FDG-PET image thresholding and indicates that the proposed method may be helpful for effective lung lesion detection. Future work includes extending the detectable range of lesions beyond the lungs, for example to the mediastinum and axillae.
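A minimal sketch of the enhancement and detection steps described above, assuming the three pretrained U-Nets are available as callables (the names unet_axial, unet_coronal, and unet_sagittal are placeholders, and the voxel-wise mean used to merge the three subtraction volumes is our assumption, since the abstract does not specify the merging operator):

import numpy as np
from scipy import ndimage


def enhance_and_detect(pet, ct, unet_axial, unet_coronal, unet_sagittal):
    """Return labeled lesion candidates and their lesion-like scores."""
    # Predict a "normal" FDG-PET volume slice by slice along each axis.
    normals = []
    for unet, axis in ((unet_axial, 0), (unet_coronal, 1), (unet_sagittal, 2)):
        slices = np.moveaxis(ct, axis, 0)
        pred = np.stack([unet(s) for s in slices])
        normals.append(np.moveaxis(pred, 0, axis))

    # Merge the three subtraction volumes (assumed here: voxel-wise mean).
    enhanced = np.mean([pet - n for n in normals], axis=0)

    # Case-adaptive threshold: the 30th-percentile voxel value of the
    # lesion-enhanced volume, as stated in the abstract.
    mask = enhanced > np.percentile(enhanced, 30)

    # Score each connected region by its mean enhanced-image value.
    labels, n = ndimage.label(mask)
    scores = ndimage.mean(enhanced, labels=labels, index=np.arange(1, n + 1))
    return labels, scores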
Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, to which deep learning has only recently been applied. Thus, the robustness of deep learning models to different image processing types has not yet been reported. We investigated the segmentation accuracy of the U-net on mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground truth of the mammary gland region, prepared by mammary experts, was used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare breast density in the segmented mammary gland regions with that of the ground truths. The shape agreement between the ground truth and the mammary gland region segmented by the U-net for Types 1–4 was assessed using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. By Bland-Altman analysis, the equivalence of breast density between the ground truth and the U-net was confirmed for Types 1 and 2, and compatibility was confirmed for Types 3 and 4. We conclude that the U-net is robust for segmenting the mammary gland region across different image processing types.
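A minimal sketch of the Dice coefficient used above to quantify shape agreement between a U-net segmentation and the expert ground truth, assuming both are binary masks of the mammary gland region:

import numpy as np


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A and B| / (|A| + |B|) for binary masks A and B."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0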
In individualized screening mammography, breast density is important for predicting the potential risks of breast cancer incidence and of missing lesions in mammographic diagnosis. Segmentation of the mammary gland region is required when focusing on missing lesions. A deep-learning method was recently developed to segment the mammary gland region. Highly accurate deep learning requires a large amount of ground truth prepared by mammary experts; however, this work is time- and labor-intensive. To streamline ground truth preparation, we investigated the differences in segmented mammary gland regions among multiple radiological technologists with various levels of experience and reading skill who shared the same segmentation criteria. If reader skill level can be disregarded, the number of training images can be increased. Three certified radiological technologists segmented the mammary gland region in 195 mammograms. The degree of coincidence among them was assessed with respect to seven factors characterizing the segmented regions, including breast density and mean glandular dose, using Student's t-test and Bland-Altman analysis. The assessments made by the three radiological technologists were consistent for all factors except the mean pixel value. Thus, we conclude that ground truths prepared by multiple practitioners with different levels of experience are acceptable for segmentation of the mammary gland region and are applicable as training images, provided the practitioners stringently share the segmentation criteria.
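A minimal sketch of the inter-reader agreement statistics named above, assuming a and b are paired per-mammogram measurements of one factor (e.g., breast density) from two readers; the use of the paired form of Student's t-test is our assumption, since the readers segmented the same 195 mammograms:

import numpy as np
from scipy import stats


def reader_agreement(a: np.ndarray, b: np.ndarray):
    """Return the paired t-test p-value plus Bland-Altman bias and
    95% limits of agreement for two readers' paired measurements."""
    _, p_value = stats.ttest_rel(a, b)   # paired Student's t-test
    diff = a - b
    bias = diff.mean()                   # mean difference between readers
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    return p_value, bias, limits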
We have developed a near-infrared hyperspectral imaging system that can acquire both spectral and spatial data covering a 50-degree field at the fundus surface within 5 seconds. Single-wavelength-band reflectance images with a bandwidth of 20 nm have demonstrated that choroidal vascular patterns are clearly observed as bright structures for central wavelengths ranging from 740 to 860 nm, while retinal blood vessels appear as dark structures for central wavelengths ranging from 740 to 920 nm. For clinical use, it is desirable to separate the choroidal vascular pattern image from the retinal blood vessel image. To this end, we applied a decorrelation stretch to the processing of the spectral images and found the following. The original fundus spectral images contain stripe noise; because the decorrelation stretch emphasizes this noise, it must first be removed, for example with a DCT (Discrete Cosine Transform) filter. The choroidal vascular image can then be successfully separated from the retinal vascular image. Furthermore, the macula is superimposed on the latter, as expected from anatomy. These results suggest that useful information may be extracted by combining hyperspectral images with the decorrelation stretch.
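A minimal sketch of a generic PCA-based decorrelation stretch applied to a noise-filtered hyperspectral cube (height x width x bands); this is a standard formulation, not necessarily the authors' exact pipeline, and target_sigma is an illustrative contrast parameter:

import numpy as np


def decorrelation_stretch(cube: np.ndarray, target_sigma: float = 50.0):
    """Decorrelate the spectral bands, equalize their variance, rotate back."""
    h, w, bands = cube.shape
    x = cube.reshape(-1, bands).astype(np.float64)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)            # principal axes of the bands
    # Whiten along each principal axis, then rescale to a common contrast.
    scale = target_sigma / np.sqrt(np.maximum(eigval, 1e-12))
    transform = eigvec @ np.diag(scale) @ eigvec.T
    stretched = (x - mean) @ transform + mean
    return stretched.reshape(h, w, bands)

Because the stretch equalizes variance along the principal axes of the band covariance, structures that differ mainly in their spectral signature, such as choroidal versus retinal vessels, are pushed into contrasting channels, which is why prior removal of stripe noise matters: the noise otherwise dominates one of the low-variance axes and is amplified.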