Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, and deep learning has only recently been applied to this task; consequently, the robustness of deep-learning models to different image processing types has not yet been reported. We investigated the segmentation accuracy of a U-net for mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground truth of the mammary gland region prepared by mammary experts was used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare breast density in the segmented mammary gland regions with that of the ground truths. Shape agreement between the ground truth and the mammary gland region segmented by the U-net was assessed for Types 1–4 using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. Bland-Altman analysis confirmed equivalence of breast density between the ground truth and the U-net for Types 1 and 2, and compatibility for Types 3 and 4. We therefore conclude that the U-net is robust for segmenting the mammary gland region across different image processing types.
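As a reference for the shape-agreement metric used in this abstract, the following is a minimal sketch of the Dice coefficient for two binary masks; the function name, array shapes, and toy masks are illustrative assumptions and not the authors' code.

```python
# Minimal sketch: Dice coefficient between a ground-truth mask and a
# predicted segmentation mask (Dice = 2*|A ∩ B| / (|A| + |B|)).
import numpy as np

def dice_coefficient(ground_truth: np.ndarray, prediction: np.ndarray) -> float:
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    total = gt.sum() + pred.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 example: overlap of 3 pixels, mask sizes 4 and 3.
gt = np.array([[0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(dice_coefficient(gt, pred))  # 2*3 / (4+3) ≈ 0.857
```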
In individualized screening mammography, breast density is important for predicting the risk of breast cancer incidence and of missed lesions in mammographic diagnosis. Segmentation of the mammary gland region is required when focusing on missed lesions. A deep-learning method was recently developed to segment the mammary gland region. A large amount of ground truth (prepared by mammary experts) is required for highly accurate deep learning; however, this work is time- and labor-intensive. To streamline ground-truth preparation for deep learning, we investigated differences in the mammary gland regions acquired by multiple radiological technologists with various levels of experience and reading skill who shared the same segmentation criteria. If reading skill level can be disregarded, the number of training images can be increased. Three certified radiological technologists segmented the mammary gland region in 195 mammograms. The degree of agreement among them was assessed with respect to seven factors characterizing the segmented regions, including breast density and mean glandular dose, using Student's t-test and Bland-Altman analysis. The assessments made by the three radiological technologists were consistent for all factors except the mean pixel value. We therefore conclude that ground truths prepared by multiple practitioners with different levels of experience are acceptable for segmentation of the mammary gland region and can be used as training images, provided the practitioners stringently share the segmentation criteria.
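For the agreement analysis mentioned above, a minimal sketch of the Bland-Altman statistics (bias and 95% limits of agreement) for paired measurements from two readers is shown below; the variable names and example density values are illustrative assumptions, not data from the study.

```python
# Minimal sketch: Bland-Altman bias and 95% limits of agreement for
# paired breast-density measurements from two readers.
import numpy as np

def bland_altman(reader_a: np.ndarray, reader_b: np.ndarray):
    diffs = reader_a - reader_b
    bias = diffs.mean()                         # mean difference (bias)
    sd = diffs.std(ddof=1)                      # SD of the differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
    return bias, loa

# Hypothetical breast-density values (%) from two readers:
a = np.array([22.1, 35.4, 48.0, 15.2, 60.3])
b = np.array([21.5, 36.0, 47.1, 16.0, 59.8])
print(bland_altman(a, b))
```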
Recently, dense trajectories [1] have been shown to be a successful video representation for action recognition and have achieved state-of-the-art results on a variety of datasets. However, when these trajectories are applied to gesture recognition, similar and fine-grained motions are difficult to distinguish. In this paper, we propose a new method in which dense trajectories are calculated only in segmented regions around detected human body parts. Spatial segmentation is achieved by body part detection [2], and temporal segmentation is performed over a fixed number of video frames. The proposed method removes background video noise and can recognize similar and fine-grained motions. Because only a few video datasets are available for gesture classification, we constructed a new gesture dataset and evaluated the proposed method on it. The experimental results show that the proposed method outperforms the original dense trajectories.
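One way to picture the spatial restriction described in this abstract is to keep only those trajectories whose points fall inside bounding boxes returned by a body-part detector. The sketch below illustrates that filtering step under assumed data structures (a list of (x, y, w, h) boxes and per-trajectory point arrays); it is not the authors' implementation of dense trajectories.

```python
# Minimal sketch: discard dense trajectories that leave the detected
# body-part regions, so background motion does not contribute features.
import numpy as np

def filter_trajectories(trajectories, part_boxes):
    """trajectories: list of (N, 2) arrays of (x, y) points per trajectory.
    part_boxes: list of (x, y, w, h) boxes from a body-part detector."""
    def inside_any_box(pt):
        x, y = pt
        return any(bx <= x <= bx + bw and by <= y <= by + bh
                   for bx, by, bw, bh in part_boxes)
    # Keep a trajectory only if every point lies in some body-part region.
    return [t for t in trajectories if all(inside_any_box(p) for p in t)]

# Example: one trajectory inside a detected hand region, one in the background.
boxes = [(100, 100, 50, 50)]
trajs = [np.array([[110, 110], [115, 112]]), np.array([[10, 10], [12, 11]])]
print(len(filter_trajectories(trajs, boxes)))  # 1
```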
The high prevalence of cataracts remains a serious public health problem as a leading cause of blindness, especially in developing countries with limited health facilities. In this paper we propose a new screening method for cataract diagnosis using easy-to-use, low-cost imaging equipment such as commercially available digital cameras. The difficulty with this sort of digital camera equipment is that the quality of the observed images is not sufficiently controlled; there is, for example, no control of illumination. A sign of cataracts is a whitish color in the pupil, which is usually black, but it is difficult to automatically analyze color information under uncontrolled illumination conditions. To cope with this problem, we analyze specular reflection in the pupil region. When illumination light hits the pupil, it produces a specular reflection on the frontal surface of the lens within the pupil area; the light also passes through to the rear side of the lens and may be reflected again. Specular reflection always appears brighter than the surrounding area and is independent of the illumination condition, so this characteristic enables us to screen out serious cataracts robustly by analyzing the reflections observed in the eye image. In this paper, we demonstrate the validity of our method through theoretical discussion and experimental results. By following the simple guidelines presented in this paper, anyone would be able to screen for cataracts.
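To make the brightness-based idea concrete, the following is a minimal sketch, under the assumption that the pupil region has already been localized, of flagging candidate specular-reflection pixels as points much brighter than their surroundings; the margin value and toy image are illustrative assumptions rather than the paper's actual procedure.

```python
# Minimal sketch: mark pixels in a grayscale pupil crop that are far
# brighter than the local mean, as candidate specular reflections.
import numpy as np

def specular_candidates(pupil_gray: np.ndarray, margin: float = 50.0) -> np.ndarray:
    """pupil_gray: 2-D grayscale crop of the pupil region (0-255).
    Returns a boolean mask of pixels well above the local mean brightness."""
    local_mean = pupil_gray.mean()
    return pupil_gray > (local_mean + margin)

# Toy 5x5 pupil crop containing one bright reflection spot:
pupil = np.full((5, 5), 30.0)
pupil[2, 2] = 220.0   # specular highlight on the lens surface
mask = specular_candidates(pupil)
print(np.argwhere(mask))  # [[2 2]]
```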