Many studies have assessed breast density in clinical practice. However, calculating breast density requires segmentation of the mammary gland region, and deep learning has only recently been applied to this task; thus, the robustness of deep learning models to different image processing types has not yet been reported. We investigated the segmentation accuracy of a U-net on mammograms produced with various image processing types. We used 478 mediolateral oblique view mammograms, divided into 390 training images and 88 testing images. Ground-truth mammary gland regions prepared by mammary experts were used for the training and testing datasets. Four types of image processing (Types 1–4) were applied to the testing images to compare the breast density of the segmented mammary gland regions with that of the ground truths. The shape agreement between the ground truth and the mammary gland region segmented by the U-net was assessed for Types 1–4 using the Dice coefficient, and the equivalence or compatibility of breast density with the ground truth was assessed by Bland-Altman analysis. The mean Dice coefficients between the ground truth and the U-net were 0.952, 0.948, 0.948, and 0.947 for Types 1, 2, 3, and 4, respectively. Bland-Altman analysis confirmed the equivalence of breast density between the ground truth and the U-net for Types 1 and 2, and its compatibility for Types 3 and 4. We conclude that the U-net is robust for segmenting the mammary gland region across different image processing types.
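For reference, the shape-agreement metric used above can be computed as in the following minimal sketch; the array names and the toy masks are illustrative assumptions, not data from the study.

```python
import numpy as np

def dice_coefficient(ground_truth: np.ndarray, prediction: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = mammary gland, 0 = background)."""
    gt = ground_truth.astype(bool)
    pred = prediction.astype(bool)
    intersection = np.logical_and(gt, pred).sum()
    total = gt.sum() + pred.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Hypothetical 3x3 masks for illustration only
gt = np.array([[0, 1, 1], [0, 1, 1], [0, 0, 0]])
pred = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
print(f"Dice = {dice_coefficient(gt, pred):.3f}")  # 2*3 / (4+3) ≈ 0.857
```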
In individualized screening mammography, breast density is important for predicting the risk of breast cancer incidence and of missing lesions in mammographic diagnosis. Segmentation of the mammary gland region is required when focusing on the risk of missing lesions. A deep-learning method was recently developed to segment the mammary gland region. A large amount of ground truth (prepared by mammary experts) is required for highly accurate deep learning; however, preparing it is time- and labor-intensive. To streamline ground-truth preparation, we investigated differences in the segmented mammary gland regions among multiple radiological technologists with various levels of experience and reading skill who shared the same segmentation criteria. If the skill level for image reading can be disregarded, the number of training images can be increased. Three certified radiological technologists segmented the mammary gland region in 195 mammograms. The degree of coincidence among them was assessed with respect to seven factors characterizing the segmented regions, including breast density and mean glandular dose, using Student's t-test and Bland-Altman analysis. The assessments made by the three radiological technologists were consistent for all factors except the mean pixel value. We therefore conclude that ground truths prepared by multiple practitioners with different levels of experience are acceptable for segmentation of the mammary gland region and are applicable as training images, provided the practitioners stringently share the segmentation criteria.
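As a pointer to the agreement analysis mentioned above, the Bland-Altman bias and 95% limits of agreement between two readers' breast-density measurements can be sketched as follows; the reader names and sample values are hypothetical, not taken from the study.

```python
import numpy as np

def bland_altman_limits(a: np.ndarray, b: np.ndarray):
    """Bias (mean difference) and 95% limits of agreement between paired measurements."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical breast-density values (%) from two readers on the same mammograms
reader_a = np.array([23.1, 41.5, 35.2, 55.0, 18.7])
reader_b = np.array([22.4, 43.0, 34.8, 53.9, 19.5])
bias, lo, hi = bland_altman_limits(reader_a, reader_b)
print(f"bias = {bias:.2f}%, limits of agreement = [{lo:.2f}%, {hi:.2f}%]")
```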