KEYWORDS: Image segmentation, Kidney, Magnetic resonance imaging, Education and training, Data modeling, 3D modeling, Signal intensity, Performance modeling, 3D image processing, Data acquisition
Segmentation of the renal parenchyma, which is responsible for renal function, is necessary for surgical planning and decision-making in renal partial nephrectomy (RPN), because it allows the correlation between renal parenchyma volume and post-RPN renal function to be identified on abdominal magnetic resonance (MR) images without radiation exposure. This paper proposes a cascaded self-adaptive framework that uses local context-aware mix-up regularization on abdominal MR images acquired from multiple devices. The proposed renal parenchyma segmentation network consists of two stages: kidney bounding volume extraction and renal parenchyma segmentation. Before kidney bounding volume extraction, self-adaptive normalization is performed using nnU-Net as the backbone network to reduce differences in signal intensity and pixel spacing among MR images with different intensity ranges acquired from multiple MR devices. In the kidney bounding volume extraction stage, the renal parenchyma area is segmented using a 3D U-Net on low-resolution data, down-sampled twice from the original, to efficiently localize the kidney in the abdomen. A bounding volume is then generated by cropping the volume-of-interest region using the segmentation results up-sampled to the original resolution, so that the renal parenchyma segmentation stage can focus on the renal parenchyma area. In the segmentation stage, the renal parenchyma is segmented using a 3D U-Net on the mix-up-augmented bounding volume to improve the regularization performance of the model. The average F1-score of our method was 92.27%, which was 3.07%p and 0.32%p higher than the segmentation methods using the original 3D cascaded nnU-Net and the 3D cascaded nnU-Net with kidney bounding volume extraction, respectively.
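As an illustration of the mix-up regularization applied to the cropped bounding volumes, the minimal sketch below blends two training volumes and their labels with a Beta-distributed coefficient. The function name, the alpha value, and the plain volume-level formulation are assumptions for illustration; the paper's local context-aware variant is not reproduced here.

```python
import numpy as np

def mixup_volumes(vol_a, vol_b, lbl_a, lbl_b, alpha=0.2, rng=None):
    """Blend two cropped kidney bounding volumes and their one-hot labels
    with a Beta-distributed coefficient (standard mix-up)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    mixed_vol = lam * vol_a + (1.0 - lam) * vol_b
    mixed_lbl = lam * lbl_a + (1.0 - lam) * lbl_b
    return mixed_vol, mixed_lbl
```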
Accurate segmentation of the pancreas on abdominal CT images is a prerequisite for understanding the shape of the pancreas in pancreatic cancer diagnosis, surgery, and treatment planning. However, pancreas segmentation is very challenging due to the high intra- and inter-patient variability of the pancreas and its poor contrast with surrounding organs. In addition, uncertain regions arising from variability in the location and morphology of the pancreas lead to over- or under-segmentation. Therefore, the purpose of this study is to improve pancreas segmentation by increasing the confidence of the highly uncertain regions caused by these characteristics through a multi-scale prediction network (MP-Net). First, the pancreas is localized using U-Net-based 2D segmentation networks on the three orthogonal planes, whose results are combined through majority voting. Second, the localized pancreas is segmented using a 2D MP-Net that accounts for pancreatic uncertainty through multi-scale prediction results. The average F1-score, recall, and precision of the proposed pancreas segmentation method were 78.60%, 78.44%, and 79.72%, respectively. Our deep pancreas segmentation method can be used to reduce intra- and inter-patient variations in understanding the shape of the pancreas, which is helpful for cancer diagnosis, surgery, and treatment planning.
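As an illustration of the majority voting used to fuse the axial, coronal, and sagittal localization results, the sketch below keeps voxels labeled pancreas in at least two of the three views; the function name and the assumption that the per-plane masks have already been resampled to a common 3D grid are illustrative, not taken from the paper.

```python
import numpy as np

def majority_vote(mask_axial, mask_coronal, mask_sagittal):
    """Fuse three binary masks, resampled to a common 3D grid, by keeping
    voxels predicted as pancreas in at least two of the three views."""
    votes = (mask_axial.astype(np.uint8)
             + mask_coronal.astype(np.uint8)
             + mask_sagittal.astype(np.uint8))
    return votes >= 2
```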
Segmentation of the renal parenchyma responsible for renal function is necessary to evaluate contralateral renal hypertrophy and to predict renal function after renal partial nephrectomy (RPN). Although most studies have segmented the kidney on CT images to analyze renal function, analysis without radiation exposure is also required, which can be achieved by segmenting the renal parenchyma on MR images. However, renal parenchyma segmentation is difficult due to the small area of the kidney in the abdomen, blurred boundaries, large variations in kidney shape among patients, and intensities similar to those of nearby organs such as the liver, spleen, and vessels. Furthermore, signal intensity differs between scans because of patient motion during abdominal MR acquisition, even when the images are acquired with the same device. Therefore, we propose a cascaded deep convolutional neural network for renal parenchyma segmentation with signal intensity correction in abdominal MR images. First, intensity normalization is performed on the whole MR image. Second, the kidney is localized using 2D segmentation networks based on attention U-Net on the axial, coronal, and sagittal planes, whose results are combined through majority voting. Third, signal intensity correction between scans is performed in the localized kidney area. Finally, the renal parenchyma is segmented using a 3D segmentation network based on UNet++. The average DSC of the renal parenchyma was 91.57%. Our method can be used to assess contralateral renal hypertrophy and to predict renal function by measuring the volume change of the renal parenchyma on MR images without radiation exposure instead of CT images, and can establish a basis for treatment after RPN.
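As a sketch of the whole-image intensity normalization step, the snippet below applies a z-score normalization, optionally restricted to a foreground mask; the function name and the z-score formulation are assumptions, since the paper's exact normalization and the later kidney-area signal intensity correction are not reproduced here.

```python
import numpy as np

def normalize_intensity(volume, mask=None):
    """Z-score normalization of an MR volume; statistics are taken inside an
    optional foreground mask so the background does not dominate them."""
    voxels = volume[mask > 0] if mask is not None else volume.ravel()
    return (volume - voxels.mean()) / (voxels.std() + 1e-8)
```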
Meniscus segmentation from knee MR images is an essential step in finding the most suitable implant prototype for meniscus allograft transplantation using a 3D reconstruction model of the patient's normal meniscus. However, segmentation of the meniscus is challenging due to its thin shape, intensities similar to those of nearby structures such as the cruciate and collateral ligaments in knee MR images, large shape variations among patients, and inhomogeneous intensity within the meniscus. In addition, conventional deep convolutional neural network (DCNN)-based meniscus segmentation methods mainly use a pixel-wise objective function, and thus tend to under-segment the small meniscus or suffer from false positives around it. To overcome these limitations, we propose a two-stage DCNN that combines a 2D U-Net-based meniscus localization network with a conditional generative adversarial network-based segmentation network. To efficiently localize the medial and lateral meniscus regions and feed the localized ROIs to the segmentation network, a 2D U-Net-based DCNN segments the knee MR images into six classes. To segment the medial and lateral meniscus while preventing under-segmentation due to intensity inhomogeneity within the meniscus and over-segmentation due to intensity similarity with surrounding structures, adversarial learning is performed repeatedly on the localized meniscus ROIs. The average DSC was 84.06% for the medial meniscus and 83.19% for the lateral meniscus. These results showed that the proposed method prevented the meniscus from being over- and under-segmented by repeatedly judging and complementing the quality of the segmentation results through adversarial learning.
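As an illustration of the adversarial learning on the localized meniscus ROIs, the PyTorch sketch below combines a pixel-wise segmentation loss with an adversarial term that rewards predictions the discriminator scores as expert-like; the function name, the loss choices, and the weighting are assumptions rather than the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def segmentation_generator_loss(seg_logits, target, disc_logits_on_pred,
                                adv_weight=0.1):
    """Pixel-wise BCE segmentation loss plus an adversarial term that pushes
    the discriminator to rate the predicted meniscus mask as 'real'."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, target)
    adv_loss = F.binary_cross_entropy_with_logits(
        disc_logits_on_pred, torch.ones_like(disc_logits_on_pred))
    return seg_loss + adv_weight * adv_loss
```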
We propose an automatic meniscus segmentation method for knee MR images using a cascaded segmentation network consisting of 2D and 3D convolutional neural networks and 2D conditional random fields (CRFs). First, a 2D segmentation network and a 2D CRF are applied to narrow the field of view to the medial and lateral meniscus. Second, a 3D segmentation network that considers local and spatial information segments the medial and lateral meniscus. The 2D segmentation network showed under-segmentation inside the meniscus. This under-segmentation was prevented by the 2D CRF, but over-segmentation occurred in nearby ligaments with similar intensity. The 3D segmentation network prevented both under- and over-segmentation by considering local and spatial information, and showed the best performance. The average Dice similarity coefficients of the proposed method were 92.27% for the medial meniscus and 90.27% for the lateral meniscus, improvements of 4.78% and 9.96% for the medial meniscus and 3.94% and 9.58% for the lateral meniscus compared with the segmentation methods using the 2D U-Net results and the combined 2D U-Net and 2D CRF, respectively. The medial meniscus shows higher accuracy than the lateral meniscus due to less leakage into the collateral ligament.
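The Dice similarity coefficient reported here (and in the following abstracts) can be computed from a predicted and a reference binary mask as in the sketch below; the function name and the small smoothing constant are illustrative choices.

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary 3D masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)
```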
Segmentation of the renal parenchyma, which consists of the renal cortex and medulla responsible for renal function, is necessary to evaluate contralateral renal hypertrophy and to predict renal function after renal partial nephrectomy (RPN). However, segmentation of the renal parenchyma is difficult due to large variations in kidney shape among patients and intensities similar to those of nearby organs such as the liver, spleen, vessels, and the collecting system. Therefore, we propose an automatic renal parenchyma segmentation method based on 2D and 3D deep convolutional neural networks with similar-atlas selection and transformation in abdominal CT images. First, the kidney is localized using 2D segmentation networks based on U-Net on the axial, coronal, and sagittal planes, whose results are combined through majority voting. Second, the atlases in the training set most similar to the test volume are selected by calculating the mutual information between the kidney test volume and each training volume, and are then transformed to the test volume using volume-based affine registration. Finally, the renal parenchyma is segmented using a 3D segmentation network based on U-Net. The average Dice similarity coefficient of the renal parenchyma was 94.59%, which was 10.41% and 0.80% higher than the segmentation methods using the fusion of the three 2D segmentation network results and the combined 2D and 3D segmentation networks, respectively. Our method can be used to assess contralateral renal hypertrophy and to predict renal function by measuring the volume change of the renal parenchyma, and can establish a basis for treatment after RPN.
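As an illustration of the similarity measure used for atlas selection, the sketch below estimates histogram-based mutual information between two co-registered volumes; the function name, the bin count, and the joint-histogram formulation are assumptions for illustration.

```python
import numpy as np

def mutual_information(vol_a, vol_b, bins=64):
    """Histogram-based mutual information between two co-registered volumes,
    used to rank training atlases by similarity to the test kidney volume."""
    joint, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of vol_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of vol_b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))
```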
We propose an automatic segmentation method for the meniscus in knee MR images using multi-atlas segmentation and patch-based edge classification. To prevent registration to large tissues, the meniscus is targeted using segmented bone and articular cartilage information. To segment the meniscus, which has large shape variations, and to robustly remove leakage into the collateral ligaments, the meniscus is segmented using shape- and intensity-based locally-weighted voting (LWV) and patch-based edge classification. Experimental results show that the Dice similarity coefficient of the proposed method, compared with two manual outlining results, is over 80% on average and is improved compared with multi-atlas-based LWV.
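As an illustration of the locally-weighted voting used to fuse the registered atlases, the sketch below weights each atlas vote per voxel by a Gaussian kernel on its intensity difference from the target; the function name, the kernel bandwidth, and the intensity-only weighting (the shape term is omitted) are assumptions for illustration.

```python
import numpy as np

def locally_weighted_voting(atlas_labels, atlas_images, target_image, sigma=50.0):
    """Intensity-based LWV: each registered atlas votes per voxel, weighted by
    a Gaussian kernel on its intensity difference from the target image."""
    weights = [np.exp(-((img - target_image) ** 2) / (2.0 * sigma ** 2))
               for img in atlas_images]
    numerator = sum(w * lbl for w, lbl in zip(weights, atlas_labels))
    denominator = sum(weights) + 1e-8
    return (numerator / denominator) > 0.5
```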
Segmentation of the renal parenchyma, consisting of the cortex and the medulla responsible for renal function, is necessary to assess contralateral renal hypertrophy and to predict renal function after renal partial nephrectomy (RPN). In this paper, we propose an automatic renal parenchyma segmentation method for abdominal CT images using multi-atlas methods with intensity and shape constraints. First, atlas selection is performed to select the training images that are similar in appearance to the target image, using volume-based registration and intensity similarity. Second, the renal parenchyma is segmented using volume- and model-based registration and intensity-constrained locally-weighted voting to separate the cortex and medulla, which have different intensities. Finally, the cortex and medulla are refined with a threshold value selected by applying a Gaussian mixture model and a cortex slab accumulation map, to reduce leakage into adjacent organs with intensities similar to the medulla and under-segmented areas whose intensity is lower than that of the training set. The average Dice similarity coefficient of the renal parenchyma was 92.68%, which was 15.84% and 2.47% higher than the segmentation methods using majority voting and intensity-constrained locally-weighted voting, respectively. Our method can be used to assess contralateral renal hypertrophy and to predict renal function by measuring the volume change of the renal parenchyma, and can establish a basis for treatment after renal partial nephrectomy.
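As a sketch of how a refinement threshold can be derived from a Gaussian mixture model fitted to parenchyma intensities, the snippet below takes the midpoint between the two sorted component means; this simplified criterion, the function name, and the fixed random seed are assumptions, and the cortex slab accumulation map is not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_threshold(parenchyma_voxels, n_components=2):
    """Fit a two-component Gaussian mixture to parenchyma intensities and use
    the midpoint between the sorted component means as a refinement threshold."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(parenchyma_voxels.reshape(-1, 1))
    means = np.sort(gmm.means_.ravel())
    return float(means[:2].mean())
```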
In spite of its clinical importance in the diagnosis of osteoarthritis, segmentation of cartilage in knee MRI remains a challenging task due to its shape variability and low contrast with surrounding soft tissues and synovial fluid. In this paper, we propose a multi-atlas segmentation of cartilage in knee MRI with sequential atlas registrations and locally-weighted voting (LWV). First, bone is segmented by sequential volume- and object-based registrations and LWV. Second, to overcome the shape variability of cartilage, cartilage is segmented by bone-mask-based registration and LWV. In experiments, the proposed method improved bone segmentation by reducing misclassified bone regions and enhanced cartilage segmentation by preventing cartilage leakage into surrounding regions of similar intensity, with the help of the sequential registrations and LWV.
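As an illustration of a single registration step in such a sequential atlas registration pipeline, the sketch below runs a mutual-information-driven affine registration with SimpleITK and resamples the atlas into the target space; the function name, the optimizer settings, and the choice of a simple affine transform are assumptions, not the paper's configuration.

```python
import SimpleITK as sitk

def affine_register(fixed, moving):
    """Affine registration of a moving atlas to a fixed target image using
    Mattes mutual information, followed by resampling into the target space."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    reg.SetInitialTransform(sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY))
    reg.SetInterpolator(sitk.sitkLinear)
    transform = reg.Execute(fixed, moving)
    # Resample the moving atlas (or its label map) into the fixed image space.
    return sitk.Resample(moving, fixed, transform, sitk.sitkLinear, 0.0)
```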