Purpose: Robust and accurate segmentation of the intracochlear anatomy (ICA) is a critical step in the image-guided cochlear implant programming process. We have proposed an active shape model (ASM)-based method and a deep learning (DL)-based method for this task, and we have observed that the DL method tends to be more accurate than the ASM method, while the ASM method tends to be more robust.
Approach: We propose a DL-based U-Net-like architecture that incorporates the ASM segmentation into the network. A quantitative analysis is performed on a dataset of 11 cochlea specimens for which a segmentation ground truth is available. To qualitatively evaluate the robustness of the method, an experienced expert is asked to visually inspect and grade the segmentation results on a clinical dataset of 138 image volumes acquired with conventional CT scanners and 39 image volumes acquired with cone beam CT (CBCT) scanners. Finally, we compare training the network (1) first with the ASM results and then fine-tuning it with the ground truth segmentations, and (2) directly with the specimens for which ground truth segmentations are available.
Results: Quantitative and qualitative results show that the proposed method substantially increases the robustness of the DL method while having only a minor, and not statistically significant, detrimental effect on its accuracy. Expert evaluation of the clinical dataset shows that incorporating the ASM segmentation into the DL network increases the proportion of good segmentation cases from 60/177 to 119/177 when training only with the specimens, and from 129/177 to 151/177 when pretraining with the ASM results.
Conclusions: A hybrid ASM and DL-based segmentation method is proposed to segment the ICA in CT and CBCT images. Our results show that combining DL and ASM methods leads to a solution that is both robust and accurate.
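As a rough illustration of the architecture described in the Approach above, the sketch below concatenates the CT intensity volume with the ASM segmentation as a second input channel of a small 3D U-Net-like network in PyTorch. The class count, channel widths, and two-level depth are placeholder choices, not the exact architecture used in the paper.

```python
# Minimal sketch (not the authors' exact architecture): a small 3D U-Net-like
# network whose input is the CT intensity volume concatenated with the ASM
# segmentation, so the network learns to refine the ASM result.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, 3, padding=1), nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class HybridASMUNet(nn.Module):
    """U-Net-like refinement network; channel sizes are illustrative."""
    def __init__(self, n_classes=3):
        super().__init__()
        # 1 CT channel + 1 ASM label-map channel (a one-hot encoding would also work).
        self.enc1 = conv_block(2, 16)
        self.enc2 = conv_block(16, 32)
        self.bottleneck = conv_block(32, 64)
        self.pool = nn.MaxPool3d(2)
        self.up2 = nn.ConvTranspose3d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv3d(16, n_classes, 1)

    def forward(self, ct, asm_seg):
        x = torch.cat([ct, asm_seg], dim=1)   # (B, 2, D, H, W)
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                  # per-voxel class logits

# Example forward pass on a dummy patch.
model = HybridASMUNet(n_classes=3)
ct = torch.randn(1, 1, 64, 64, 64)                       # CT intensity patch
asm = torch.randint(0, 3, (1, 1, 64, 64, 64)).float()    # ASM label patch
logits = model(ct, asm)                                  # (1, 3, 64, 64, 64)
```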
Cochlear implants (CIs) are neuroprosthetic devices that can improve hearing in patients with severe-to-profound hearing loss. Postoperatively, a CI device needs to be programmed by an audiologist to determine parameter settings that lead to the best outcomes. Our group has developed an image-guided cochlear implant programming (IGCIP) system to simplify this laborious post-programming procedure and improve hearing outcomes. IGCIP requires image processing techniques to analyze the location of the inserted electrode arrays (EAs) with respect to the intracochlear anatomy (ICA). An active shape model (ASM)-based method is currently in routine use in our IGCIP system for ICA segmentation. Recently, we have proposed a hybrid ASM/deep learning (DL) segmentation method that improves segmentation accuracy. In this work, we first evaluate the effect of this method on so-called distance-vs.-frequency curves (DVFs), which permit visualization of electrode interaction and are used to provide programming guidance. An expert evaluation study is then performed in which the electrodes are manually configured based on the DVFs, and the quality of the electrode configurations derived from the ASM and hybrid ASM/DL segmentations is graded against those derived from the ground truth segmentation. Our results show that the hybrid ASM/DL segmentation technique tends to generate DVFs with smaller frequency and distance errors, and electrode configurations that are comparable to those of the existing ASM-based method.
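The minimal sketch below illustrates one plausible way to compute a distance-vs.-frequency curve, assuming that a DVF pairs each electrode contact with its closest point on a set of anatomy surface points and maps that point's normalized cochlear depth to a characteristic frequency with the Greenwood place-frequency function. The inputs and this exact definition are illustrative placeholders rather than the implementation used in this work.

```python
# Illustrative sketch only: build a distance-vs-frequency curve by pairing each
# electrode contact with its closest anatomy surface point, then mapping that
# point's normalized cochlear depth to a characteristic frequency with the
# Greenwood place-frequency function. All inputs below are synthetic placeholders.
import numpy as np

def greenwood_frequency(x_from_apex):
    """Greenwood (1990) human map; x is relative distance from the apex in [0, 1]."""
    return 165.4 * (10.0 ** (2.1 * x_from_apex) - 0.88)

def distance_vs_frequency(electrodes_mm, surface_pts_mm, surface_depth):
    """electrodes_mm: (E, 3); surface_pts_mm: (S, 3); surface_depth: (S,) in [0, 1]."""
    # Pairwise distances between every electrode and every surface point.
    d = np.linalg.norm(electrodes_mm[:, None, :] - surface_pts_mm[None, :, :], axis=2)
    nearest = d.argmin(axis=1)                         # closest surface point per electrode
    distances = d[np.arange(len(electrodes_mm)), nearest]
    frequencies = greenwood_frequency(surface_depth[nearest])
    return distances, frequencies                      # one (distance, frequency) pair per electrode

# Synthetic example: 12 contacts and 500 surface points.
rng = np.random.default_rng(0)
electrodes = rng.uniform(0, 5, size=(12, 3))
surface = rng.uniform(0, 5, size=(500, 3))
depth = rng.uniform(0, 1, size=500)
dist, freq = distance_vs_frequency(electrodes, surface, depth)
```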
Cochlear implants (CIs) are surgically implanted neural prosthetic devices used to treat severe-to-profound hearing loss. Accurately localizing the CI electrodes relative to the intracochlear anatomy structures (ICAS) in the post-implantation CT (Post-CT) images of CI recipients can help audiologists with the post-programming of the CIs. Localizing the electrodes and segmenting the ICAS in the Post-CT images are challenging due to the limited image resolution and the strong artifacts produced by the metallic electrodes. Currently, the most accurate approach to determine the physical relationship between the electrodes and the ICAS is to localize the electrodes in the Post-CT image, segment the ICAS in the pre-implantation CT (Pre-CT) image of the CI recipient, and register the two images. Here we propose a 3D multi-task network that simultaneously removes the artifacts, segments the ICAS, and localizes the electrodes in the Post-CT images. Our network is trained with a small image set and achieves segmentation results comparable to, and encouraging electrode localization results relative to, the current state-of-the-art methods. Because our method does not require Pre-CT images, it provides the audiologist with information that guides the programming process even for patients for whom these images are not available.
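A minimal sketch of the general multi-task idea is shown below: a shared 3D encoder feeding three task-specific heads that output an artifact-corrected image, ICAS segmentation logits, and an electrode-localization heatmap. The layer sizes, class count, and loss weighting are assumptions, not the network described above.

```python
# Sketch of the multi-task idea (shared 3D encoder, three task heads); the
# layer sizes and output parameterizations are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskPostCTNet(nn.Module):
    def __init__(self, n_icas_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Head 1: artifact-corrected image (voxel-wise regression).
        self.artifact_head = nn.Conv3d(32, 1, 1)
        # Head 2: ICAS segmentation logits.
        self.seg_head = nn.Conv3d(32, n_icas_classes, 1)
        # Head 3: electrode localization as a per-voxel heatmap.
        self.electrode_head = nn.Conv3d(32, 1, 1)

    def forward(self, post_ct):
        features = self.encoder(post_ct)
        return (self.artifact_head(features),
                self.seg_head(features),
                torch.sigmoid(self.electrode_head(features)))

# A combined loss would weight the three tasks, e.g.:
# loss = l1(denoised, target_ct) + ce(seg_logits, labels) + bce(heatmap, electrode_map)
model = MultiTaskPostCTNet()
denoised, seg_logits, electrode_heatmap = model(torch.randn(1, 1, 32, 32, 32))
```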
Cochlear implants (CIs) are surgically implanted neural prosthetic devices used to treat severe-to-profound hearing loss. Our group has developed Image-Guided Cochlear Implant Programming (IGCIP) techniques to assist audiologists with the configuration of the implanted CI electrodes. CI programming is sensitive to the spatial relationship between the electrodes and intracochlear anatomy (ICA) structures. We have developed algorithms that permit determining the position of the electrodes relative to the ICA structures using pre- and post-implantation CT image pairs. However, these do not extend to CI recipients for whom pre-implantation CT (Pre-CT) images are not available, because post-implantation CT (Post-CT) images are affected by strong artifacts introduced by the metallic implant. Recently, we proposed an approach that uses conditional generative adversarial nets (cGANs) to synthesize Pre-CT images from Post-CT images. This permits the use of algorithms designed to segment Pre-CT images even when these are not available. We have shown that it substantially and significantly improves the results obtained with our previously published methods that segment Post-CT images directly. Here we evaluate the effect of this new approach on the final output of our IGCIP techniques, which is the configuration of the CI electrodes, by comparing configurations obtained using the real and the synthetic Pre-CT images. In 22/87 cases, the synthetic images lead to the same results as the real images. Because more than one configuration may lead to equivalent neural stimulation patterns, visual assessment of solutions is required to compare those that differ. This study is ongoing.
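For readers unfamiliar with conditional GANs, the sketch below shows a generic pix2pix-style training step in which a generator maps a Post-CT patch to a synthetic Pre-CT patch and a discriminator judges (Post-CT, Pre-CT) pairs. The toy networks, optimizer settings, and L1 weight are placeholders, not the specific cGAN configuration used in this work.

```python
# Generic pix2pix-style cGAN training step for Post-CT -> synthetic Pre-CT.
# The generator/discriminator definitions and the L1 weight are placeholders;
# in practice the generator would be a U-Net and the discriminator a patch-based CNN.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 1, 3, padding=1))
discriminator = nn.Sequential(nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(), nn.Conv3d(16, 1, 3, padding=1))

adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(post_ct, pre_ct, lambda_l1=100.0):
    # Discriminator: real (Post-CT, Pre-CT) pairs vs. synthetic pairs.
    fake_pre = generator(post_ct)
    d_real = discriminator(torch.cat([post_ct, pre_ct], dim=1))
    d_fake = discriminator(torch.cat([post_ct, fake_pre.detach()], dim=1))
    d_loss = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # Generator: fool the discriminator while staying close to the real Pre-CT.
    d_fake = discriminator(torch.cat([post_ct, fake_pre], dim=1))
    g_loss = adv_loss(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1_loss(fake_pre, pre_ct)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()

# One step on a dummy paired patch.
train_step(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32))
```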
Cochlear implants (CIs) are neuroprosthetic devices that can improve hearing in patients with severe-to-profound hearing loss. Postoperatively, a CI device needs to be programmed by an audiologist to determine parameter settings that lead to the best outcomes. Recently, our group has developed an image-guided cochlear implant programming (IGCIP) system to simplify the traditionally tedious post-programming procedure and improve hearing outcomes. IGCIP requires image processing techniques to analyze the location of inserted electrode arrays (EAs) with respect to the intra-cochlear anatomy (ICA), and robust and accurate segmentation methods for the ICA are a critical step in the process. We have proposed an active shape model (ASM)-based method and a deep learning (DL)-based method for this task, and we have observed that DL methods tend to be more accurate than ASM methods, while ASM methods tend to be more robust. In this work, we propose a U-Net-like architecture that incorporates the ASM segmentation into the network so that it can refine the provided ASM segmentation based on the CT intensity image. Our results show that the proposed method can achieve the same segmentation accuracy as the DL-based method and the same robustness as the ASM-based method.
Cochlear implants (CIs) use electrode arrays that are surgically inserted into the cochlea to treat patients with hearing loss. For CI recipients, sound bypasses the natural transduction mechanism and directly stimulates the neural regions, thus creating a sense of hearing. Post-operatively, CIs need to be programmed. Traditionally, this is done by an audiologist who is blind to the positions of the electrodes relative to the cochlea and relies only on the subjective response of the patient. Multiple programming sessions are usually needed, which can take a frustratingly long time. We have developed an image-guided cochlear implant programming (IGCIP) system to facilitate the process. In IGCIP, we segment the intra-cochlear anatomy and localize the electrode arrays in the patient’s head CT image. By utilizing their spatial relationship, we can suggest programming settings that can significantly improve hearing outcomes. To segment the intra-cochlear anatomy, we use an active shape model (ASM)-based method. Though it produces satisfactory results in most cases, sub-optimal segmentations still occur. As an alternative, herein we explore using a deep learning method to perform the segmentation task. Large image sets with accurate ground truth (in our case manual delineations) are typically needed to train a deep learning model for segmentation, but such a dataset does not exist for our application. To tackle this problem, we use segmentations generated by the ASM-based method to pre-train the model and fine-tune it on a small image set for which accurate manual delineations are available. Using this method, we achieve better results than the ASM-based method.
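The two-stage training strategy can be summarized by the schematic loop below, which first trains on the large ASM-labelled set and then fine-tunes on the small manually delineated set at a lower learning rate. The model, data loaders, epoch counts, and learning rates are illustrative placeholders, not the settings used in the paper.

```python
# Sketch of the two-stage strategy: pre-train on ASM-generated labels,
# then fine-tune on the small manually delineated set.
import torch
import torch.nn as nn

def run_epochs(model, loader, optimizer, loss_fn, n_epochs):
    model.train()
    for _ in range(n_epochs):
        for image, label in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(image), label)
            loss.backward()
            optimizer.step()

def pretrain_then_finetune(model, asm_loader, manual_loader):
    loss_fn = nn.CrossEntropyLoss()
    # Stage 1: large set labelled automatically by the ASM-based method.
    run_epochs(model, asm_loader, torch.optim.Adam(model.parameters(), lr=1e-3), loss_fn, n_epochs=50)
    # Stage 2: small set with accurate manual delineations, lower learning rate.
    run_epochs(model, manual_loader, torch.optim.Adam(model.parameters(), lr=1e-4), loss_fn, n_epochs=20)
    return model

# Tiny dummy example (replace with real CT patches and label volumes).
dummy = [(torch.randn(2, 1, 16, 16, 16), torch.randint(0, 3, (2, 16, 16, 16))) for _ in range(4)]
net = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.Conv3d(8, 3, 1))
pretrain_then_finetune(net, dummy, dummy)
```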
Chronic graft-versus-host disease (cGVHD) is a frequent and potentially life-threatening complication of allogeneic hematopoietic stem cell transplantation (HCT) and commonly affects the skin, resulting in distressing patient morbidity. The percentage of involved body surface area (BSA) is commonly used for diagnosing and scoring the severity of cGVHD. However, segmentation of the involved BSA from serial whole-body patient photography is challenging because (1) it is difficult to design traditional segmentation methods that rely on hand-crafted features, as the appearance of cGVHD lesions can differ drastically from patient to patient; and (2) to the best of our knowledge, there is currently no publicly available labelled image set of cGVHD skin for training deep networks to segment the involved BSA. In this preliminary study we create a small labelled image set of skin cGVHD, and we explore the possibility of using a fully convolutional neural network (FCN) to segment the skin lesions in the images. We use a commercial stereoscopic Vectra H1 camera (Canfield Scientific) to acquire ~400 3D photographs of 17 cGVHD patients aged between 22 and 72. A rotational data augmentation process is then applied, which rotates the 3D photos through 10 predefined angles, producing one 2D projection image at each position. This results in ~4000 2D images that constitute our cGVHD image set. An FCN model is trained and tested using our images. We show that our method achieves encouraging results for segmenting cGVHD skin lesions in photographic images.
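The rotational augmentation step can be pictured with the simplified stand-in below, which rotates a 3D point cloud through 10 predefined angles about the vertical axis and orthographically projects each rotation onto a 2D grid. The actual pipeline renders textured 3D photographs, so the data, angles, and resolution here are only illustrative.

```python
# Simplified stand-in for the rotational augmentation step: rotate a 3D point
# cloud through 10 predefined angles about the vertical axis and project each
# rotation orthographically onto a 2D image grid.
import numpy as np

def rotation_about_y(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def project_to_image(points, size=256):
    """Orthographic projection of (N, 3) points onto a (size, size) binary mask."""
    xy = points[:, :2]
    xy = (xy - xy.min(axis=0)) / (np.ptp(xy, axis=0) + 1e-8)    # normalize to [0, 1]
    cols = np.clip((xy[:, 0] * (size - 1)).astype(int), 0, size - 1)
    rows = np.clip(((1.0 - xy[:, 1]) * (size - 1)).astype(int), 0, size - 1)
    image = np.zeros((size, size), dtype=np.uint8)
    image[rows, cols] = 1
    return image

def rotational_augmentation(points, n_views=10):
    angles = np.linspace(0.0, 2.0 * np.pi, n_views, endpoint=False)
    return [project_to_image(points @ rotation_about_y(a).T) for a in angles]

# Dummy stand-in for a 3D photograph: a random point cloud.
views = rotational_augmentation(np.random.default_rng(0).normal(size=(5000, 3)))
```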
KEYWORDS: Image registration, Head, Magnetic resonance imaging, Brain, 3D image processing, Medical imaging, 3D modeling, Data modeling, Image segmentation, Neuroimaging
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. We develop a learning-based method to automatically find a set of robust landmarks in three-dimensional MR image volumes of the head. These landmarks are then used to compute a thin plate spline-based initialization transformation. The process involves two steps: (1) identifying a set of landmarks that can be reliably localized in the images and (2) selecting among them the subset that leads to a good initial transformation. To validate our method, we use it to initialize five well-established deformable registration algorithms that are subsequently used to register an atlas to MR images of the head. We compare our proposed initialization method with a standard approach that involves estimating an affine transformation with an intensity-based approach. We show that for all five registration algorithms the final registration results are statistically better when they are initialized with the method that we propose than when a standard approach is used. The technique that we propose is generic and could be used to initialize nonrigid registration algorithms for other applications.
KEYWORDS: Neuroimaging, Image registration, Head, Magnetic resonance imaging, Medical imaging, 3D image processing, Machine learning, Image segmentation, Brain, 3D modeling, Data modeling
Medical image registration establishes a correspondence between images of biological structures, and it is at the core of many applications. Commonly used deformable image registration methods depend on a good preregistration initialization. The initialization can be performed by localizing homologous landmarks and calculating a point-based transformation between the images; the selection of the landmarks, however, is important. In this work, we present a learning-based method to automatically find a set of robust landmarks in 3D MR image volumes of the head to initialize non-rigid transformations. To validate our method, these selected landmarks are localized in unknown image volumes and used to compute a smoothing thin-plate spline transformation that registers the atlas to the volumes. The transformed atlas image is then used as the preregistration initialization of an intensity-based non-rigid registration algorithm. We show that the registration accuracy of this algorithm is statistically significantly improved when using the presented registration initialization instead of a standard intensity-based affine registration.
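The point-based initialization step common to the two abstracts above can be sketched as follows: given corresponding landmarks in the atlas and in a target MR volume, a smoothing thin-plate spline is fitted (here with SciPy's RBFInterpolator) and used to map atlas coordinates into the target space. The landmark detection and selection, which are the learned parts of the method, are not shown, and the coordinates below are synthetic placeholders.

```python
# Sketch of the thin-plate spline initialization from corresponding landmarks.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
atlas_landmarks = rng.uniform(0, 100, size=(30, 3))              # landmarks localized in the atlas
target_landmarks = atlas_landmarks + rng.normal(0, 2, (30, 3))   # same landmarks found in the target

# Thin-plate spline interpolant from atlas space to target space; `smoothing`
# trades exact interpolation for robustness to landmark localization error.
tps = RBFInterpolator(atlas_landmarks, target_landmarks,
                      kernel='thin_plate_spline', smoothing=1.0)

# Warp any atlas points (e.g., a voxel grid or mesh vertices) into target space
# to obtain the initialization for the subsequent intensity-based registration.
atlas_points = rng.uniform(0, 100, size=(1000, 3))
initialized_points = tps(atlas_points)
```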