Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities. Many algorithms exist to segment individual organs or organ systems. However, new clinical applications and
the progress in imaging technology will require the segmentation of increasingly complex organ systems composed of a
number of substructures, e.g., the heart, the trachea, and the esophagus. The goal of this work is to demonstrate that such
complex organ systems can be successfully segmented by integrating the individual organs into a general model-based
segmentation framework, without tailoring the core adaptation engine to each individual organ. As an example, we address
the fully automatic segmentation of the trachea (around its main bifurcation, including the proximal part of the two main
bronchi) and the esophagus in addition to the heart with all chambers and attached major vessels. To this end, we integrate
the trachea and the esophagus into a model-based cardiac segmentation framework. Specifically, in a first parametric
adaptation step of the segmentation workflow, the trachea and the esophagus share global model transformations with
adjacent heart structures. This makes it possible to obtain a robust, approximate segmentation for the trachea even if it is only partly inside the field of view, and for the esophagus in spite of its limited contrast. The segmentation is then refined in a subsequent deformable adaptation step. We obtained a mean segmentation error of about 0.6 mm for the trachea and 2.3 mm for the
esophagus on a database of 23 volumetric cardiovascular CT images. Furthermore, we show by quantitative evaluation
that our integrated framework outperforms individual esophagus segmentation, and individual trachea segmentation if the
trachea is only partly inside the field of view.
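The two adaptation steps mentioned above can be made concrete with a schematic formulation in the spirit of shape-constrained deformable models; the symbols and energy terms below are illustrative assumptions, since the abstract itself does not spell them out. In the parametric step, each vertex \(m_i\) of the mean model is mapped by the transformation of its anatomical group, so that the trachea and the esophagus share their global transformation with adjacent heart structures:
\[
v_i = T_{k(i)}(m_i),
\]
where \(k(i)\) denotes the structure group of vertex \(i\) and the transformation parameters are estimated from detected boundary points. The subsequent deformable step refines the vertex positions by minimizing an energy that balances attraction to detected boundary points against deviation from the transformed model shape:
\[
E = E_{\mathrm{ext}} + \alpha\, E_{\mathrm{int}}, \qquad
E_{\mathrm{ext}} = \sum_i w_i \left( n_i^{\top} \left( x_i^{\mathrm{target}} - c_i \right) \right)^2, \qquad
E_{\mathrm{int}} = \sum_i \sum_{j \in N(i)} \left\| (v_i - v_j) - R\,(m_i - m_j) \right\|^2,
\]
with \(c_i\) and \(n_i\) the center and normal of triangle \(i\), \(x_i^{\mathrm{target}}\) the detected boundary point and \(w_i\) its reliability weight, \(N(i)\) the mesh neighbors of vertex \(i\), and \(R\) the linear part of the transformation found in the parametric step.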
Automatic segmentation is a prerequisite to efficiently analyze the large amount of image data produced by modern imaging
modalities, e.g., computed tomography (CT), magnetic resonance (MR) and rotational X-ray volume imaging. While many
segmentation approaches exist, most of them are developed for a single, specific imaging modality and a single organ. In
clinical practice, however, it is becoming increasingly important to handle multiple modalities: first, because the most suitable imaging modality (e.g., CT versus MR) is chosen on a case-by-case basis, and second, in order to integrate complementary data
from multiple modalities. In this paper, we present a single, integrated segmentation framework which can easily be
adapted to a range of imaging modalities and organs. Our algorithm is based on shape-constrained deformable models. Key
elements are (1) a shape model representing the geometry and variability of the organ of interest, (2) spatially varying
boundary detection functions representing the gray value appearance of the organ boundaries for the specific imaging
modality or protocol, and (3) a multi-stage segmentation approach. Focusing on fully automatic heart segmentation, we present evaluation results for CT, MR (contrast-enhanced and non-contrasted), and rotational X-ray angiography (3-D RA).
We achieved a mean segmentation error of about 0.8 mm for CT and (non-contrasted) MR, 1.0 mm for contrast-enhanced MR, and 1.3 mm for 3-D RA, demonstrating the success of our segmentation framework across modalities.
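Key element (2) can likewise be sketched in generic terms (again an illustrative assumption rather than a quotation of the paper): boundary candidates are searched along each triangle normal, and the modality dependence is confined to a trained, spatially varying feature function \(F_i\), so that switching modality or protocol only requires retraining \(F_i\) while the adaptation engine stays unchanged:
\[
x_i^{\mathrm{target}} = c_i + \delta\, j_i^{*}\, n_i, \qquad
j_i^{*} = \arg\max_{j \in \{-J, \dots, J\}} \left( F_i\!\left( c_i + j\,\delta\, n_i \right) - D\, j^2 \delta^2 \right),
\]
where \(c_i\) and \(n_i\) are the center and unit normal of triangle \(i\), \(\delta\) is the sampling step along the normal, \(J\) the search range, and \(D\) a penalty that discourages distant boundary candidates.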