Ultrasound (US) is the modality of choice for fetal screening, which includes the assessment of a variety of standardized growth measurements such as the abdominal circumference (AC). Screening guidelines define criteria for the scan plane in which each measurement is taken. As US is increasingly becoming a 3D modality, approaches for the automated determination of the optimal scan plane in a volumetric dataset would greatly improve the workflow. In this work, a novel framework for deep hyperplane learning is proposed and applied to view plane estimation in fetal US examinations. The approach is tightly integrated into the clinical workflow and consists of two main steps. First, the bounding box around the structure of interest is determined in the central slice (MPR). Second, offsets from the structure in the bounding box to the optimal view plane are estimated. The view plane coordinates are then determined by linear regression through the estimated offsets. The presented approach is successfully applied to clinical screening data for AC plane estimation and achieves a high accuracy, outperforming or matching recent publications on the same application.
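A minimal sketch of the final plane-fitting step, assuming the preceding step has already produced signed offsets from positions inside the bounding box of the central slice to the optimal view plane. The variable names (xs, ys, offsets) and the least-squares formulation z = a·x + b·y + c are illustrative choices, not details taken from the paper:

```python
# Illustrative plane fit through estimated offsets; not the paper's exact code.
import numpy as np

def fit_view_plane(xs, ys, offsets):
    """Fit z = a*x + b*y + c through the estimated offsets by least squares.

    xs, ys  -- in-plane coordinates of positions inside the bounding box
    offsets -- estimated signed distances from the central slice to the
               optimal view plane at those positions
    Returns (a, b, c) describing the estimated view plane.
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, offsets, rcond=None)
    return coeffs  # a, b, c

# Toy usage: offsets sampled from a known plane plus noise are recovered.
rng = np.random.default_rng(0)
xs = rng.uniform(0, 100, 200)
ys = rng.uniform(0, 100, 200)
offsets = 0.05 * xs - 0.02 * ys + 3.0 + rng.normal(0, 0.1, 200)
print(fit_view_plane(xs, ys, offsets))
```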
Ultrasound is increasingly becoming a 3D modality. Mechanical and matrix array transducers are able to deliver 3D images with good spatial and temporal resolution. 3D imaging facilitates the application of automated image analysis to enhance workflows and has the potential to make ultrasound a less operator-dependent modality. However, the analysis of the more complex 3D images and the fact that examination standards are defined on 2D images pose barriers to the use of 3D in daily clinical practice. In this paper, we address a part of the canonical fetal screening program, namely the localization of the abdominal cross-sectional plane and the corresponding measurement of the abdominal circumference in this plane. For this purpose, a fully automated pipeline has been designed, starting with a random-forest-based anatomical landmark detection. A feature-trained shape model of the fetal torso, including inner organs and with the abdominal cross-sectional plane encoded into the model, is then transformed into the patient space using the landmark localizations. In a free-form deformation step, the model is individualized to the image, using a torso probability map generated by a convolutional neural network as an additional feature image. After adaptation, the abdominal plane and the abdominal torso contour in that plane are obtained directly. This allows the measurement of the abdominal circumference as well as the rendering of the plane for visual assessment. The method has been trained on 126 and evaluated on 42 abdominal 3D US datasets. On the evaluation set, an average plane offset error of 5.8 mm and an average relative circumference error of 4.9% were achieved.
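As an illustration of the final measurement step only: once the adapted model yields an ordered abdominal contour in the detected plane, the circumference can be computed as the length of the closed polygon through the contour points. The function names and the polygonal approximation below are assumptions for this sketch, not details from the paper:

```python
# Sketch of the circumference measurement from an ordered in-plane contour.
import numpy as np

def abdominal_circumference(contour_mm):
    """contour_mm: (N, 2) ordered in-plane contour points in millimetres."""
    closed = np.vstack([contour_mm, contour_mm[:1]])        # close the polygon
    return np.sum(np.linalg.norm(np.diff(closed, axis=0), axis=1))

def relative_circumference_error(measured, reference):
    return abs(measured - reference) / reference

# Toy usage: a circle of radius 50 mm approaches 2*pi*50 ≈ 314.16 mm.
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
circle = 50.0 * np.column_stack([np.cos(theta), np.sin(theta)])
print(abdominal_circumference(circle))
```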
Automated interpretation of CT scans is an important, clinically relevant area, as the number of such scans is increasing rapidly and their interpretation is time-consuming. Anatomy localization is an important prerequisite for any such interpretation task. This can be done by image-to-atlas registration, where the atlas serves as a reference space for annotations such as organ probability maps. Tissue-type-based atlases allow fast and robust processing of arbitrary CT scans. Here we present two methods which significantly improve organ localization based on tissue types. A first problem is the definition of the tissue types, which until now has been done heuristically based on experience. We present a method to determine suitable tissue types from sample images automatically. A second problem is the restriction of the transformation space: all prior approaches use global affine maps. We present a hierarchical strategy to refine this global affine map. For each organ or region of interest, a localized tissue-type atlas is computed and used in a subsequent local affine registration step. A three-fold cross-validation on 311 CT images with different fields of view demonstrates a reduction of the organ localization error by 33%.
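The abstract states that suitable tissue types are derived automatically from sample images but does not describe the algorithm. As a hedged stand-in, the sketch below clusters Hounsfield-unit samples with a small 1-D k-means and uses the midpoints between cluster centres as tissue-type thresholds; this is only an illustration of the idea, not the paper's method:

```python
# Illustrative tissue-type derivation via 1-D k-means over HU samples.
import numpy as np

def tissue_type_thresholds(hu_samples, n_types=4, n_iter=50):
    """Return (cluster centres, thresholds) for HU values via 1-D k-means."""
    hu = np.asarray(hu_samples, dtype=float)
    # Initialize centres on evenly spaced quantiles of the HU distribution.
    centres = np.quantile(hu, np.linspace(0.05, 0.95, n_types))
    for _ in range(n_iter):
        assign = np.argmin(np.abs(hu[:, None] - centres[None, :]), axis=1)
        centres = np.array([hu[assign == k].mean() if np.any(assign == k)
                            else centres[k] for k in range(n_types)])
    centres = np.sort(centres)
    thresholds = (centres[:-1] + centres[1:]) / 2.0       # interval boundaries
    return centres, thresholds

# Toy usage with synthetic HU samples for air, fat, soft tissue, and bone.
rng = np.random.default_rng(1)
hu = np.concatenate([rng.normal(-1000, 30, 500), rng.normal(-100, 20, 500),
                     rng.normal(40, 15, 500), rng.normal(700, 100, 500)])
print(tissue_type_thresholds(hu, n_types=4))
```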
A fully automatic method for generating a whole-body atlas from CT images is presented. The atlas serves as a reference space for annotations. It is based on a large collection of partially overlapping medical images and a registration scheme. The atlas itself consists of probabilistic tissue-type maps and can represent anatomical variations. The registration scheme is based on an entropy-like measure of these maps and is robust with respect to field-of-view variations. In contrast to other atlas generation methods, which typically rely on a sufficiently large set of annotations on training cases, the presented method requires only the images. An iterative refinement strategy is used to automatically stitch the images to build the atlas.
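The abstract mentions an "entropy-like measure" of the probabilistic tissue-type maps without giving its exact form. The sketch below scores an overlay of two maps by the mean per-voxel Shannon entropy of the blended tissue distribution (lower meaning the tissue types agree better); this concrete formula is an assumption made purely for illustration:

```python
# Hedged sketch of an entropy-like alignment score on tissue probability maps.
import numpy as np

def blended_entropy(p_map, q_map, eps=1e-12):
    """p_map, q_map: arrays of shape (..., T); per-voxel probabilities over
    T tissue types, summing to one along the last axis."""
    blend = 0.5 * (p_map + q_map)                         # overlay the two maps
    voxel_entropy = -np.sum(blend * np.log(blend + eps), axis=-1)
    return voxel_entropy.mean()

# Toy usage: identical maps score lower (better aligned) than conflicting ones.
a = np.tile([0.9, 0.05, 0.05], (10, 10, 10, 1))
b = np.tile([0.05, 0.9, 0.05], (10, 10, 10, 1))
print(blended_entropy(a, a), blended_entropy(a, b))
```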
Affine registration of unseen CT images to the probabilistic atlas can be used to transfer reference annotations, e.g. organ models for segmentation initialization or reference bounding boxes for field-of-view selection. The robustness and generality of the method are shown using a three-fold cross-validation of the registration on a set of 316 CT images of unknown content and large anatomical variability. As an example, 17 organs are annotated in the atlas reference space and their localization in the test images is evaluated. The method yields a recall (sensitivity), specificity and precision of at least 96% and thus performs excellently in comparison to competing approaches.
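A small sketch of how such a localization evaluation could be computed, assuming a voxel-wise comparison of a transferred reference bounding box against a ground-truth box; the exact evaluation protocol used in the paper may differ:

```python
# Illustrative recall / specificity / precision for bounding-box localization.
import numpy as np

def box_mask(shape, box):
    """box = ((x0, y0, z0), (x1, y1, z1)) with exclusive upper corner."""
    m = np.zeros(shape, dtype=bool)
    (x0, y0, z0), (x1, y1, z1) = box
    m[x0:x1, y0:y1, z0:z1] = True
    return m

def localization_scores(shape, predicted_box, reference_box):
    pred, ref = box_mask(shape, predicted_box), box_mask(shape, reference_box)
    tp = np.sum(pred & ref); fp = np.sum(pred & ~ref)
    fn = np.sum(~pred & ref); tn = np.sum(~pred & ~ref)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return recall, specificity, precision

# Toy usage on a 100^3 volume with two slightly shifted boxes.
print(localization_scores((100, 100, 100),
                          ((10, 10, 10), (50, 50, 50)),
                          ((12, 12, 12), (52, 52, 52))))
```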
Prostate and cervix cancer diagnosis and treatment planning based on MR images benefit from the superior soft-tissue contrast compared to CT images. For these images, an automatic delineation of the prostate or cervix and of organs at risk such as the bladder is highly desirable. This paper describes a method for bladder segmentation based on a watershed transform on high image gradient values and gray-value valleys, together with the classification of the watershed regions into bladder contents and tissue by a graph cut algorithm. The obtained results are superior to those of a simple independent region-by-region classification.
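A hedged 2D sketch of the overall idea: an over-segmentation by a watershed transform on the gradient image, followed by a min-cut that labels the watershed regions as bladder contents or tissue. It uses scikit-image and PyMaxflow; the unary and pairwise weights are simple placeholders, not the terms used in the paper:

```python
# Illustrative watershed + graph cut region classification (2D toy version).
import numpy as np
import maxflow
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment_bladder(image):
    image = (image - image.min()) / (np.ptp(image) + 1e-9)   # normalize to [0, 1]

    # 1. Watershed on the gradient magnitude gives an over-segmentation.
    labels = watershed(sobel(image))
    region_ids = np.unique(labels)
    index = {r: k for k, r in enumerate(region_ids)}
    mean_int = np.array([image[labels == r].mean() for r in region_ids])

    # 2. Graph over the watershed regions; as a placeholder likelihood, bright
    #    regions (fluid on T2-weighted MR) are favoured as bladder contents.
    g = maxflow.Graph[float]()
    nodes = g.add_nodes(len(region_ids))
    for k in range(len(region_ids)):
        p_bladder = np.clip(mean_int[k], 1e-3, 1 - 1e-3)
        g.add_tedge(nodes[k], -np.log(p_bladder), -np.log(1 - p_bladder))

    # Pairwise terms between regions that share a boundary (4-neighbourhood).
    pairs = set()
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for r1, r2 in zip(a.ravel(), b.ravel()):
            if r1 != r2:
                pairs.add((min(r1, r2), max(r1, r2)))
    for r1, r2 in pairs:
        w = np.exp(-5.0 * abs(mean_int[index[r1]] - mean_int[index[r2]]))
        g.add_edge(nodes[index[r1]], nodes[index[r2]], w, w)

    g.maxflow()
    # Regions on the sink side of the cut are taken as bladder contents here.
    bladder = [r for r in region_ids if g.get_segment(nodes[index[r]]) == 1]
    return np.isin(labels, bladder)
```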