Recently, deep learning networks have achieved considerable success in segmenting organs in medical images. Several methods have exploited volumetric information with deep networks to improve segmentation accuracy. However, for very challenging objects such as the brachial plexuses, these networks suffer from interference due to artifacts, risk of overfitting, and low accuracy. In this paper, to address these issues, we synergize the strengths of high-level human knowledge (i.e., Natural Intelligence (NI)) with deep learning (i.e., Artificial Intelligence (AI)) for recognition and delineation of the thoracic Brachial Plexuses (BPs) in Computed Tomography (CT) images. We formulate an anatomy-guided deep learning hybrid intelligence approach for segmenting the thoracic right and left brachial plexuses, consisting of two key stages. In the first stage (AAR-R), objects are recognized based on a previously created fuzzy anatomy model of the body region with its key organs relevant to the task at hand, wherein high-level human anatomic knowledge is precisely codified. The second stage (DL-D) uses information from AAR-R to limit the search region to just where each object is most likely to reside and performs encoder-decoder delineation in slices. The proposed method is tested on a data set of 125 thoracic CT images acquired for radiation therapy planning of tumors in the thorax and achieves a Dice coefficient of 0.659.
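As a concrete reference for the reported evaluation metric, the following is a minimal sketch of how a Dice coefficient can be computed between a delineated mask and a reference mask; the array names are illustrative and not part of the described method.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    """Dice coefficient between two binary masks (1 = object, 0 = background)."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Example (illustrative names): Dice between the DL-D output for one brachial
# plexus and the expert reference contour.
# dsc = dice_coefficient(dl_d_output, expert_reference)
```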
Image segmentation is the process of delineating the regions occupied by objects of interest in a given image. This operation is a fundamentally required first step in numerous applications of medical imagery. In the medical imaging field, this activity has a rich literature that spans over 45 years. In spite of numerous advances, including deep learning (DL) networks (DLNs) in recent years, the problem has defied a robust, fail-safe, and satisfactory solution, especially for objects that are manifest with low contrast, are spatially sparse, have variable shape among individuals, or are affected by imaging artifacts, pathology, or post-treatment change in the body. Although image processing techniques, notably DLNs, are uncanny in their ability to harness low-level intensity pattern information on objects, they fall short in the high-level task of identifying and localizing an entire object as a gestalt. This dilemma has been a fundamental unmet challenge in medical image segmentation. In this paper, we demonstrate that by synergistically marrying the unmatched strengths of high-level human knowledge (i.e., natural intelligence (NI)) with the capabilities of DL networks (i.e., artificial intelligence (AI)) in garnering intricate details, these challenges can be significantly overcome. Focusing on the object recognition task, we formulate an anatomy-guided DL object recognition approach named Automatic Anatomy Recognition-Deep Learning (AAR-DL) which combines an advanced anatomy-modeling strategy, model-based non-DL object recognition, and DL object detection networks to achieve expert human-like performance.
Algorithms for image segmentation (including object recognition and delineation) are influenced by the quality of object appearance in the image and overall image quality. However, the issue of how to perform segmentation evaluation as a function of these quality factors has not been addressed in the literature. In this paper, we present a solution to this problem. We devised a set of key quality criteria that influence segmentation (global and regional): posture deviations, image noise, beam hardening artifacts (streak artifacts), shape distortion, presence of pathology, object intensity deviation, and object contrast. A trained reader assigned a grade to each object for each criterion in each study. We developed algorithms based on logical predicates for determining a 1 to 10 numeric quality score for each object and each image from reader-assigned quality grades. We analyzed these object and image quality scores (OQS and IQS, respectively) in our data cohort by gender and age. We performed recognition and delineation of all objects using recent adaptations [8, 9] of our Automatic Anatomy Recognition (AAR) framework [6] and analyzed the accuracy of recognition and delineation of each object. We illustrate our method on 216 head & neck and 211 thoracic cancer computed tomography (CT) studies.
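The actual logical predicates used for scoring are not reproduced in this abstract; the sketch below only illustrates, under assumed grading scales and penalty rules, how per-criterion reader grades could be mapped to a 1 to 10 object quality score (OQS) and aggregated into an image quality score (IQS).

```python
# Illustrative sketch only; the grading scale (0 = none ... 3 = severe) and the
# penalty rules below are assumptions, not the paper's actual predicates.
CRITERIA = ["posture", "noise", "streak_artifact", "shape_distortion",
            "pathology", "intensity_deviation", "contrast"]

def object_quality_score(grades: dict) -> int:
    """Map reader-assigned per-criterion grades to a 1-10 object quality score."""
    score = 10
    for criterion in CRITERIA:
        g = grades.get(criterion, 0)
        if g >= 3:        # severe deviation: heavy penalty (assumed rule)
            score -= 3
        elif g == 2:      # moderate deviation
            score -= 2
        elif g == 1:      # mild deviation
            score -= 1
    return max(1, min(10, score))

def image_quality_score(per_object_grades: list) -> int:
    """Aggregate object scores into an image quality score (simple mean, assumed)."""
    scores = [object_quality_score(g) for g in per_object_grades]
    return round(sum(scores) / len(scores)) if scores else 1
```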
Contouring of the organs at risk is a vital part of routine radiation therapy planning. For the head and neck (H&N) region, this is more challenging due to the complexity of the anatomy, the presence of streak artifacts, and the variations of object appearance. In this paper, we describe the latest advances in our Automatic Anatomy Recognition (AAR) approach, which aims to automatically contour multiple objects in the head and neck region on planning CT images. Our method has three major steps: model building, object recognition, and object delineation. First, the better-quality images from our cohort of H&N CT studies are used to build fuzzy models and find the optimal hierarchy for arranging objects based on the relationships between objects. Then, the object recognition step exploits the rich prior anatomic information encoded in the hierarchy to derive the location and pose of each object, which leads to generalizable and robust methods and mitigates object localization challenges. Finally, the delineation algorithms employ local features to contour the boundary based on the object recognition results. We make several improvements within the AAR framework, including finding a recognition-error-driven optimal hierarchy, modeling boundary relationships, combining texture and intensity, and evaluating object quality. Experiments were conducted on the largest ensemble of clinical data sets reported to date, including 216 planning CT studies and over 2,600 object samples. The preliminary results show that, on data sets with minimal (<4 slices) streak artifacts and other deviations, overall recognition error is within 2 voxels, overall delineation Dice coefficient is close to 0.8, and the Hausdorff distance is within 1 voxel.
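For the boundary-based figure quoted above, the following is a minimal sketch of computing a symmetric Hausdorff distance (in voxel units) between two boundary point sets, assuming the boundaries have already been extracted as coordinate arrays; it is not the paper's implementation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(boundary_a: np.ndarray, boundary_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 3) arrays of boundary voxel coordinates."""
    d_ab = directed_hausdorff(boundary_a, boundary_b)[0]
    d_ba = directed_hausdorff(boundary_b, boundary_a)[0]
    return max(d_ab, d_ba)

# Boundary coordinates could be obtained from a binary mask, e.g., via
# np.argwhere(mask ^ scipy.ndimage.binary_erosion(mask)), before calling this function.
```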
Segmentation of organs at risk (OARs) is a key step in the radiation therapy (RT) treatment planning process. Automatic anatomy recognition (AAR) is a recently developed body-wide multiple-object segmentation approach, where segmentation is organized as two dichotomous steps: object recognition (or localization) and object delineation. Recognition is the high-level process of determining the whereabouts of an object, and delineation is the meticulous low-level process of precisely indicating the space occupied by the object. This study focuses on recognition.
The purpose of this paper is to introduce new features of the AAR-recognition approach (abbreviated as AAR-R from now on): combining texture and intensity information in the recognition procedure, using an optimal spanning tree to derive the recognition hierarchy that minimizes recognition errors, and illustrating recognition performance on large-scale computed tomography (CT) test data sets. The data sets pertain to 216 non-serial (planning) and 82 serial (re-planning) studies of head and neck (H&N) cancer patients undergoing radiation therapy, involving a total of ~2600 object samples. The texture property "maximum probability of occurrence," derived from the co-occurrence matrix, was determined to be the best property and is utilized in conjunction with intensity properties in AAR-R. An optimal spanning tree is found in the complete graph whose nodes are the individual objects, and this tree is then used as the hierarchy in recognition. Texture information combined with intensity can significantly reduce location error for gland-related objects (parotid and submandibular glands). We also report recognition results by considering image quality, which is a novel concept. AAR-R with the new features achieves a location error of less than 4 mm (~1.5 voxels in our studies) on good-quality images for both serial and non-serial studies.
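The sketch below illustrates, under assumptions, the two computational ingredients named above: the "maximum probability of occurrence" texture property from a gray-level co-occurrence matrix, and a spanning-tree-based hierarchy over objects. The pairwise cost definition and library choices are illustrative, not the paper's implementation.

```python
import numpy as np
from skimage.feature import graycomatrix            # scikit-image >= 0.19 naming
from scipy.sparse.csgraph import minimum_spanning_tree

def max_probability_of_occurrence(roi: np.ndarray, levels: int = 32) -> float:
    """Maximum probability of occurrence from a gray-level co-occurrence matrix (GLCM).

    `roi` is a 2D region of interest already quantized to integer gray levels < `levels`.
    Distances and angles here are assumed choices.
    """
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return float(glcm.max())

def spanning_tree_hierarchy(cost_matrix: np.ndarray) -> np.ndarray:
    """Minimum spanning tree of a complete graph over objects.

    `cost_matrix[i, j]` is an assumed pairwise recognition-error/cost between objects
    i and j; the resulting tree can serve as the recognition hierarchy.
    """
    return minimum_spanning_tree(cost_matrix).toarray()
```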
In this study, patients who underwent lung transplantation are categorized into two groups of successful (positive) or failed (negative) transplantations according to primary graft dysfunction (PGD), i.e., acute lung injury within 72 hours of lung transplantation. Obesity or being underweight is associated with an increased risk of PGD. Adipose quantification and characterization via computed tomography (CT) imaging is an evolving topic of interest. However, very little research on PGD prediction using adipose quantity or characteristics derived from medical images has been performed.
The aim of this study is to explore image-based features of thoracic adipose tissue on pre-operative chest CT to distinguish between the above two groups of patients. 140 unenhanced chest CT images from three lung transplant centers (Columbia, Penn, and Duke) are included in this study; 124 patients are in the successful group and 16 in the failure group. Chest CT slices at the T7 and T8 vertebral levels are captured to represent the thoracic fat burden by using a standardized anatomic space (SAS) approach. Fat (subcutaneous adipose tissue (SAT)/visceral adipose tissue (VAT)) intensity and texture properties (1142 in total) are collected for each patient, and an optimal feature set is then selected to maximize feature independence and separation between the two groups. Leave-one-out and leave-ten-out cross-validation strategies are adopted to test the prediction ability based on the selected features, all of which came from VAT texture properties. Under the two cross-validation strategies, the method achieves prediction accuracy (ACC), sensitivity (SEN), specificity (SPE), and area under the curve (AUC) of 0.87/0.97, 0.87/0.97, 0.88/1.00, and 0.88/0.99, respectively. The optimal feature set includes only 5 features (all from VAT), which might suggest that thoracic VAT plays a more important role than SAT in predicting PGD in lung transplant recipients.
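As an illustration of the cross-validation protocol described above, the following sketch runs leave-one-out evaluation and computes ACC, SEN, SPE, and AUC; the SVM classifier and preprocessing are assumptions, since the abstract does not name the classifier used.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, roc_auc_score

def evaluate_loo(X: np.ndarray, y: np.ndarray) -> dict:
    """Leave-one-out evaluation of a binary classifier.

    X: (n_patients, n_features) matrix of selected VAT texture features (illustrative);
    y: 1 = successful transplant, 0 = PGD failure. The SVM pipeline is an assumption.
    """
    model = make_pipeline(StandardScaler(), SVC(probability=True))
    prob = cross_val_predict(model, X, y, cv=LeaveOneOut(), method="predict_proba")[:, 1]
    pred = (prob >= 0.5).astype(int)
    tp = np.sum((pred == 1) & (y == 1)); tn = np.sum((pred == 0) & (y == 0))
    fp = np.sum((pred == 1) & (y == 0)); fn = np.sum((pred == 0) & (y == 1))
    return {
        "ACC": accuracy_score(y, pred),
        "SEN": tp / (tp + fn) if (tp + fn) else 0.0,
        "SPE": tn / (tn + fp) if (tn + fp) else 0.0,
        "AUC": roc_auc_score(y, prob),
    }
```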
Quantification of fat throughout the body is vital for the study of many diseases. In the thorax, it is important for lung transplant candidates, since obesity and being underweight are contraindications to lung transplantation given their associations with increased mortality. Common approaches to thoracic fat segmentation are all interactive in nature, requiring significant manual effort to draw the interfaces between fat and muscle, with low efficiency and questionable repeatability. The goal of this paper is to explore a practical way to segment the subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT) components of chest fat based on a recently developed body-wide automatic anatomy recognition (AAR) methodology. The AAR approach involves 3 main steps: building a fuzzy anatomy model of the body region involving all its major representative objects, recognizing objects in any given test image, and delineating the objects. We made several modifications to these steps to develop an effective solution for delineating the SAT/VAT components of fat. Two new objects, SatIn and VatIn, representing the interfaces of the SAT and VAT regions with other tissues, are defined, rather than directly using the SAT and VAT components as objects for constructing the models. A hierarchical arrangement of these new and other reference objects is built to facilitate their recognition in hierarchical order. Subsequently, accurate delineations of the SAT/VAT components are derived from these objects. Unenhanced CT images from 40 lung transplant candidates were utilized in experimentally evaluating this new strategy. The mean object location error achieved was about 2 voxels, and the delineation errors in terms of false positive and false negative volume fractions were, respectively, 0.07 and 0.1 for SAT and 0.04 and 0.2 for VAT.
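The delineation error measures quoted above can be computed as in the sketch below; normalizing by the reference volume is a common convention and is an assumption here.

```python
import numpy as np

def volume_fraction_errors(seg: np.ndarray, ref: np.ndarray) -> tuple:
    """False positive and false negative volume fractions of a segmentation vs. a reference.

    FPVF: volume falsely added relative to the reference volume;
    FNVF: reference volume falsely missed relative to the reference volume.
    """
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    ref_volume = ref.sum()
    if ref_volume == 0:
        raise ValueError("Reference mask is empty.")
    fpvf = np.logical_and(seg, ~ref).sum() / ref_volume
    fnvf = np.logical_and(~seg, ref).sum() / ref_volume
    return fpvf, fnvf

# Example (illustrative names):
# fpvf_sat, fnvf_sat = volume_fraction_errors(sat_delineation, sat_reference)
```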