Magnetic resonance imaging (MRI) encompasses a set of powerful imaging techniques for understanding brain structure and diagnosing pathology. MRI sequences such as T1- and T2-weighted scans provide rich, complementary information. However, high equipment costs and long acquisition times have limited uptake of this critical technology, adversely impacting health equity globally. To reduce the costs associated with brain MRI, we present pTransGAN, a generative adversarial network (GAN) capable of translating both healthy and unhealthy T1 scans into T2 scans, potentially obviating T2 acquisition. Extending prior GAN-based image translation, we show that adding non-adversarial losses, such as style and content losses, improves the translations, yielding sharper generated images and a more robust model. Whereas previous studies trained separate models for healthy and unhealthy brain MRI, we also present a novel simultaneous training protocol that allows pTransGAN to train concurrently on healthy and unhealthy data sampled from two open brain MRI datasets. As measured by metrics that closely match the perceptual similarity judgments of human observers, the simultaneously trained pTransGAN outperforms models trained individually on only healthy or only unhealthy data. These encouraging results should be further validated on independent paired and unpaired clinical datasets.
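To make the loss composition concrete, here is a minimal PyTorch sketch of a generator objective combining adversarial, content, and style terms. The VGG-16 feature extractor, the L1 distances, and the loss weights are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of a combined generator loss: adversarial + content + style.
# The VGG-16 features, distance choices, and weights are assumptions.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# Frozen feature extractor for the perceptual (content/style) terms.
vgg_features = vgg16(weights="DEFAULT").features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def gram_matrix(feat):
    # Channel-wise feature correlations capture "style" (texture) statistics.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def generator_loss(d_fake, fake_t2, real_t2,
                   w_adv=1.0, w_content=10.0, w_style=10.0):
    # Adversarial term: push discriminator logits on fakes toward "real" (1).
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    # Grayscale MRI -> replicate to 3 channels for the VGG extractor.
    f_fake = vgg_features(fake_t2.repeat(1, 3, 1, 1))
    f_real = vgg_features(real_t2.repeat(1, 3, 1, 1))
    content = F.l1_loss(f_fake, f_real)                          # feature match
    style = F.l1_loss(gram_matrix(f_fake), gram_matrix(f_real))  # texture match
    return w_adv * adv + w_content * content + w_style * style
```

Note that the content/style terms compare the generated T2 against a real T2, so this form of the objective presumes paired T1/T2 training data.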
Intracranial hemorrhage is a critical condition with a high mortality rate that is typically diagnosed from head computed tomography (CT) images. Deep learning algorithms, in particular convolutional neural networks (CNNs), are becoming the methodology of choice in medical image analysis for applications such as computer-aided diagnosis and segmentation. In this study, we propose a fully automated deep learning framework that learns to detect brain hemorrhage from cross-sectional CT images. The dataset for this work consists of 40,367 3D head CT studies (over 1.5 million 2D images) acquired retrospectively over a decade from multiple radiology facilities at Geisinger Health System. The proposed algorithm first extracts features using a 3D CNN and then detects brain hemorrhage using a logistic function as the last layer of the network. Finally, we created an ensemble of three different 3D CNN architectures to improve the classification accuracy. The area under the receiver operating characteristic (ROC) curve (AUC) for the ensemble of three architectures was 0.87. These results are very promising considering that the head CT studies were not controlled for slice thickness, scanner type, study protocol, or any other settings. Moreover, the proposed algorithm reliably detected various types of hemorrhage within the skull. This work is one of the first applications of 3D CNNs trained on a large dataset of cross-sectional medical images for detection of a critical radiological condition.
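The detection pipeline lends itself to a short sketch: each network ends in a logistic (sigmoid) output, and the ensemble averages the three per-model probabilities. The tiny 3D CNN below is a stand-in for the study's actual architectures, which are not specified here.

```python
# Sketch of 3D-CNN feature extraction with a logistic output, plus
# probability averaging across an ensemble. Architecture is illustrative.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, 1)

    def forward(self, x):  # x: (batch, 1, depth, height, width)
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # logistic P(hemorrhage)

def ensemble_predict(models, volume):
    # Average the per-model probabilities; flag positive above a threshold.
    with torch.no_grad():
        probs = torch.stack([m(volume) for m in models])
    return probs.mean(dim=0)

models = [Tiny3DCNN().eval() for _ in range(3)]  # stand-ins for the 3 CNNs
scan = torch.randn(1, 1, 32, 128, 128)           # one toy head-CT volume
print(ensemble_predict(models, scan))
```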
Cardiothoracic ratio (CTR) is a widely used radiographic index for assessing heart size on chest X-rays (CXRs). Recent studies have suggested that two-dimensional CTR indices may also carry clinical information about heart function. However, manual measurement of such indices is both subjective and time consuming. This study proposes a fast algorithm to automatically estimate CTR indices from CXRs. The algorithm has three main steps: 1) model-based lung segmentation, 2) estimation of heart boundaries from the lung contours, and 3) computation of cardiothoracic indices from the estimated boundaries. We extended a previously employed lung detection algorithm to estimate heart boundaries automatically without using ground-truth heart markings. We used two datasets: a publicly available dataset with 247 images and a clinical dataset with 167 studies from Geisinger Health System. Models of the lung fields are learned from both datasets. The lung regions in a given test image are estimated by registering the learned models to the patient CXR. The heart region is then estimated by applying the Harris operator to the segmented lung fields to detect the corner points corresponding to the heart boundaries. The algorithm calculates three indices: CTR1D, CTR2D, and the cardiothoracic area ratio (CTAR). The method was tested on 103 clinical CXRs, achieving average error rates of 7.9%, 25.5%, and 26.4% for CTR1D, CTR2D, and CTAR, respectively. The proposed method outperforms previous CTR estimation methods without using any heart templates. It can have important clinical implications by providing fast and accurate estimates of cardiothoracic indices.
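Step 3 reduces to simple mask arithmetic once the heart and thoracic boundaries are estimated. The NumPy sketch below computes the three indices from binary masks; CTR1D and CTAR follow their standard definitions (maximal-diameter ratio and area ratio), while the CTR2D formula shown here (ratio of summed per-row widths) is an assumption for illustration, not necessarily the study's definition.

```python
# Sketch of computing cardiothoracic indices from binary heart/thorax masks.
# CTR2D as a summed-width ratio is an assumed definition for illustration.
import numpy as np

def row_widths(mask):
    # Horizontal extent (rightmost - leftmost pixel) of the mask, per row.
    widths = np.zeros(mask.shape[0])
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:
            widths[r] = cols[-1] - cols[0] + 1
    return widths

def cardiothoracic_indices(heart_mask, thorax_mask):
    hw, tw = row_widths(heart_mask), row_widths(thorax_mask)
    ctr_1d = hw.max() / tw.max()                 # classic 1D diameter ratio
    ctr_2d = hw.sum() / tw.sum()                 # assumed: summed-width ratio
    ctar = heart_mask.sum() / thorax_mask.sum()  # cardiothoracic area ratio
    return ctr_1d, ctr_2d, ctar
```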
Accurate segmentation of lung fields on chest radiographs is the primary step in computer-aided detection of conditions such as lung cancer and tuberculosis. The size, shape, and texture of the lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, for which lung field segmentation is an essential first step. Although many methods have been proposed for this problem, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification, and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an intersection over union (IOU) of 0.96 on the testing dataset compared with manual segmentation, outperforming state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks can be employed reliably for lung field segmentation.
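For reference, the reported evaluation metric is straightforward to compute from a predicted lung mask and a manual one; a minimal NumPy sketch:

```python
# Intersection over union (IOU) between a predicted binary lung mask and
# a manual (ground-truth) segmentation.
import numpy as np

def iou(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0  # empty masks agree trivially
```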
Adipose tissue has been associated with the adverse consequences of obesity. Total adipose tissue (TAT) is divided into subcutaneous adipose tissue (SAT) and visceral adipose tissue (VAT). VAT, located inside the abdominal cavity, is a major factor in the classic obesity-related pathologies. Since direct measurement of visceral and subcutaneous fat is not trivial, surrogate metrics such as waist circumference (WC) and body mass index (BMI) are used in clinical settings to quantify obesity. Abdominal fat can be assessed effectively using CT or MRI, but manual fat segmentation is subjective and time-consuming; hence, an automatic and accurate quantification tool for abdominal fat is needed. The goal of this study is to extract TAT, VAT, and SAT from abdominal CT in a fully automated, unsupervised fashion using energy minimization techniques. We applied a four-step framework consisting of 1) initial body contour estimation, 2) approximation of the body contour, 3) estimation of the inner abdominal contour using the Greedy Snakes algorithm, and 4) voting, to segment the subcutaneous and visceral fat. We validated our algorithm on 952 clinical abdominal CT images (from 476 patients with a very wide BMI range) collected from various radiology departments of Geisinger Health System. To our knowledge, this is the first study of its kind on such a large and diverse clinical dataset. Our algorithm obtained a 3.4% error for VAT segmentation compared to manual segmentation. These personalized, accurate fat measurements can complement traditional population-health-driven obesity metrics such as BMI and WC.
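Once the body and inner abdominal contours are available (steps 1-3), fat voxels can be labeled and split into VAT and SAT. The sketch below assumes binary masks for the body and the abdominal cavity, and a commonly used fat attenuation window of roughly -190 to -30 HU; the study's exact thresholds may differ.

```python
# Sketch of the final voxel labeling: fat voxels by HU window, split into
# VAT (inside the inner abdominal contour) and SAT (outside it).
# The HU window is a commonly used range, assumed here for illustration.
import numpy as np

def quantify_fat(ct_hu, inner_mask, body_mask, lo=-190, hi=-30):
    fat = (ct_hu >= lo) & (ct_hu <= hi) & body_mask  # all fat voxels (TAT)
    vat = fat & inner_mask                           # inside abdominal cavity
    sat = fat & ~inner_mask                          # between skin and cavity
    return fat.sum(), vat.sum(), sat.sum()           # TAT, VAT, SAT voxel counts
```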