Premature neonates with intraventricular hemorrhage (IVH) followed by post-hemorrhagic hydrocephalus (PHH) are at high risk of morbidity and mortality. Cranial ultrasound (CUS) is the most common imaging technique for early diagnosis of PHH during the first weeks after birth, and head size is one of the key indices in the CUS-based evaluation of PHH. In this paper, we present an automatic cranial localization method to support head size measurement in 2D CUS images acquired from premature neonates with IVH. We employ deep neural networks to localize the cranial region and its minimum-area bounding box. Separate deep neural networks are trained to estimate the spatial parameters (position, scale, and orientation) of the bounding box. We evaluated the performance of the method on a set of 64 2D CUS images obtained from premature neonates with IVH through five-fold cross-validation. Experimental results showed that the proposed method could estimate the cranial bounding box with a center-point position error of 0.33 ± 0.32 mm, an orientation error of 1.75 ± 1.31 degrees, a head height relative error (RE) of 1.62 ± 2.90%, a head width RE of 1.22 ± 1.24%, a head surface RE of 2.27 ± 3.04%, an average Dice similarity score of 0.97 ± 0.01, and a Hausdorff distance of 0.69 ± 0.46 mm. The method is computationally efficient and has the potential to provide automatic head size measurement in the clinical evaluation of neonates.
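The three parameter groups estimated by the separate networks (position, scale, orientation) can be composed into the corners of an oriented bounding box with a simple rigid transform. The sketch below is only illustrative of that composition step; the networks themselves are not shown, and the function name and argument conventions are ours, not the paper's:

```python
import math

def obb_corners(cx, cy, width, height, theta_deg):
    """Corners of an oriented bounding box, counter-clockwise from the
    top-left corner in the box's own frame.

    cx, cy        : center position (mm) -- output of the position network
    width, height : box scale (mm)       -- output of the scale network
    theta_deg     : in-plane rotation    -- output of the orientation network
    """
    t = math.radians(theta_deg)
    c, s = math.cos(t), math.sin(t)
    hw, hh = width / 2.0, height / 2.0
    corners = []
    for dx, dy in ((-hw, -hh), (hw, -hh), (hw, hh), (-hw, hh)):
        # rotate the box-frame offset, then translate to the predicted center
        corners.append((cx + dx * c - dy * s, cy + dx * s + dy * c))
    return corners
```

With theta_deg = 0 this reduces to an axis-aligned box, which makes the decomposition easy to sanity-check per parameter.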
This paper presents a quantitative imaging method and software technology to predict the risk and assess the severity of respiratory diseases in premature babies by fusing information from multiple sources: non-invasive, low-radiation chest X-ray (CXR) imaging and clinical parameters. Prematurity is the largest single cause of death in children under five worldwide, and lower respiratory tract infections (LRTI) are the leading cause of hospitalization and mortality in prematurity. However, there is no objective clinical marker to predict and prevent severe LRTI in the 15 million babies born prematurely every year worldwide. Imaging biomarkers of lung disease derived from computed tomography have been used successfully in adults, but they entail heightened risks for children due to cumulative radiation and the need for sedation. The proposed technology is the first approach that uses low-radiation CXR imaging to predict hospitalization due to LRTI in prematurity. The method uses deep learning to quantify heterogeneous patterns (air trapping and irregular opacities) in the chest, which are combined with clinical parameters to predict the risk of LRTI. Our preliminary results, obtained using data from ten premature subjects with LRTI, showed high correlation between our imaging biomarkers and the rehospitalization of these subjects (R2 = 0.98).
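The reported R2 is the standard coefficient of determination between the biomarker-based prediction and the observed rehospitalization outcome. A minimal sketch of that metric (function name is ours; the abstract does not specify the regression model):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

A perfect prediction yields R2 = 1, while always predicting the mean of the outcomes yields R2 = 0.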
Ultrasound (US) imaging is the routine, safe diagnostic modality for detecting pediatric urology problems such as hydronephrosis, the swelling of one or both kidneys caused by a build-up of urine. Early detection of hydronephrosis can lead to a substantial improvement in kidney health outcomes. In general, US imaging is a challenging modality for the evaluation of pediatric kidneys, which vary widely in shape, size, and texture. The aim of this study is to present an automatic detection method to support kidney analysis in pediatric 3DUS images. The method localizes the kidney based on its minimum-volume oriented bounding box using deep neural networks. Separate deep neural networks are trained to estimate the kidney position, orientation, and scale, making the method computationally efficient by avoiding full joint-parameter training. The performance of the method was evaluated on a dataset of 45 kidneys (18 normal and 27 diseased kidneys diagnosed with hydronephrosis) through leave-one-out cross-validation. Quantitative results show the proposed detection method could extract the kidney position, orientation, and scale ratio with root mean square values of 1.3 ± 0.9 mm, 6.34 ± 4.32 degrees, and 1.73 ± 0.04, respectively. This method could help automate kidney segmentation for routine clinical evaluation.
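The split into separate position, orientation, and scale networks can be viewed as a sequential pipeline in which each stage conditions on the parameters already estimated, so no single network has to learn the full joint pose. The callables below are stand-ins for the trained networks; all names and signatures are illustrative, not the authors' API:

```python
def detect_kidney(volume, predict_position, predict_orientation, predict_scale):
    """Sequential pose estimation for an oriented bounding box in 3D.

    Each predictor is a placeholder for one trained network:
      predict_position(volume)               -> (x, y, z) center, mm
      predict_orientation(volume, pos)       -> (rx, ry, rz) angles, degrees
      predict_scale(volume, pos, orient)     -> (sx, sy, sz) box extents
    """
    pos = predict_position(volume)
    orient = predict_orientation(volume, pos)
    scale = predict_scale(volume, pos, orient)
    return {"position": pos, "orientation": orient, "scale": scale}
```

Because each stage sees the output of the previous ones, the search space per network stays low-dimensional, which is the efficiency argument the abstract makes.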
Automated tissue characterization is one of the major applications of computer-aided diagnosis systems. Deep learning techniques have recently demonstrated impressive performance for image patch-based tissue characterization. However, existing patch-based tissue classification techniques struggle to exploit useful shape information. Local and global shape knowledge, such as regional boundary changes, diameter, and volumetrics, can be useful in classifying tissues, especially in scenarios where the appearance signature does not provide significant classification information. In this work, we present a deep neural network-based method for the automated segmentation of tumors referred to as optic pathway gliomas (OPG), located within the anterior visual pathway (AVP; optic nerve, chiasm, or tracts), using joint shape and appearance learning. Voxel intensity values of commonly used MRI sequences are generally not indicative of OPG; rather, current clinical practice dictates that some portion of the AVP must demonstrate shape enlargement to be considered an OPG. The proposed method integrates multiple-sequence magnetic resonance images (T1, T2, and FLAIR) along with local boundary changes to train a deep neural network. For training and evaluation, we used a dataset of multiple-sequence MRI obtained from 20 subjects (10 controls, 10 NF1+OPG). To the best of our knowledge, this is the first deep representation learning-based approach designed to merge shape and multi-channel appearance data for glioma detection. In our experiments, mean misclassification errors of 2.39% and 0.48% were observed for glioma and control patches extracted from the AVP, respectively. Moreover, an overall Dice similarity coefficient of 0.87 ± 0.13 (0.93 ± 0.06 for healthy tissue, 0.78 ± 0.18 for glioma tissue) demonstrates the potential of the proposed method in the accurate localization and early detection of OPG.
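Joint shape and appearance learning at the patch level amounts to feeding the network one vector that concatenates per-sequence appearance with local shape descriptors. The sketch below shows only that input-assembly step, under our own naming; the abstract does not specify the exact feature layout:

```python
def joint_feature(patch_t1, patch_t2, patch_flair, boundary_changes):
    """Concatenate flattened appearance patches from each MRI sequence
    (T1, T2, FLAIR) with local shape descriptors (e.g., boundary-change
    measurements) into a single network input vector."""
    feats = []
    for patch in (patch_t1, patch_t2, patch_flair):
        feats.extend(patch)          # multi-channel appearance
    feats.extend(boundary_changes)   # local shape information
    return feats
```

The point of the concatenation is that shape cues can still drive the classification when the intensity channels alone are uninformative, as the abstract argues for OPG.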
Representation learning through deep learning (DL) architectures has shown tremendous potential for identification, localization, and texture classification in various medical imaging modalities. However, DL applications to segmentation, especially of deformable objects, are rather limited and mostly restricted to pixel classification. In this work, we propose marginal shape deep learning (MaShDL), a framework that extends the application of DL to deformable shape segmentation by using deep classifiers to estimate the shape parameters. MaShDL combines the strength of statistical shape models with the automated feature learning architecture of DL. Unlike the iterative shape parameter estimation approach of classical shape models, which often gets trapped in a local minimum, the proposed framework is robust to local minima and illumination changes. Furthermore, since the direct application of DL to a multi-parameter estimation problem results in very high complexity, our framework provides an excellent run-time performance solution by independently learning shape parameter classifiers in marginal eigenspaces in decreasing order of variation. We evaluated MaShDL for segmenting the lung field from 314 normal and abnormal pediatric chest radiographs and obtained a mean Dice similarity coefficient of 0.927 using only the four highest modes of variation (compared to 0.888 with classical ASM (p-value = 0.01) using the same configuration). To the best of our knowledge, this is the first demonstration of using a DL framework for parametrized shape learning for the delineation of deformable objects.
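The efficiency argument can be made concrete: searching k shape parameters jointly over N candidate values each costs N^k network evaluations, while marginal per-parameter classifiers cost only N·k. The sketch below shows that comparison together with the standard statistical-shape-model reconstruction (mean shape plus weighted eigenmodes); all names are illustrative, and this is not the authors' implementation:

```python
def joint_search_cost(candidates_per_param, n_params):
    """Evaluations needed to search all parameters jointly."""
    return candidates_per_param ** n_params

def marginal_search_cost(candidates_per_param, n_params):
    """Evaluations needed when each parameter is classified marginally."""
    return candidates_per_param * n_params

def reconstruct_shape(mean_shape, modes, coeffs):
    """Statistical shape model: x = x_mean + sum_i b_i * phi_i,
    where coeffs are the estimated mode weights (highest variation first)."""
    shape = list(mean_shape)
    for b, mode in zip(coeffs, modes):
        for i, m in enumerate(mode):
            shape[i] += b * m
    return shape
```

With N = 10 candidates and the four highest modes used in the paper, the marginal scheme needs 40 evaluations instead of 10,000, which is where the run-time advantage comes from.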
Hydronephrosis is the most common abnormal finding in pediatric urology. Thanks to its non-ionizing nature, ultrasound (US) imaging is the preferred diagnostic modality for the evaluation of the kidney and the urinary tract. However, because US findings correlate poorly with renal function, further invasive and/or ionizing studies might be required (e.g., diuretic renograms). This paper presents a computer-aided diagnosis (CAD) tool for the accurate and objective assessment of pediatric hydronephrosis based on morphological analysis of the kidney in 3DUS scans. The segmentation tools integrated in the system allow the relevant renal structures to be delineated from the patients' 3DUS scans with minimal user interaction, and 90 anatomical features to be computed automatically. Using the washout half time (T1/2) as an indicator of renal obstruction, an optimal subset of predictive features is selected to differentiate, with maximum sensitivity, the severe cases where further attention is required (e.g., in the form of diuretic renograms) from the non-critical ones. The performance of this new 3DUS-based CAD system is studied for two clinically relevant T1/2 thresholds, 20 and 30 min. Using a dataset of 20 hydronephrotic cases, pilot experiments show that the system outperforms previous 2D implementations by successfully identifying all the critical cases (100% sensitivity) and detecting 100% and 67% of the non-critical ones for T1/2 thresholds of 20 and 30 min, respectively.
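The evaluation above reports sensitivity on the critical cases and the detection rate on the non-critical ones, i.e., the two rows of a binary confusion matrix. A minimal sketch of those metrics, with a T1/2 criticality rule whose threshold argument is ours (the abstract studies 20 and 30 min):

```python
def is_critical(t_half_min, threshold_min):
    """A case is flagged critical when its washout half time T1/2
    meets or exceeds the chosen threshold (20 or 30 min in the paper)."""
    return t_half_min >= threshold_min

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = detected critical / all critical;
    specificity = detected non-critical / all non-critical."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens, spec
```

Selecting features for maximum sensitivity, as the paper does, prioritizes never missing a critical case even at the cost of some false alarms among the non-critical ones.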
Pleural effusion is an abnormal collection of fluid within the pleural cavity. Excessive accumulation of pleural fluid is an important biomarker for various illnesses, including congestive heart failure, pneumonia, metastatic cancer, and pulmonary embolism. Quantification of pleural effusion can be indicative of the progression of disease as well as the effectiveness of any treatment being administered. Quantification, however, is challenging due to the unpredictable amounts and density of fluid, the complex topology of the pleural cavity, and the similarity in texture and intensity of pleural fluid to the surrounding tissues in computed tomography (CT) scans. Herein, we present an automated method for the segmentation of pleural effusion in CT scans based on spatial context information. The method consists of two stages: first, a probabilistic pleural effusion map is created using multi-atlas segmentation. The probabilistic map assigns a priori probabilities to the presence of pleural fluid at every location in the CT scan. Second, a statistical pattern classification approach is designed to annotate pleural regions using local descriptors based on the a priori probabilities and on geometrical and spatial features. Thirty-seven CT scans from a diverse patient population containing confirmed cases of minimal to severe amounts of pleural effusion were used to validate the proposed segmentation method. An average Dice coefficient of 0.82685 and a Hausdorff distance of 16.2155 mm were obtained.
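One common way to combine an atlas-derived a priori probability with a per-voxel classifier score is a Bayesian product rule. The abstract does not state the exact fusion rule the authors use, so the sketch below is a generic illustration of how a spatial prior can modulate a local likelihood:

```python
def posterior_fluid(prior, likelihood):
    """Per-voxel posterior probability of pleural fluid.

    prior      : a priori probability from the multi-atlas map at this voxel
    likelihood : local classifier score from appearance/geometry descriptors

    Bayes with a binary fluid/background label:
        P(fluid | x) = prior * lh / (prior * lh + (1 - prior) * (1 - lh))
    """
    num = prior * likelihood
    den = num + (1.0 - prior) * (1.0 - likelihood)
    return num / den if den > 0 else 0.0
```

With an uninformative prior of 0.5 the posterior equals the classifier score, while a confident atlas prior pulls ambiguous voxels (likelihood near 0.5) toward the atlas decision, which is the role the probabilistic map plays in the two-stage design.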
Modern automated microscopic imaging techniques such as high-content screening (HCS), high-throughput screening, 4D imaging, and multispectral imaging are capable of producing hundreds to thousands of images per experiment. For quick retrieval, fast transmission, and storage economy, these images should be saved in a compressed format. A considerable number of techniques based on the interband and intraband redundancies of multispectral images have been proposed in the literature for the compression of multispectral and 3D temporal data. However, this work has been carried out mostly in the fields of remote sensing and video processing; compression for multispectral optical microscopy imaging, with its own set of specialized requirements, has remained under-investigated. Digital photography-oriented 2D compression techniques like JPEG (ISO/IEC IS 10918-1) and JPEG2000 (ISO/IEC 15444-1) are generally adopted for multispectral images; they optimize visual quality but do not necessarily preserve the integrity of scientific data, not to mention the suboptimal performance of 2D compression techniques on 3D images.
Herein we report our work on a new low bit-rate wavelet-based compression scheme for multispectral fluorescence biological imaging. The sparsity of significant coefficients in the high-frequency subbands of multispectral microscopic images is found to be much greater than in natural images; therefore, a quad-tree concept such as that of Said et al.'s SPIHT, together with the correlation of insignificant wavelet coefficients, is used to further exploit redundancy in the high-frequency subbands. Our work proposes a 3D extension to SPIHT, incorporating a new hierarchical inter- and intra-spectral relationship among the coefficients of the 3D wavelet-decomposed image. The new relationship, apart from adopting the parent-child relationship of classical SPIHT, also introduces a conditional "sibling" relationship that relates only the insignificant wavelet coefficients of subbands at the same level of decomposition. The insignificant quadtrees in different subbands of the high-frequency subband class are coded by a combined function to reduce redundancy. A number of experiments conducted on microscopic multispectral images have shown promising results for the proposed method over current state-of-the-art image-compression techniques.
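SPIHT-style coders decide coefficient significance bit-plane by bit-plane: a wavelet coefficient c is significant at plane n when |c| >= 2^n, and whole trees of insignificant coefficients are emitted as a single zero symbol. The sketch below shows only that significance test and one sorting pass over a flat coefficient list, not the full 3D tree coder described above:

```python
def significant(coeff, n):
    """SPIHT significance test at bit-plane n: |c| >= 2^n."""
    return abs(coeff) >= (1 << n)

def sorting_pass(coeffs, n):
    """One simplified sorting pass: emit 1 for each coefficient that is
    significant at bit-plane n, 0 otherwise. In real SPIHT the zeros are
    grouped into zerotrees (and, in the proposed extension, into combined
    inter-spectral 'sibling' sets) rather than coded individually."""
    return [1 if significant(c, n) else 0 for c in coeffs]
```

The scheme's gain comes precisely from the sparsity the abstract reports: when high-frequency subbands contain long runs of zeros at a given bit-plane, grouping correlated insignificant trees across subbands costs far fewer bits than coding each zero separately.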