Thyroid nodules are extremely common lesions and are readily detectable by ultrasound (US). Several studies have shown that the overall incidence of papillary thyroid cancer in patients with nodules selected for biopsy is only about 10%. There is therefore a clinical need to dramatically reduce the number of thyroid biopsies. In this study, we present a guided classification system based on deep learning that predicts the malignancy of nodules from B-mode US. We retrospectively collected transverse and longitudinal images of 150 benign and 150 malignant thyroid nodules with biopsy-proven results. We divided our dataset into training (n=460), validation (n=40), and test (n=100) sets. We manually segmented nodules from the B-mode US images and provided the nodule mask as a second input channel to the convolutional neural network (CNN) to increase attention to the nodule regions. We evaluated the classification performance of different CNN architectures, such as Inception and ResNet50, with different input images. The InceptionV3 model showed the best performance on the test dataset: 86% sensitivity, 90% specificity, and 90% precision when the threshold was set for highest accuracy. When the threshold was set for maximum sensitivity (no missed cancers), the ROC curve suggests that the number of biopsies may be reduced by 52% without missing any patients with malignant thyroid nodules. We anticipate that this performance can be further improved by including more patients and information from other ultrasound modalities.
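The core idea of providing the nodule mask as a second input channel can be sketched as below. This is a minimal illustration in Keras, assuming a binary (malignancy-probability) output head and randomly initialized weights; the paper's exact input size, preprocessing, and training setup are not specified here.

import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import InceptionV3

def build_model(input_shape=(299, 299, 2)):
    # weights=None: ImageNet weights expect 3 channels, our input has 2
    base = InceptionV3(include_top=False, weights=None,
                       input_shape=input_shape, pooling="avg")
    out = layers.Dense(1, activation="sigmoid")(base.output)  # malignancy probability
    return Model(base.input, out)

# Stack the B-mode image and its manual nodule mask as the two input channels
bmode = np.random.rand(1, 299, 299).astype("float32")  # placeholder B-mode image
mask = np.zeros((1, 299, 299), dtype="float32")        # placeholder nodule mask
x = np.stack([bmode, mask], axis=-1)                   # shape: (1, 299, 299, 2)

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy")
print(model.predict(x).shape)  # (1, 1): one malignancy probability per nodule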
Removing non-brain tissues such as the skull, scalp, and face from head computed tomography (CT) images is an important problem in brain image processing applications. It is a prerequisite step in numerous quantitative imaging analyses of neurological diseases, as it improves computational speed and the accuracy of quantitative analyses and image coregistration. In this study, we present an accurate method based on fully convolutional neural networks (fCNN) to remove non-brain tissues from head CT images in a time-efficient manner. The method consists of an encoding part, a sequence of convolutional filters that produces activation maps of the input image in a low-dimensional space, and a decoding part, consisting of convolutional filters that reconstruct the input image from this reduced representation. We trained the fCNN on 122 volumetric head CT images and tested it on 22 unseen volumetric head CT images against an expert's manual brain segmentation masks. The performance of our method on the test set was: Dice coefficient = 0.998±0.001 (mean ± standard deviation), recall = 0.998±0.001, precision = 0.998±0.001, and accuracy = 0.9995±0.0001. Our method extracts the complete brain volume from a head CT image in 2 s, which is much faster than previous methods. To the best of our knowledge, this is the first study using an fCNN to perform skull stripping on CT images. Our fCNN-based approach provides accurate extraction of brain tissue from head CT images in a time-efficient manner.
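The encoder-decoder structure described above can be sketched as below. This is a minimal Keras illustration; the layer counts, filter sizes, and input resolution are illustrative assumptions, not the paper's architecture.

from tensorflow.keras import layers, Model

def build_fcnn(input_shape=(256, 256, 1)):
    inp = layers.Input(shape=input_shape)
    # Encoder: convolutions + downsampling produce activation maps in a
    # low-dimensional space
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2)(x)
    # Decoder: convolutions + upsampling reconstruct a full-resolution map
    # from the reduced representation
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    # One output channel: per-pixel probability of brain tissue
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return Model(inp, out)

model = build_fcnn()
model.compile(optimizer="adam", loss="binary_crossentropy")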
In this work, we investigate nonrigid motion compensation in simultaneously acquired (side-by-side) B-mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) image sequences of the carotid artery. These images are acquired to study the presence of intraplaque neovascularization (IPN), a marker of plaque vulnerability. For IPN quantification, IPN is visualized by computing the maximum intensity projection (MIP) of the CEUS image sequence over time. As carotid image sequences contain considerable motion, accurate global nonrigid motion compensation (GNMC) is required prior to the MIP. Moreover, we demonstrate that improved lumen and plaque differentiation can be obtained by averaging the motion-compensated BMUS images over time. We propose to use a previously published 2D+t nonrigid registration method, which is based on minimizing pixel intensity variance over time using a spatially and temporally smooth B-spline deformation model. For validation, we compared displacements of plaque points with manual tracking by three experts in 11 carotids. The average (± standard deviation) root mean square error (RMSE) was 99±74μm for longitudinal and 47±18μm for radial displacements. These results were comparable with the interobserver variability and with the results of a local rigid registration technique based on speckle tracking, which estimates motion at a single point, whereas our approach applies motion compensation to the entire image. In conclusion, our evaluation shows that the GNMC technique produces reliable results. Since it tracks global deformations, it can aid in the quantification of IPN and the delineation of lumen and plaque contours.
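The registration criterion, minimizing pixel intensity variance over time, can be sketched as below. This shows only the cost function; in the full method this cost is minimized over a spatially and temporally smooth B-spline deformation applied to the frames, which is not shown here.

import numpy as np

def variance_over_time_cost(frames):
    """frames: array of shape (T, H, W) of motion-compensated images."""
    # Per-pixel variance across the T time frames, averaged over the image;
    # perfectly aligned (static) sequences give zero cost, motion raises it.
    return np.var(frames, axis=0).mean()

static = np.tile(np.random.rand(64, 64), (10, 1, 1))  # 10 identical frames
print(variance_over_time_cost(static))  # ~0.0 for a motion-free sequence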
Patients with carotid atherosclerotic plaques carry an increased risk of cardiovascular events such as stroke. Ultrasound is a standard modality for the diagnosis of carotid atherosclerosis. To assess atherosclerosis, the intima contour of the carotid artery lumen should be accurately outlined. For this purpose, we use simultaneously acquired side-by-side longitudinal contrast-enhanced ultrasound (CEUS) and B-mode ultrasound (BMUS) images and exploit the information in the two imaging modalities for accurate lumen segmentation. First, nonrigid motion compensation is performed on both the BMUS and CEUS image sequences, followed by averaging over the 150 time frames to produce an image with improved signal-to-noise ratio (SNR). Next, we segment the lumen from these images using a novel dynamic-programming-based method that uses the joint histogram of the CEUS and BMUS image pair to distinguish between background, lumen, tissue, and artifacts. Finally, the lumen contour obtained in the improved-SNR mean image is transformed back to each time frame of the original image sequence. Validation was performed by comparing manual lumen segmentations by two independent observers with automated lumen segmentations in the improved-SNR images of 9 carotid arteries from 7 patients. The root mean square error between the two observers was 0.17±0.10mm, and between the automated segmentation and the average of the two observers' manual segmentations it was 0.19±0.06mm. In conclusion, we present a robust and accurate carotid lumen segmentation method that overcomes the complexity of anatomical structures, noise in the lumen, artifacts, and echolucent plaques by exploiting the information in this combined imaging modality.
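The joint-histogram step that pairs the two modalities can be sketched as below. The bin count and intensity normalization are illustrative assumptions; the dynamic-programming contour search that operates on the resulting class likelihoods is not shown.

import numpy as np

def joint_histogram(bmus, ceus, bins=64):
    """2D histogram of paired BMUS/CEUS pixel intensities (normalized to [0, 1])."""
    hist, _, _ = np.histogram2d(bmus.ravel(), ceus.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist

# Intuition: lumen is dark in BMUS but bright in CEUS (contrast-filled),
# tissue is bright in BMUS and dark in CEUS, so the classes separate
# into distinct regions of this joint-intensity plane.
bmus_mean = np.random.rand(128, 128)  # placeholder motion-compensated mean images
ceus_mean = np.random.rand(128, 128)
h = joint_histogram(bmus_mean, ceus_mean)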
In several studies, intraplaque neovascularization (IPN) has been linked with plaque vulnerability. The recent development of contrast-enhanced ultrasound enables IPN detection, but accurate quantification of IPN remains challenging due to noise, motion, subtle contrast response, blooming of contrast, and artifacts. We present an algorithm that automatically estimates the location and amount of contrast within the plaque over time. Plaque pixels are initially labeled through an iterative expectation-maximization (EM) algorithm. This algorithm avoids several drawbacks of standard EM: it is capable of selecting the best number of components in an unsupervised way, based on a minimum message length criterion. Next, neighborhood information from a 5×5 kernel and spatiotemporal behavior are combined with the known characteristics of contrast spots to group components, identify artifacts, and finalize the classification. Image sequences are divided into 3-second subgroups; a pixel is relabeled as an artifact if it is labeled as contrast for more than 1.5 seconds in at least two subgroups. For 10 plaques, automated segmentation results were validated against manual segmentation of contrast in 10 frames per clip. The average Dice index and area ratio were 0.73±0.1 (mean±SD) and 98.5±29.6%, respectively. Next, 45 atherosclerotic plaques were analyzed and the time-integrated IPN surface area was calculated. The average IPN area was 3.73±3.51 mm2, and the average plaque area was 11.6±8.6 mm2. This EM-based contrast segmentation method provides a new way of quantifying IPN.
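The spatiotemporal artifact rule stated above can be sketched as below: a pixel labeled as contrast for more than 1.5 seconds within at least two 3-second subgroups is relabeled as an artifact. The frame rate and array names are assumptions for illustration.

import numpy as np

def artifact_mask(contrast_labels, fps=10.0, subgroup_s=3.0,
                  min_s=1.5, min_groups=2):
    """contrast_labels: boolean array (T, H, W), True where labeled as contrast."""
    per_group = int(round(subgroup_s * fps))  # frames per 3-second subgroup
    min_frames = min_s * fps                  # 1.5 seconds expressed in frames
    T = contrast_labels.shape[0]
    groups_exceeding = np.zeros(contrast_labels.shape[1:], dtype=int)
    for start in range(0, T - per_group + 1, per_group):
        # Count contrast-labeled frames for each pixel within this subgroup
        counts = contrast_labels[start:start + per_group].sum(axis=0)
        groups_exceeding += counts > min_frames
    # True where the pixel should be relabeled as an artifact
    return groups_exceeding >= min_groups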
Intraplaque neovascularization (IPN) has been linked with progressive atherosclerotic disease and plaque instability in several studies. Quantification of IPN may allow early detection of vulnerable plaques. A dedicated motion compensation method combining normalized cross-correlation (NCC) block matching with multidimensional (2D+time) dynamic programming (MDP) was developed for quantification of IPN in small plaques (<30% diameter stenosis). The method was compared to NCC block matching without MDP (forward tracking, FT) and was shown to improve motion tracking. Side-by-side CEUS and B-mode ultrasound images of carotid arteries were acquired with a Philips iU22 system with an L9-3 linear array probe. The motion pattern of the plaque region was obtained from the B-mode images with MDP. The MDP results were evaluated in vitro with a phantom and in vivo by comparison with manual tracking by three experts on multibeat image sequences (MIS) of 11 plaques. In the in-vivo images, the absolute error was 72±55μm (mean±SD) for X (longitudinal) and 34±23μm for Y (radial). The method's success rate was visually assessed on 67 MIS. Tracking was considered failed if it deviated >2 pixels (~200μm) from the true motion in any frame. Tracking was scored as fully successful in 63 MIS (94%) for MDP vs. 52 (78%) for FT. The range of displacement over these 63 MIS was 1045±471μm (X) and 395±216μm (Y). Tracking failed in 4 MIS (6%) due to poor image quality, jugular vein proximity, and out-of-plane motion. Motion compensation showed improved lumen-plaque contrast separation. In conclusion, the proposed method is sufficiently accurate and successful for in-vivo application.
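The NCC block-matching component can be sketched as below; the block size and search range are illustrative assumptions. In the full method, a 2D+time dynamic programming step is applied on top of these per-frame NCC scores to enforce a temporally smooth motion trajectory, which is not shown here.

import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def block_match(ref, cur, center, block=15, search=10):
    """Displacement of a block around `center` from frame ref to frame cur.
    Assumes the search window stays inside the image bounds."""
    half = block // 2
    cy, cx = center
    template = ref[cy - half:cy + half + 1, cx - half:cx + half + 1]
    best_score, best_d = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            candidate = cur[y - half:y + half + 1, x - half:x + half + 1]
            score = ncc(template, candidate)
            if score > best_score:
                best_score, best_d = score, (dy, dx)
    return best_d  # (radial, longitudinal) displacement in pixels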