Automatic whole slide (WS) tissue image segmentation is an important problem in digital pathology. A conventional classification-based (CCb) method for this problem trains a classifier on a pre-built training database (pre-built DB) obtained from a set of training WS images, and uses it to classify all image pixels or image patches (test samples) in a test WS image into different tissue types. This method suffers from a major challenge in WS image analysis: strong inter-slide tissue variability (ISTV), i.e., the variability of tissue appearance from slide to slide. Due to this ISTV, the test samples are often very different from the training data, which is a major source of misclassification. To address the ISTV, we propose a novel method, called slide-adapted classification (SAC), that extends the CCb method. We assume that the test WS image contains, besides regions that deviate strongly from the pre-built DB, regions with lower variation from this DB. The SAC method therefore performs a two-stage classification: it first classifies all test samples in the WS image (as in the CCb method) and computes their classification confidence scores. Next, the samples classified with high confidence (samples reliably classified due to their low variation from the pre-built DB) are combined with the pre-built DB to form an adaptive training DB used to reclassify the low-confidence samples. The method is motivated by the large size of the test WS image (yielding a large number of high-confidence samples) and by the lower variability between the low- and high-confidence samples (both belonging to the same WS image) compared to the ISTV. Using the proposed SAC method to segment a large dataset of 24 WS images, we improve the accuracy over the CCb method.
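The two-stage scheme can be sketched as follows. Since the abstract does not name a particular classifier or confidence measure, this minimal illustration substitutes a nearest-centroid classifier with a margin-based confidence score; all function names are hypothetical.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Fit one centroid per tissue class (a stand-in for the paper's classifier)."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def classify_with_confidence(X, classes, centroids):
    """Return labels and a margin-based confidence score per sample."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    order = np.sort(d, axis=1)
    labels = classes[np.argmin(d, axis=1)]
    # Confidence: relative margin between closest and second-closest centroid.
    conf = (order[:, 1] - order[:, 0]) / (order[:, 1] + 1e-12)
    return labels, conf

def slide_adapted_classify(X_train, y_train, X_test, conf_thresh=0.3):
    """Two-stage SAC sketch: classify, then reclassify the low-confidence
    samples with an adaptive DB built from the high-confidence ones."""
    classes, centroids = nearest_centroid_fit(X_train, y_train)
    labels, conf = classify_with_confidence(X_test, classes, centroids)
    high = conf >= conf_thresh
    if high.any() and not high.all():
        # Adaptive DB: pre-built training data + confidently classified test samples.
        X_adapt = np.vstack([X_train, X_test[high]])
        y_adapt = np.concatenate([y_train, labels[high]])
        classes2, centroids2 = nearest_centroid_fit(X_adapt, y_adapt)
        labels[~high], _ = classify_with_confidence(X_test[~high], classes2, centroids2)
    return labels
```

The confidence threshold is an assumed free parameter; the abstract does not state how the high/low split is chosen.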
In the popular Nottingham histologic score system for breast cancer grading, the pathologist analyzes the H&E tissue slides and assigns a score from 1 to 3 for each of tubule formation, nuclear pleomorphism, and mitotic activity in the tumor regions. The scores from these three factors are added to give a final score, ranging from 3 to 9, to grade the cancer. The tubule score (TS), which reflects tubule formation, is a value from 1 to 3 assigned by manually estimating the percentage of glandular regions in the tumor that form tubules. In this paper, given an H&E tissue image representing a tumor region, we propose an automated algorithm to detect glandular regions and to detect the presence of tubules in these regions. The algorithm first detects all nuclei and lumen candidates in the input image, then identifies tumor nuclei among the detected nuclei and true lumina among the lumen candidates using a random forest classifier. Finally, it forms the glandular regions by grouping closely located tumor nuclei and lumina using a graph-cut-based method. The glandular regions containing true lumina are considered to be those that form tubules (tubule regions). To evaluate the proposed method, we compute the tubule percentage (TP), i.e., the ratio of the tubule area to the total glandular area, for 353 H&E images spanning the three TSs, and plot the distribution of these TP values. The plot shows a clear separation among the three scores, suggesting that the proposed algorithm is useful in distinguishing images of these TSs.
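The TP computation itself is a simple area ratio. The sketch below also maps TP to a tubule score using the usual Elston-Ellis cut-offs (>75% gives 1, 10-75% gives 2, <10% gives 3); those cut-offs are standard for the Nottingham system but are not part of this paper's evaluation, which compares TP distributions directly.

```python
import numpy as np

def tubule_percentage(glandular_mask, tubule_mask):
    """TP = tubule area / total glandular area (both masks are boolean images)."""
    glandular_area = glandular_mask.sum()
    if glandular_area == 0:
        return 0.0
    # Tubule regions are by construction a subset of the glandular regions.
    return float((tubule_mask & glandular_mask).sum() / glandular_area)

def tubule_score(tp):
    """Map TP to a Nottingham tubule score with the conventional Elston-Ellis
    cut-offs; the paper itself plots TP distributions rather than thresholding."""
    if tp > 0.75:
        return 1
    if tp >= 0.10:
        return 2
    return 3
```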
We present a novel algorithm for the registration of multiple temporally related point sets. Although our algorithm is derived in a general setting, our primary motivating application is coronary tree matching in multi-phase cardiac spiral CT. Our algorithm builds upon the fast, outlier-resistant Coherent Point Drift (CPD) algorithm, but incorporates temporal consistency constraints between the point sets, resulting in spatiotemporally smooth displacement fields. We preserve the speed and robustness of the CPD algorithm by using the technique of separable surrogates within an EM (Expectation-Maximization) optimization framework, while still minimizing a global registration cost function employing both spatial and temporal regularization. We demonstrate the superiority of our novel temporally consistent group-wise CPD algorithm over a straightforward pair-wise approach employing the original CPD algorithm, using coronary trees derived from both simulated and real cardiac CT data. In all tested configurations and datasets, our method yields a lower average error between tree landmarks than the pair-wise method. In the worst case the difference is only a few micrometers, while in the best case our method halves the error of the pair-wise method. This improvement is especially important for datasets with numerous outliers. With a fixed set of parameters tuned automatically, our algorithm yields better results than the original CPD algorithm, demonstrating its capacity to register an unknown dataset without a priori information.
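A minimal way to picture the temporal consistency constraint is as a penalty on the second temporal differences of the per-phase displacement fields. The actual algorithm folds such a term into the CPD cost and optimizes it with separable surrogates inside EM, so the function below is only an illustrative stand-in with hypothetical names.

```python
import numpy as np

def temporal_consistency_penalty(displacements, weight=1.0):
    """Penalty on second temporal differences of per-phase displacement fields.

    `displacements` has shape (T, N, 3): T cardiac phases, N points, 3D
    displacement per point. A small value means the point trajectories vary
    smoothly over the cycle, which is the kind of spatiotemporal smoothness
    the group-wise algorithm enforces on top of the CPD registration cost.
    """
    d = np.asarray(displacements, dtype=float)
    # Second-order temporal difference: d[t-1] - 2*d[t] + d[t+1].
    accel = d[:-2] - 2.0 * d[1:-1] + d[2:]
    return weight * float((accel ** 2).sum())
```

Linear-in-time trajectories incur zero penalty, so the term discourages jitter between phases without forbidding steady motion.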
In this work, we have developed a novel knowledge-driven quasi-global method for fast and robust registration of thoracic-abdominal CT and cone beam CT (CBCT) scans. While the use of CBCT in operating rooms has become common practice, there is increasing demand for registering CBCT with pre-operative scans, in many cases CT scans. One of the major challenges of thoracic-abdominal CT/CBCT registration stems from the differing fields of view (FOVs) of the two imaging modalities. The proposed approach utilizes a priori knowledge of anatomy to generate 2D anatomy-targeted projection (ATP) images that serve as surrogates for the original volumes. Using lower-dimensional surrogate images significantly reduces the computational cost of similarity evaluation during optimization and makes global-optimization-based registration practically feasible for image-guided interventional procedures. A priori knowledge about the distribution of local optima on the energy curves is further used to effectively select multiple starting points for the registration optimization. Twenty clinical data sets were used to validate the method, and the target registration error (TRE) and maximum registration error (MRE) were calculated to compare the performance of the knowledge-driven quasi-global registration against a typical local-search-based registration. The local-search-based registration failed on 60% of the cases, with an average TRE of 22.9 mm and MRE of 28.1 mm; the knowledge-driven quasi-global registration achieved satisfactory results on all 20 data sets, with an average TRE of 3.5 mm and MRE of 2.6 mm. The average computation time for the knowledge-driven quasi-global registration is 8.7 seconds.
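The multi-start idea can be illustrated in one dimension: coarsely sample the (surrogate) energy curve, keep the local minima of the samples as starting points, and refine each with a local search. This toy sketch is not the paper's method, whose starting-point selection exploits learned knowledge of the local-optima distribution; all names and parameters here are hypothetical.

```python
import numpy as np

def multi_start_registration(energy, grid, n_starts=3, step=0.05, iters=100):
    """Quasi-global 1D search: coarse sampling, multi-start, local refinement.

    `energy` and the 1D translation `grid` are toy stand-ins for the
    similarity evaluated on the lower-dimensional ATP surrogate images.
    """
    values = np.array([energy(x) for x in grid])
    # Coarse local minima: samples lower than both of their neighbours.
    interior = np.where((values[1:-1] < values[:-2]) &
                        (values[1:-1] < values[2:]))[0] + 1
    starts = grid[interior[np.argsort(values[interior])][:n_starts]]
    best_x, best_e = None, np.inf
    for x in starts:
        for _ in range(iters):  # crude derivative-free descent around each start
            for cand in (x - step, x + step):
                if energy(cand) < energy(x):
                    x = cand
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    return best_x, best_e
```

Starting from several promising basins is what lets the search escape the local optima that defeat a single local registration.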
Graph based semi-automatic tumor segmentation techniques have demonstrated great potential in efficiently measuring
tumor size from CT images. Comprehensive and quantitative validation is essential to ensure the efficacy of graph based
tumor segmentation techniques in clinical applications. In this paper, we present a quantitative validation study of six
graph based 3D semi-automatic tumor segmentation techniques using multiple sets of expert segmentation. The six
segmentation techniques are Random Walk (RW), Watershed based Random Walk (WRW), LazySnapping (LS),
GraphCut (GHC), GrabCut (GBC), and GrowCut (GWC) algorithms. The validation was conducted using clinical CT
data of 29 liver tumors and four sets of expert segmentation. The performance of the six algorithms was evaluated using
accuracy and reproducibility. Accuracy was quantified using the Normalized Probabilistic Rand Index (NPRI), which
accounts for the variation among multiple expert segmentations. Reproducibility was evaluated by the change in the
NPRI across 10 different sets of user initializations. Our results from the accuracy test showed that RW achieved the
highest NPRI value (0.63), compared to WRW (0.61), GWC (0.60), GHC (0.58), LS (0.57), and GBC (0.27). The
results from the reproducibility test indicated that GBC is more sensitive to user initialization than the other five
algorithms. Compared to previous tumor segmentation validation studies that used a single set of reference
segmentations, our evaluation uses multiple sets of expert segmentation to address inter- and intra-rater variability in
ground truth annotation, and provides a quantitative assessment for comparing different segmentation algorithms.
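The plain Rand index underlying the NPRI can be sketched directly; the NPRI additionally normalizes for chance agreement and accounts for variability among the experts, so the stand-in below only conveys the multi-expert averaging idea, with hypothetical function names.

```python
import numpy as np
from itertools import combinations

def rand_index(a, b):
    """Rand index between two label images (flattened): the fraction of pixel
    pairs on which the two segmentations agree about same-label vs
    different-label membership. O(n^2) pairs, so suitable only for tiny inputs."""
    a, b = np.ravel(a), np.ravel(b)
    n = a.size
    agree = 0
    for i, j in combinations(range(n), 2):
        agree += (a[i] == a[j]) == (b[i] == b[j])
    return agree / (n * (n - 1) / 2)

def multi_expert_rand(seg, expert_segs):
    """Average Rand index of one algorithmic segmentation against several
    expert segmentations -- a simplified stand-in for the NPRI."""
    return float(np.mean([rand_index(seg, e) for e in expert_segs]))
```

Because the index compares pair memberships rather than raw labels, it is invariant to label permutations, which is what makes averaging over independently annotated expert sets meaningful.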
The task of registering 3D medical images is very computationally expensive. With CPU-based implementations
of registration algorithms it is typical to use various approximations, such as subsampling,
to maintain reasonable computation times. This may however result in suboptimal alignments. With
the constant increase in the capabilities and performance of GPUs (Graphics Processing Units), these
highly vectorized processors have become a viable alternative to CPUs for image-related computation
tasks. This paper describes new strategies to implement on GPU the computation of image similarity
metrics for intensity-based registration, using in particular the latest features of NVIDIA's GeForce
8 architecture and the Cg language. Our experimental results show that the computations are many
times faster than their CPU counterparts. In this paper, several GPU implementations of two image similarity criteria for both intra-modal
and multi-modal registration have been compared. In particular, we propose a new efficient and
flexible solution based on the geometry shader.
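For reference, one of the intra-modal similarity criteria such a system evaluates is normalized cross-correlation. A plain CPU version (below) makes explicit the per-pixel products and global sums that a GPU implementation maps to shader passes and parallel reductions; this is an illustrative sketch, not the paper's Cg code.

```python
import numpy as np

def normalized_cross_correlation(fixed, moving):
    """NCC between a fixed and a (resampled) moving image.

    On the GPU, the per-pixel centered products below become fragment-shader
    outputs, and the three global sums become parallel reduction passes;
    this CPU version shows only what is being computed, not how.
    """
    f = fixed.astype(float) - fixed.mean()
    m = moving.astype(float) - moving.mean()
    denom = np.sqrt((f * f).sum() * (m * m).sum())
    return float((f * m).sum() / denom) if denom > 0 else 0.0
```

NCC is invariant to affine intensity changes, which is why it suits intra-modal registration where the two images differ mainly in gain and offset.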
We introduce two image alignment measures using Earth Mover's Distance (EMD) as a metric on the space of
joint intensity distributions. Our first approach consists of computing EMD between a joint distribution and the
product of its marginals. This yields a measure of statistical dependence comparable to Mutual Information, a
criterion widely used for multimodal image registration. When a priori knowledge is available, we also propose
to compute EMD between the observed distribution and a joint distribution estimated from pairs of pre-aligned
images. EMD is a cross-bin dissimilarity function and generally offers generalization ability superior
to that of previously proposed bin-to-bin metrics, such as the Kullback-Leibler divergence. Computing EMD amounts to solving an
optimal mass transport problem, whose solution can be obtained very efficiently using an algorithm recently
proposed by Ling and Okada. We performed a preliminary experimental evaluation of this approach with
real and simulated MR images. Our results show that EMD-based measures can be efficiently applied to rigid
registration tasks.
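The first measure is easy to set up: build the joint intensity histogram of the two images and the product of its marginals. Computing the KL divergence between these two distributions recovers ordinary mutual information; the paper's contribution is to replace that bin-to-bin KL with the cross-bin EMD between the same pair of distributions. The transport solver itself is omitted here, and the function names are hypothetical.

```python
import numpy as np

def joint_histogram(img_a, img_b, bins=32):
    """Joint intensity distribution of two aligned images, as a normalized
    2D histogram over intensity bins."""
    h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return h / h.sum()

def mutual_information(p_joint):
    """MI = KL(joint || product of marginals), in nats.

    The EMD-based measure of the paper compares the same two distributions
    but with a cross-bin transport cost instead of this bin-to-bin KL term.
    """
    px = p_joint.sum(axis=1, keepdims=True)   # marginal of the first image
    py = p_joint.sum(axis=0, keepdims=True)   # marginal of the second image
    p_indep = px * py                          # distribution under independence
    mask = p_joint > 0
    return float((p_joint[mask] * np.log(p_joint[mask] / p_indep[mask])).sum())
```

A perfectly dependent pair (e.g. an image against itself) maximizes the score, while two unrelated images score near zero, which is exactly the behaviour a registration criterion needs.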
We propose a registration method for the alignment of contrast-enhanced CT liver images. It consists of a fluid-based registration algorithm designed to incorporate a volume-preserving constraint. More specifically, our objective is to recover an accurate non-rigid transformation in a perfusion study, in the presence of contrast-enhanced structures, that preserves the incompressibility of liver tissue. This transformation is obtained by integrating a smooth divergence-free vector field derived from the gradient of a statistical similarity measure. This gradient is regularized with a fast recursive low-pass filter and is projected onto the space of divergence-free vector fields using a multigrid solver. Both 2D and 3D versions of the algorithm have been implemented. Simulations and experiments show that our approach improves the registration capture range, enforces the incompressibility constraint with a good level of accuracy, and is computationally efficient. On perfusion studies, this method prevents the shrinkage of contrast-enhanced regions typically observed with standard fluid methods.