Second-harmonic generation (SHG) imaging can help reveal interactions between collagen fibers and cancer cells. Quantitative analysis of SHG images of collagen fibers is challenged by the heterogeneity of collagen structures and low signal-to-noise ratio often found while imaging collagen in tissue. The role of collagen in breast cancer progression can be assessed post acquisition via enhanced computation. To facilitate this, we have implemented and evaluated four algorithms for extracting fiber information, such as number, length, and curvature, from a variety of SHG images of collagen in breast tissue. The image-processing algorithms included a Gaussian filter, SPIRAL-TV filter, Tubeness filter, and curvelet-denoising filter. Fibers are then extracted using an automated tracking algorithm called fiber extraction (FIRE). We evaluated the algorithm performance by comparing length, angle and position of the automatically extracted fibers with those of manually extracted fibers in twenty-five SHG images of breast cancer. We found that the curvelet-denoising filter followed by FIRE, a process we call CT-FIRE, outperforms the other algorithms under investigation. CT-FIRE was then successfully applied to track collagen fiber shape changes over time in an in vivo mouse model for breast cancer.
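As a point of illustration, the following is a minimal, hypothetical sketch of the kind of pre-filtering step described above: Gaussian smoothing followed by a tubeness-style ridge filter to enhance curvilinear collagen structure before fiber tracking. The function choices and parameters (e.g., `skimage.filters.sato`, the chosen sigmas) are illustrative assumptions and are not the published CT-FIRE implementation.

```python
# Illustrative SHG pre-filtering: smooth, then enhance fiber-like ridges.
# Not the CT-FIRE code; parameters are arbitrary example values.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import sato  # multiscale ridge ("tubeness") filter

def enhance_fibers(shg_image, smooth_sigma=1.0, ridge_sigmas=(1, 2, 3)):
    """Denoise an SHG image and enhance fiber-like (curvilinear) structures."""
    img = shg_image.astype(float)
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)  # normalize to [0, 1]
    img = gaussian_filter(img, sigma=smooth_sigma)             # suppress shot noise
    return sato(img, sigmas=ridge_sigmas, black_ridges=False)  # enhance bright ridges
```

The enhanced image would then be passed to a fiber-tracking step such as FIRE to extract fiber number, length, and curvature.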
Compressive sampling (CS), or Compressed Sensing, has generated a tremendous amount of excitement in the signal processing community. Compressive sampling, which involves non-traditional samples in the form of randomized projections, can capture most of the salient information in a signal with a relatively small number of samples, often far fewer than required by traditional sampling schemes. Adaptive sampling (AS), also called Active Learning, uses information gleaned from previous observations (e.g., feedback) to focus the sampling process. Theoretical and experimental results have shown that adaptive sampling can dramatically outperform conventional (non-adaptive) sampling schemes. This paper compares the theoretical performance of compressive and adaptive sampling for regression in noisy conditions and shows that, for certain classes of piecewise constant signals and high-SNR regimes, both CS and AS are near optimal. This result is remarkable because it is the first evidence that compressive sampling, which is non-adaptive, cannot be significantly outperformed by any other method (including adaptive sampling procedures), even in the presence of noise. The performance of CS schemes for signal detection is also investigated.
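As a toy illustration of the adaptive-sampling idea (not the estimators analyzed in the paper), the sketch below concentrates successive batches of noisy samples around the apparent jump of a piecewise constant signal. The test signal, noise level, and bisection-style feedback rule are all assumptions made for the example.

```python
# Schematic adaptive sampling: zoom the sampling interval in on the
# largest observed jump of a noisy piecewise constant signal.
import numpy as np

rng = np.random.default_rng(0)

def f(x, step=0.4):
    """Piecewise constant test signal with a jump at `step`."""
    return np.where(x < step, 0.0, 1.0)

def noisy_sample(x, sigma=0.1):
    return f(x) + sigma * rng.standard_normal(np.shape(x))

def adaptive_change_point(n_rounds=6, samples_per_round=8):
    lo, hi = 0.0, 1.0
    for _ in range(n_rounds):
        x = np.linspace(lo, hi, samples_per_round)
        y = noisy_sample(x)
        # The largest successive difference marks the suspected jump;
        # feedback from this round focuses the next round's samples there.
        j = np.argmax(np.abs(np.diff(y)))
        lo, hi = x[j], x[j + 1]
    return 0.5 * (lo + hi)

print(adaptive_change_point())  # should approach the true jump at 0.4
```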
Compressive Sampling, or Compressed Sensing, has recently generated a tremendous amount of excitement in the image processing community. Compressive Sampling involves taking a relatively small number of non-traditional samples in the form of projections of the signal onto random basis elements or random vectors (random projections). Recent results show that such observations can contain most of the salient information in the signal. It follows that if a signal is compressible in some basis, then a very accurate reconstruction can be obtained from these observations. In many cases this reconstruction is much more accurate than is possible using an equivalent number of conventional point samples. This paper motivates the use of Compressive Sampling for imaging, presents theory predicting reconstruction error rates, and demonstrates its performance in electronic imaging with an example.
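A minimal compressive-sampling sketch, under simple assumptions: a signal that is sparse in the canonical basis is observed through noisy random Gaussian projections and reconstructed by l1-regularized least squares (a basic ISTA loop). A real imaging application would use a wavelet or DCT sparsifying basis and a more sophisticated solver; the dimensions and regularization weight below are arbitrary.

```python
# Compressed sensing toy example: recover a sparse signal from
# noisy random projections via iterative soft-thresholding (ISTA).
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 80, 8                            # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)    # random projection matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)  # noisy random projections

# ISTA for  min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
lam, step = 0.02, 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - y)
    x = x - step * grad
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```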
Tree-structured partitions provide a natural framework for rapid and accurate extraction of level sets of a multivariate function f from noisy data. In general, a level set S is the set on which f exceeds some critical value (e.g. S = {x : f(x) ≥ γ}). Boundaries of such sets typically constitute manifolds embedded in the high-dimensional observation space. The identification of these boundaries is an important theoretical problem with applications for digital elevation maps, medical imaging, and pattern recognition. Because set identification is intrinsically simpler than function denoising or estimation, explicit set extraction methods can achieve higher accuracy than more indirect approaches (such as extracting a set of interest from an estimate of the function). The trees underlying our method are constructed by minimizing a complexity regularized data-fitting term over a family of dyadic partitions. Using this framework, problems such as simultaneous estimation of multiple (non-intersecting) level lines of a function can be readily solved from both a theoretical and practical perspective. Our method automatically adapts to spatially varying regularity of both the boundary of the level set and the function underlying the data. Level set extraction using multiresolution trees can be implemented in near linear time and specifically aims to minimize an error metric sensitive to both the error in the location of the level set and the distance of the function from the critical level. Translation invariant "voting-over-shifts" set estimates can also be computed rapidly using an algorithm based on the undecimated wavelet transform.
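The sketch below is a toy version of the tree-structured idea: each dyadic block of noisy observations is either labeled as lying above or below the critical level gamma, or split into four children, whichever gives the smaller penalized cost. The specific misfit measure (distance from the critical level on misclassified samples) and the penalty constant are illustrative assumptions, not the estimator analyzed in the paper.

```python
# Toy level-set estimation over a recursive dyadic partition.
import numpy as np

def estimate_level_set(y, gamma, penalty=0.5, min_size=2):
    """Boolean estimate of {f >= gamma} from noisy samples y on a 2-D grid."""
    est = np.zeros(y.shape, dtype=bool)

    def fit(r0, r1, c0, c1):
        block = y[r0:r1, c0:c1]
        label = block.mean() >= gamma
        # Misfit: samples on the wrong side of gamma for this label,
        # weighted by their distance from the critical level.
        disagree = (block >= gamma) != label
        keep_cost = np.abs(block - gamma)[disagree].sum() + penalty
        h, w = r1 - r0, c1 - c0
        if h >= 2 * min_size and w >= 2 * min_size:
            rm, cm = r0 + h // 2, c0 + w // 2
            split_cost = (fit(r0, rm, c0, cm) + fit(r0, rm, cm, c1) +
                          fit(rm, r1, c0, cm) + fit(rm, r1, cm, c1))
            if split_cost < keep_cost:
                return split_cost
        est[r0:r1, c0:c1] = label     # keep: overwrite any child labels
        return keep_cost

    fit(0, y.shape[0], 0, y.shape[1])
    return est

# Example: super-level set of a Gaussian bump observed in noise.
xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
noisy = np.exp(-4 * (xx ** 2 + yy ** 2)) \
        + 0.1 * np.random.default_rng(2).standard_normal((64, 64))
S_hat = estimate_level_set(noisy, gamma=0.5)
```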
The nonparametric multiscale polynomial and platelet methods presented here are powerful new tools for signal and image denoising and reconstruction. Unlike traditional wavelet-based multiscale methods, these methods are both well suited to processing Poisson or multinomial data and capable of preserving image edges. At the heart of these new methods lie multiscale signal decompositions based on polynomials in one dimension and multiscale image decompositions based on what the authors call platelets in two dimensions. Platelets are localized functions at various positions, scales and orientations that can produce highly accurate, piecewise linear approximations to images consisting of smooth regions separated by smooth boundaries. Polynomial and platelet-based maximum penalized likelihood methods for signal and image analysis are both tractable and computationally efficient. Polynomial methods offer near minimax convergence rates for broad classes of functions including Besov spaces. Upper bounds on the estimation error are derived using an information-theoretic risk bound based on squared Hellinger loss. Simulations establish the practical effectiveness of these methods in applications such as density estimation, medical imaging, and astronomy.
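As a one-dimensional, piecewise-constant analogue of the penalized-likelihood estimators described above, the sketch below fits Poisson count data with a recursive dyadic partition, trading off the negative log-likelihood of each leaf against a per-leaf penalty. Platelets themselves (piecewise-linear two-dimensional atoms) are not implemented, and the penalty value is an arbitrary choice.

```python
# 1-D multiscale Poisson intensity estimation via penalized likelihood
# over a recursive dyadic partition (illustrative analogue only).
import numpy as np

def poisson_block_nll(counts):
    """Negative Poisson log-likelihood of a block at its MLE rate (constants dropped)."""
    lam = max(counts.mean(), 1e-12)
    return float(lam * counts.size - counts.sum() * np.log(lam))

def multiscale_poisson_fit(counts, penalty=2.0, min_len=4):
    """Return a piecewise-constant intensity estimate for 1-D count data."""
    est = np.empty(counts.shape, dtype=float)

    def fit(lo, hi):
        block = counts[lo:hi]
        keep_cost = poisson_block_nll(block) + penalty
        if hi - lo >= 2 * min_len:
            mid = (lo + hi) // 2
            split_cost = fit(lo, mid) + fit(mid, hi)
            if split_cost < keep_cost:
                return split_cost
        est[lo:hi] = block.mean()
        return keep_cost

    fit(0, counts.size)
    return est

# Example: counts drawn from a piecewise-constant intensity.
rng = np.random.default_rng(3)
truth = np.concatenate([np.full(96, 5.0), np.full(160, 20.0)])
estimate = multiscale_poisson_fit(rng.poisson(truth))
```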
Despite the success of wavelet decompositions in other areas of statistical signal and image processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations (e.g., translation, rotation, location of lighting source) inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown translation and rotation. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR (Template Learning from Atomic Representations), a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length (MDL) complexity regularization to learn a template with a sparse representation in the wavelet domain. We discuss several applications, including template learning, pattern classification, and image registration.
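The following is a schematic, hypothetical sketch in the spirit of TEMPLAR rather than the algorithm itself: observations are coarsely aligned to the current template by integer translation, averaged, and the template is kept sparse by retaining only the largest wavelet coefficients. The MDL criterion is replaced here by a fixed coefficient budget, rotation is ignored, and the helper names are invented for the example.

```python
# Schematic wavelet-domain template learning with translation alignment.
import numpy as np
import pywt

def best_integer_shift(obs, template, max_shift=8):
    """Brute-force search for the translation that best matches the template."""
    best, best_err = (0, 0), np.inf
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(obs, (dr, dc), axis=(0, 1)) - template) ** 2)
            if err < best_err:
                best, best_err = (dr, dc), err
    return best

def learn_template(observations, n_keep=64, wavelet="haar", n_iters=3):
    """Alternate alignment, averaging, and wavelet-domain sparsification."""
    template = np.mean(observations, axis=0)
    for _ in range(n_iters):
        aligned = [np.roll(o, best_integer_shift(o, template), axis=(0, 1))
                   for o in observations]
        template = np.mean(aligned, axis=0)
        # Sparsify: keep only the n_keep largest wavelet coefficients
        # (a crude stand-in for MDL-based coefficient selection).
        coeffs = pywt.wavedec2(template, wavelet)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thresh = np.sort(np.abs(arr).ravel())[-n_keep]
        arr[np.abs(arr) < thresh] = 0.0
        coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedec2")
        template = pywt.waverec2(coeffs, wavelet)
    return template
```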
In this paper, we present an unsupervised scheme aimed at segmentation of laser radar (LADAR) imagery for Automatic Target Detection. A coding theoretic approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. MDL is used to penalize overly complex segmentations. The intensity data is modeled as a Gaussian random field whose mean and variance functions are piecewise constant across the image. This model is intended to capture variations in both mean value (intensity) and variance (texture). The segmentation algorithm is based on an adaptive rectangular recursive partitioning scheme. We implement a robust constant false alarm rate (CFAR) detector on the segmented intensity image for target detection and compare our results with the conventional cell averaging (CA) CFAR detector.
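For concreteness, a minimal cell-averaging CFAR detector of the kind used as the baseline above is sketched below: the local clutter level at each pixel is estimated from a ring of training cells (excluding guard cells), and a detection is declared when the cell under test exceeds a multiple of that estimate. The window sizes and threshold factor are illustrative choices.

```python
# Minimal 2-D cell-averaging CFAR detector sketch.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, train=8, guard=2, scale=3.0):
    """Return a boolean detection map from an intensity image."""
    outer = 2 * (train + guard) + 1
    inner = 2 * guard + 1
    # Sum over the full window minus the guard window = training-cell sum.
    sum_outer = uniform_filter(intensity, outer) * outer ** 2
    sum_inner = uniform_filter(intensity, inner) * inner ** 2
    n_train = outer ** 2 - inner ** 2
    clutter = (sum_outer - sum_inner) / n_train     # local clutter estimate
    return intensity > scale * clutter              # detections
```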
In this paper, a coding theoretic approach is presented for the unsupervised segmentation of SAR images. The approach implements Rissanen's concept of Minimum Description Length (MDL) for estimating piecewise homogeneous regions. Our image model is a Gaussian random field whose mean and variance functions are piecewise constant across the image. The model is intended to capture variations in both mean value (intensity) and variance (texture). We adopt a multiresolution/progressive encoding approach to this segmentation problem and use MDL to penalize overly complex segmentations. We develop two different approaches both of which achieve fast unsupervised segmentation. One algorithm is based on an adaptive (greedy) rectangular recursive partitioning scheme. The second algorithm is based on an optimally-pruned wedgelet-decorated dyadic partition. We present simulation results on SAR data to illustrate the performance obtained with these segmentation techniques.
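The sketch below illustrates the MDL comparison at the heart of such a scheme under the piecewise Gaussian model: a two-part code length (the data cost at the maximum-likelihood mean and variance, plus a parameter cost) is computed for a region, and a greedy split is accepted only when it shortens the total description. The exact code-length bookkeeping in the paper may differ; this is only an illustration.

```python
# Two-part MDL cost for a homogeneous Gaussian region, and a greedy split test.
import numpy as np

def region_code_length(pixels, n_params=2):
    """Approximate description length (in nats) of a region with constant mean/variance."""
    n = pixels.size
    var = max(pixels.var(), 1e-12)
    data_cost = 0.5 * n * np.log(2 * np.pi * np.e * var)  # -log likelihood at the MLE
    param_cost = 0.5 * n_params * np.log(n)               # cost of encoding mean and variance
    return data_cost + param_cost

def should_split(region, axis=0):
    """Greedy MDL test: does splitting the region in half shorten the code?"""
    half = region.shape[axis] // 2
    a, b = np.split(region, [half], axis=axis)
    return region_code_length(a) + region_code_length(b) < region_code_length(region)
```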
Recently the authors introduced a general Bayesian statistical method for modeling and analysis in linear inverse problems involving certain types of count data. Emission tomography in medical imaging is a particularly important and common example of this type of problem. In this paper we provide an overview of the methodology and illustrate its application to problems in emission tomography through a series of simulated and real-data examples. The framework rests on the special manner in which a multiscale representation based on recursive dyadic partitions interacts with the statistical likelihood of data with Poisson noise characteristics. In particular, the likelihood function permits a factorization, with respect to location-scale indexing, analogous to the manner in which, say, an arbitrary signal admits a wavelet transform. Recovery of an object from tomographic data is then posed as a problem involving the statistical estimation of a multiscale parameter vector. A type of statistical shrinkage estimation is used, induced by careful choice of a Bayesian prior probability structure for the parameters. Finally, the ill-posedness of the tomographic imaging problem is accounted for by embedding the above-described framework within a larger, but simpler, statistical estimation problem via the so-called Expectation-Maximization approach. The resulting image reconstruction algorithm is iterative in nature, entailing the calculation of two closed-form algebraic expressions at each iteration. Convergence of the algorithm to a unique solution, under appropriate choice of Bayesian prior, can be assured.
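Without the multiscale Bayesian prior, the EM iteration for a Poisson linear inverse problem y ~ Poisson(A lambda) reduces to the classical ML-EM (Richardson-Lucy) update sketched below; the paper's algorithm interleaves this data-fidelity step with a multiscale shrinkage step that is omitted here.

```python
# Classical ML-EM (Richardson-Lucy) update for y ~ Poisson(A @ lam),
# with A having nonnegative entries. No prior/shrinkage step included.
import numpy as np

def ml_em(y, A, n_iters=50):
    """Maximum-likelihood EM reconstruction for Poisson count data."""
    lam = np.full(A.shape[1], y.mean() / max(A.sum(axis=0).mean(), 1e-12))
    sens = A.sum(axis=0)                        # column sums: A^T 1
    for _ in range(n_iters):
        ratio = y / np.maximum(A @ lam, 1e-12)  # elementwise data / model ratio
        lam = lam * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam
```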
Despite their success in other areas of statistical signal processing, current wavelet-based image models are inadequate for modeling patterns in images, due to the presence of unknown transformations inherent in most pattern observations. In this paper we introduce a hierarchical wavelet-based framework for modeling patterns in digital images. This framework takes advantage of the efficient image representations afforded by wavelets, while accounting for unknown pattern transformations. Given a trained model, we can use this framework to synthesize pattern observations. If the model parameters are unknown, we can infer them from labeled training data using TEMPLAR, a novel template learning algorithm with linear complexity. TEMPLAR employs minimum description length complexity regularization to learn a template with a sparse representation in the wavelet domain. We illustrate template learning with examples, and discuss how TEMPLAR applies to pattern classification and denoising from multiple, unaligned observations.
We present a new approach to SAR image segmentation based on a Poisson approximation to the SAR amplitude image. It has been established that SAR amplitude images are well approximated using Rayleigh distributions. We show that, with suitable modifications, we can model piecewise homogeneous regions (such as tanks, roads, scrub, etc.) within the SAR amplitude image using a Poisson model that bears a known relation to the underlying Rayleigh distribution. We use the Poisson model to generate an efficient tree-based segmentation algorithm guided by the minimum description length (MDL) criterion. We present a simple fixed-tree approach, and a more flexible adaptive recursive partitioning scheme. The segmentation is unsupervised, requiring no prior training, and is very simple, efficient, and effective for identifying possible regions of interest (targets). We present simulation results on MSTAR clutter data to demonstrate the performance obtained with these segmentation techniques.