We consider herein the matching of two graphs representing road networks. This problem is embedded in a labeling framework: one graph is taken as a reference, and a Gibbsian model is proposed to label the other graph. The labels are defined by the nodes of the second graph. The potentials are defined by the angles between nodes and the lengths of the associated features; the model is therefore invariant under translation and rotation. We apply this model to match a road network extracted from a SPOT image to the road network of a cartographic database. This matching provides useful information for map updating.
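As an illustration, here is a minimal Python sketch of how a Gibbs energy of this kind could be scored. The graph representation, the `triples` of adjacent edges, and the weights are hypothetical, since the abstract only specifies that the potentials involve angles and lengths:

```python
import numpy as np

def angle_and_length(g, i, j):
    """Direction and length of the segment joining nodes i and j of
    graph g, given as a dict mapping node id -> (x, y)."""
    dx, dy = np.subtract(g[j], g[i])
    return np.arctan2(dy, dx), np.hypot(dx, dy)

def labeling_energy(ref, g, labeling, triples, w_ang=1.0, w_len=1.0):
    """Gibbs energy of a candidate labeling (node of g -> node of ref).
    For each path i-j-k in g, the potential compares the turning angle
    at j and the edge lengths with their counterparts in the reference
    graph; both quantities are translation and rotation invariant."""
    E = 0.0
    for i, j, k in triples:
        a1, l1 = angle_and_length(g, j, i)
        a2, l2 = angle_and_length(g, j, k)
        b1, m1 = angle_and_length(ref, labeling[j], labeling[i])
        b2, m2 = angle_and_length(ref, labeling[j], labeling[k])
        d_ang = np.angle(np.exp(1j * ((a2 - a1) - (b2 - b1))))  # wrapped
        E += w_ang * d_ang**2 + w_len * ((l1 - m1)**2 + (l2 - m2)**2)
    return E
```

A lower energy indicates a more consistent matching; a sampler or relaxation scheme over labelings would then minimize it.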
General frequency-modulated signals can be used to characterize many vibrations in dynamic environments, with applications to engine monitoring and sonar. Most work on parameter estimation of such signals assumes knowledge of the number of carrier frequencies present in the signal. In this paper, we make no such assumption and use Bayesian techniques to address jointly the problems of model selection and parameter estimation. Following the work of Andrieu and Doucet, who addressed joint Bayesian model selection and parameter estimation for non-modulated sinusoids in white Gaussian noise, a posterior distribution for the parameters and model order is obtained. This distribution is too complicated to evaluate analytically, so we use a reversible jump Markov chain Monte Carlo algorithm to draw samples from the distribution. Simulated examples are presented to illustrate the algorithm's performance.
Non-invasive imaging based on wave scattering remains a difficult problem in those cases where the forward map can only be adequately simulated by solving the appropriate partial differential equation. We develop a method for solving linear PDEs which is efficient and exact, trading off computation time against storage requirements. The method is based on using the present solution within the Woodbury formula to update solutions after changes in the trial image, or state. Hence the method merges well with typical Metropolis-Hastings algorithms using localized updates. The scaling of the method as a function of image size and measurement-set size is given. We conclude that this method is considerably more efficient than earlier algorithms we have used to demonstrate sampling for inverse problems in this class.
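The low-rank update at the heart of this idea is easy to state in code. A minimal numpy sketch, assuming the discretized PDE operator A changes by U C V when a few pixels of the trial image change (the matrices below are random stand-ins):

```python
import numpy as np

def woodbury_update(A_inv, U, C, V):
    """Return (A + U C V)^{-1} given A^{-1}, via the Woodbury identity.
    Cheap when the rank of the update (columns of U) is small, as for a
    localized change to a few pixels of the trial image."""
    S = np.linalg.inv(C) + V @ A_inv @ U          # small k x k system
    return A_inv - A_inv @ U @ np.linalg.solve(S, V @ A_inv)

# Usage sketch: A stands in for the discretized PDE operator.
rng = np.random.default_rng(0)
n, k = 200, 3
A = np.eye(n) + 0.01 * rng.standard_normal((n, n))
A_inv = np.linalg.inv(A)
U = rng.standard_normal((n, k))
V = rng.standard_normal((k, n))
C = np.eye(k)
err = np.abs(woodbury_update(A_inv, U, C, V) - np.linalg.inv(A + U @ C @ V)).max()
print(f"max deviation from direct inverse: {err:.2e}")
```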
Wavelet-domain hidden Markov models have proven to be useful tools for statistical signal and image processing. The hidden Markov tree (HMT) model captures the key features of the joint density of the wavelet coefficients of real-world data. One potential drawback of the HMT framework is the need for computationally expensive iterative training. In this paper, we propose two reduced-parameter HMT models that capture the general structure of a broad class of real-world images. In the image HMT (iHMT) model we use the fact that for a large class of images the structure of the HMT is self-similar across scale. This allows us to reduce the complexity of the iHMT to just nine easily trained parameters. In the universal HMT (uHMT) we take a Bayesian approach and fix these nine parameters; the uHMT requires no training of any kind. While simple, these two new models retain nearly all of the key structure modeled by the full HMT, as we show in a series of image estimation/denoising experiments. Finally, we propose a fast shift-invariant HMT estimation algorithm that outperforms all other wavelet-based estimators in the current literature, in both mean-square error and visual metrics.
Frequency-domain diffusion imaging is a new imaging modality which uses the magnitude and phase of modulated light propagated through a highly scattering medium to reconstruct an image of the scattering and/or absorption coefficient in the medium. In this paper, the inversion algorithm is formulated in a Bayesian framework and an efficient optimization technique is presented for calculating the maximum a posteriori image. Numerical results show that the Bayesian framework with the new optimization scheme outperforms conventional approaches in both speed and reconstruction quality.
In the context of image restoration, optimal binary openings estimate an ideal random set from an observed random set. If we consider optimization relative to a homothetic scalar that governs structuring-element size, then opening optimization can be placed in the context of optimal granulometric bandpass filters, and the solution of the optimization problem for the signal-union-noise model can be given in terms of the granulometric spectral densities (GSDs) of the signal and noise. The robustness question arises if the signal and noise GSDs are parameterized, so that the model can assume a family of states: specifically, what is the cost of applying an optimal opening designed for one pair of GSDs to a model corresponding to a different pair of GSDs? This paper addresses the robustness problem in the context of a prior distribution on the parameters governing the signal and noise GSDs. It does so by considering the mean robustness, defined for each state of nature as the expected increase in error resulting from using the optimal opening for that state across all states. Moreover, it considers a global filter that is defined for all states via the expected optimal homothetic scalar. Finally, it compares Bayesian robust openings to minimax robust openings.
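For intuition, a granulometry by openings with homothetically scaled structuring elements can be sketched as below. The pattern spectrum computed here is the discrete size distribution whose derivative underlies the GSD; the disk-shaped structuring element is an assumption for illustration:

```python
import numpy as np
from scipy import ndimage

def disk(r):
    """Binary disk structuring element of radius r."""
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    return x * x + y * y <= r * r

def pattern_spectrum(image, radii):
    """Mass removed by each opening in a granulometry indexed by the
    homothetic scalar r (radii assumed increasing); normalizing gives
    the granulometric size distribution underlying the GSD."""
    areas = [image.sum()] + [
        ndimage.binary_opening(image, structure=disk(r)).sum()
        for r in radii
    ]
    return -np.diff(np.asarray(areas, dtype=float))
```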
In many experiments, Bernoulli trials are conducted to estimate the probability of an event of interest. The outcomes of the trials are usually known without error; that is, we know with certainty whether or not the event occurred. Estimation of the probability of the event then proceeds along lines that can be found in standard textbooks on probability. The problem becomes more complicated if the data obtained in a Bernoulli experiment carry uncertainty about the event of interest. We call such trials imperfect Bernoulli trials, as opposed to perfect trials, where the outcomes of the experiment are known without error. Probability estimation in the case of imperfect trials must be modified to take the uncertainties into account. A complete Bayesian procedure is developed for this purpose. It provides an update formula for the posterior density of the probability of interest as data from new trials are obtained. This work has been motivated by studies in neurophysiology, where large sets of patch-clamp recordings of synaptic currents are processed to estimate the probability of a synaptic event. As an example, we present an application of the methodology to simulated synaptic currents.
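A grid-based version of such an update formula is straightforward to sketch. In the snippet below (the trial likelihoods are hypothetical numbers), each imperfect trial contributes p(data | event) and p(data | no event), and a perfect trial is the special case where one of the two vanishes:

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 501)       # grid over the event probability
d = theta[1] - theta[0]

def update(prior_pdf, like_event, like_noevent):
    """One imperfect-trial Bayes update of the density of theta. The new
    trial enters only through p(data | event) and p(data | no event)."""
    post = prior_pdf * (theta * like_event + (1.0 - theta) * like_noevent)
    return post / (post.sum() * d)        # renormalize on the grid

pdf = np.ones_like(theta)                 # flat prior
for l1, l0 in [(0.9, 0.2), (0.1, 0.7), (0.8, 0.3)]:  # hypothetical trials
    pdf = update(pdf, l1, l0)
print("posterior mean:", (theta * pdf).sum() * d)
```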
Algorithms for object segmentation are crucial in many image processing applications. In past years, active contour models have been widely used for finding the contours of objects. This segmentation strategy is classically edge-based, in the sense that the snake is driven to fit the maximum of an edge map of the scene. We have recently proposed a region-based snake approach, which can be implemented using a fast algorithm, to segment an object in an image. The algorithms, optimal in the maximum likelihood sense, are based on the calculation of the statistics of the inner and outer regions, and can thus be adapted to the different kinds of random fields that may describe the input image. In this paper our aim is to study this approach for tracking applications in optronic images. We first show the relevance of using a priori information on the statistical laws of the input image in the case of Gaussian statistics, which are well adapted to describing optronic images when a whitening preprocessing is used. We then characterize the performance of the fast-algorithm implementation of this approach and apply it to tracking. The efficiency of the proposed method is demonstrated on real image sequences.
We present in this paper a stochastic approach to indoor scene exploration. A major problem in this context is to find and characterize appropriate local features for performing the analysis at the object level. In the present application these features are essentially interest points such as corners and vertices. The strategies we have defined are based on directly processing images and collecting evidence to support a specified hypothesis about a scene. This is managed efficiently by using a probabilistic approach to selecting and grouping fixation points. As opposed to classical solutions, we are interested in processing only useful information, and we look for this information guided by the local observations in the grouping process.
The sparseness and decorrelation properties of the discrete wavelet transform have been exploited to develop powerful denoising methods. Most schemes use arbitrary thresholding nonlinearities with ad hoc parameters, or employ computationally expensive adaptive procedures. We overcome these deficiencies with a new wavelet-based denoising approach that is a step towards objective Bayesian wavelet-based denoising. The result is a remarkably simple, fixed nonlinear shrinkage/thresholding rule which performs better than other, more computationally demanding methods.
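For comparison, the classical fixed rule of this type, the universal soft threshold, can be written in a few lines with PyWavelets. The paper's own Bayesian rule differs, so treat this only as a sketch of the "simple fixed shrinkage rule" idea:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold detail coefficients at the universal threshold
    sigma * sqrt(2 log n); sigma is estimated from the finest-scale
    coefficients by the usual median/0.6745 rule."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)

# Usage: denoise a noisy piecewise-constant test signal.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1024)
clean = np.sign(np.sin(8 * np.pi * t))
denoised = wavelet_denoise(clean + 0.3 * rng.standard_normal(t.size))
```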
Among the many handwritten character recognition algorithms that have been proposed, few use models which are able to simulate handwriting. This can be explained by the fact that simulation requires the estimation of strokes starting from static images of letters, while crossing and overlapping strokes make this estimation difficult. In this paper an algorithm to extract overlapping strokes, one that optimizes the reconstruction of crossings in the image, is described, and a stochastic model of off-line handwritten letter deformation for handwritten letter recognition is presented.
In this paper we present a spatiotemporal energy-based method to estimate motion in image sequences. A directional energy is defined in terms of the 1D Hermite transform coefficients of Radon projections. The Radon transform provides a suitable representation for image orientation analysis, while the Hermite transform describes image features locally in terms of Gaussian derivatives. These operators have long been used in computer vision for feature extraction and are relevant in visual system modeling. Here it is shown that the cascaded Radon-Hermite transform is readily computed as a linear mapping of the 3D Hermite transform coefficients through some steering functions. A directional response defined from the directional energy is used to estimate local motion of 1D and 2D patterns as well as to compute an uncertainty matrix. This matrix provides a confidence measure for our estimate, and it is used to propagate the velocity information toward directions with high uncertainty. Practical considerations and experimental results are also presented.
Recently, we presented a very fast method (BSREM) for solving regularized problems in emission tomography that is convergent to maximum a posteriori (MAP) solutions for convex priors. The method generalizes in a natural way the expectation maximization (EM) algorithm and some of its extensions to the MAP case. It consists of decomposing the likelihood function into blocks containing sets of projections, plus a block that corresponds to the prior function, and iterating over the blocks using scaled gradient directions, resembling a smoothed iteration algorithm. In the general nonconvex case, it can be proven that the algorithm converges to a critical point instead of the sought global maximum. In spite of this, BSREM is fast and flexible enough to be used to search for global optima when the model is not convex. In this article, we present an implementation of BSREM that works as the local optimization method at each step of a graduated convexity approach, compared with BSREM itself without any convexification. We illustrate the behavior of the method with applications to emission tomography.
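To convey the flavor of the block iteration (without BSREM's prior block, relaxation schedule, or convergence safeguards), here is a simplified block-iterative ML-EM sketch; the blocks, system matrices, and iteration counts are placeholders:

```python
import numpy as np

def block_em(y_blocks, A_blocks, x0, n_iter=10, eps=1e-12):
    """Simplified block-iterative ML-EM update (the kind of iteration
    BSREM generalizes): cycle over projection blocks, each applying a
    multiplicative, scaled-gradient correction to the image x."""
    x = x0.copy()
    for _ in range(n_iter):
        for y_b, A_b in zip(y_blocks, A_blocks):
            ratio = y_b / (A_b @ x + eps)              # data / forward projection
            x *= (A_b.T @ ratio) / (A_b.sum(axis=0) + eps)
    return x
```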
We describe a Bayesian PET reconstruction method that incorporates an image prior model with mixed continuity constraints. In this paper we concentrate on imaging the brain, which we assume can be partitioned into four tissue classes: gray matter, white matter, cerebrospinal fluid, and partial volume (PV). Each PV voxel is assumed to be an arbitrary combination of neighboring pure tissues. The PET image is then modeled as a piecewise smooth function through a Gibbs prior; we assume that the image intensity of each homogeneous tissue region or partial volume region is governed by a thin-plate energy function. We apply first- and second-order edge detection techniques to estimate region boundaries, and then categorize these boundaries based on the tissue types adjacent to each boundary. Rather than use binary processes to represent region boundaries, as in the weak-plate model, we adopt a controlled-continuity approach to influence boundary formation. The rationale is that while first-order edge detection can capture the jumps between two different pure regions, second-order detection can capture the crease connecting a pure region to a partial volume region. As we transition from homogeneous to partial volume regions, we enforce zeroth-order continuity. Discontinuities in intensity are allowed only at transitions between two different homogeneous regions. We refer to this model as a modified weak-plate model with controlled continuity. We present the results of a computer-simulated phantom study in which partial volume effects are explicitly modeled. Results indicate that we obtain superior region-of-interest quantitation using this approach in comparison to a partial volume correction method that has previously been proposed for quantitation using filtered back-projection images.
We consider the estimation of a 3D compact homogeneous object from a small number of its gamma-ray tomographic projections. This problem is encountered in nondestructive testing applications, in which the number of projections is very limited. We model the shape by a deformable polyhedron, which we estimate directly from the data. The coordinates of the vertices of the polyhedral shape are modeled as a first-order vectorial Markov random field and estimated in the Bayesian MAP estimation framework. The energy functional is not convex, hence its minimization requires a stochastic scheme. To reduce the computational cost of the estimate and to propose a practical method, a multiresolution approach is considered in which the number of vertices of the polyhedron is increased as the resolution becomes finer and finer. The algorithm is initialized with a convex polyhedron at a very coarse resolution. Then, at each finer resolution, the Bayesian criterion is optimized in the neighborhood of the solution obtained previously. Simulation results in 2D and 3D illustrate the performance of the proposed method.
Emission computed tomography (ECT) is widely applied in medical diagnostic imaging, especially to determine physiological function. The available set of measurements is, however, often incomplete and corrupted, and the quality of image reconstruction is enhanced by the computation of a statistically optimal estimate. We present here a numerical method of ECT image reconstruction based on a Taylor series quadratic approximation to the usual Poisson log-likelihood function. The quadratic approximation yields simplifications in understanding and manipulating Poisson models. We introduce an algorithm, similar to global Newton methods, which updates the point of expansion a limited number of times, and we give quantitative measures of the accuracy of the reconstruction. The results show little difference in quality from those obtained with the exact Poisson model.
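Concretely, with per-measurement log-likelihood l(yhat) = y log(yhat) - yhat for yhat = Ax, the quadratic approximation and the resulting Newton update can be sketched as follows (nonnegativity constraints and the paper's global-Newton safeguards are omitted):

```python
import numpy as np

def poisson_quad_approx(y, yhat0):
    """Second-order Taylor expansion of l(yhat) = y*log(yhat) - yhat
    about the working point yhat0: value, gradient, and curvature."""
    val = y * np.log(yhat0) - yhat0
    grad = y / yhat0 - 1.0
    curv = -y / yhat0**2          # always negative: concave model
    return val, grad, curv

def newton_step(A, x, y):
    """One Newton update for the quadratically approximated likelihood."""
    yhat = A @ x
    _, g, c = poisson_quad_approx(y, yhat)
    grad = A.T @ g                       # gradient w.r.t. the image x
    H = A.T @ (c[:, None] * A)           # Hessian of the quadratic model
    return x - np.linalg.solve(H, grad)  # ascent step (H negative definite)
```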
Penalized-likelihood methods using Bayesian smoothing priors have formed the core of the development of reconstruction algorithms for emission tomography. In particular, there has been considerable interest in edge-preserving prior models, which are associated with smoothing penalty functions that are nonquadratic functions of nearby pixel differences. Our early work used a higher-order nonconvex prior that imposed piecewise smoothness on the first derivative of the solution, achieving results superior to those obtained using a conventional nonconvex prior that imposed piecewise smoothness on the zeroth derivative. In spite of several advantages of the higher-order model (the weak plate), its use in routine applications has been hindered by several factors, such as the computational expense due to the nonconvexity of its penalty function and the difficulty of selecting the hyperparameters involved in the model. We note that, by choosing a penalty function which is nonquadratic but still convex, both the nonconvexity of some nonquadratic priors and the oversmoothing of edge regions by quadratic priors may be avoided. In this paper, we use a class of 2D smoothing splines with first and second spatial derivatives in convex nonquadratic penalty functions. To evaluate edge-preserving ability, we quantify bias/variance and total squared error over noise trials using the Monte Carlo method. Our experimental results show that a linear combination of low and high orders of spatial derivatives in convex nonquadratic penalty functions improves the reconstruction in terms of total squared error.
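A convex nonquadratic penalty of the kind discussed here can be illustrated with the Huber function applied to first- and second-order differences; the weights and threshold below are hypothetical, and the paper's smoothing-spline formulation is richer than this sketch:

```python
import numpy as np

def huber(t, delta):
    """Convex, nonquadratic penalty: quadratic near 0, linear in the
    tails, so edges are penalized less severely than by a quadratic."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def roughness_penalty(img, w1=1.0, w2=0.5, delta=0.1):
    """Combine first-order (membrane) and second-order (plate) finite
    differences under the Huber penalty."""
    dx = np.diff(img, axis=1)
    dy = np.diff(img, axis=0)
    dxx = np.diff(img, n=2, axis=1)
    dyy = np.diff(img, n=2, axis=0)
    first = huber(dx, delta).sum() + huber(dy, delta).sum()
    second = huber(dxx, delta).sum() + huber(dyy, delta).sum()
    return w1 * first + w2 * second
```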
We present a new method which we call the method of photon average trajectories (PAT). This method provides image reconstruction in a real-time operation mode, obtaining images of super-resolution quality. It is shown that time-resolved solutions of the unsteady-state radiation transfer and diffusion equations permit one to separate out, in an explicit form, the distribution function P for the probability density of signal passage through various internal points of the studied body while the signal propagates from a source point to a detector point. The function P has the characteristic form of Bayes' formula. Our analysis has allowed us to establish a number of generalized rules for the analytical derivation of the function P for highly scattering bodies of arbitrary shape and for different measurement conditions. It is also shown that the shadows at the body surface induced by internal macro-inhomogeneities can be represented in terms of a trajectory integral along the PAT. This makes optical tomography using multiply scattered light similar to conventional computed tomography. In this representation, the integrand is the generalized distribution function of internal macro-inhomogeneities, averaged over the instantaneous values of the distribution P and normalized to the relative velocity of the movement of the center of the distribution P along the PAT.
Huber's approach to robust estimation is highly fruitful for solving estimation problems with contaminated data or with incomplete information about the error structure. A simple selection procedure, based on robustness to deviations of the error distribution from the assumed one, is proposed. A minimax M-estimator is used to estimate efficiently the parameters and the measured quantity. A performance-deviation criterion is computed by means of the Monte Carlo method, improved by Latin hypercube sampling. The selection procedure is applied to a real measurement problem: groove dimensioning using remote-field eddy-current inspection.
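The minimax flavor of Huber's M-estimation is easy to demonstrate for a location parameter via iteratively reweighted least squares; the tuning constant k = 1.345 (the standard choice for roughly 95% efficiency at the Gaussian model) and the toy data are illustrative only:

```python
import numpy as np

def huber_location(x, k=1.345, n_iter=50):
    """Huber M-estimate of location via iteratively reweighted least
    squares, with a robust MAD scale estimate."""
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745
    for _ in range(n_iter):
        r = (x - mu) / scale
        w = np.ones_like(r)               # quadratic zone: weight 1
        big = np.abs(r) > k
        w[big] = k / np.abs(r[big])       # linear tails: downweighted
        mu = np.sum(w * x) / np.sum(w)
    return mu

rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(5.0, 1.0, 95), np.full(5, 50.0)])
print(huber_location(data))               # near 5 despite 5% gross outliers
```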
A measurement can be defined as the best way to take advantage of the information given by the observed data. For this purpose, a natural formalism, based on a pair of fundamental equations, is presented. The model, which describes the physical phenomenon, should in that sense be wisely built. The parameterization of the model structure is proved to have no effect on the statistical properties of the measurement. A reparameterization of the model structure can therefore be a way of optimizing the inversion process. An optimal reparameterization can be described; it leads to the inversion of a nonlinear system which cannot be solved in closed form. To avoid this problem, a new suboptimal method is proposed. The results are exemplified on an eddy-current nondestructive testing problem.
In this paper, we address the issue of sign language indexation/recognition. Existing tools, such as on-line Web dictionaries and other education-oriented applications, make exclusive use of textual annotations. However, keyword indexing schemes have strong limitations due to the ambiguity of natural language and to the huge effort needed to manually annotate a large amount of data. In order to overcome these drawbacks, we tackle the sign language indexation issue within the MPEG-7 framework and propose an approach based on linguistic properties and characteristics of sign language. The method introduces the concept of a hand configuration stable over time, instantiated on natural or synthetic prototypes. The prototypes are indexed by means of a shape descriptor defined as a translation-, rotation- and scale-invariant Hough transform. A very compact representation is obtained by taking the Fourier transform of the Hough coefficients. The approach has been applied to two data sets consisting of 'Letters' and 'Words', respectively. The accuracy and robustness of the results are discussed, and a complete sign language description schema is proposed.
In this paper, we propose an original statistical method for sea-floor segmentation and its classification into five kinds of regions: sand, pebbles, rocks, ridges and dunes. The proposed method is based on the identification of the cast-shadow shapes for each sea-bottom type and consists of four processing stages. First, the input image is segmented into two kinds of regions: shadow and sea-bottom reverberation. Second, the image of the contours of the detected cast shadows is partitioned into sub-windows, from which a relevant geometrical feature vector is extracted. A pre-classification by a fuzzy classifier is then used to initialize the third stage of processing. Finally, a Markov random field model is employed to specify homogeneity properties of the desired segmentation map. A Bayesian estimate of this map is computed using a deterministic relaxation algorithm. Reported experiments demonstrate that the proposed approach yields promising results for the problem of sea-floor classification.
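The deterministic relaxation stage can be illustrated with iterated conditional modes (ICM) under a Potts-type homogeneity prior; the unary costs, 4-neighborhood, and beta below are placeholders, since the abstract does not fix the exact MRF energy:

```python
import numpy as np

def icm(labels, unary, beta=1.0, n_sweeps=5):
    """Iterated conditional modes: deterministic relaxation toward a MAP
    labeling under a Potts MRF prior. unary[i, j, k] is the data cost of
    class k at pixel (i, j); labels is the initial (e.g. fuzzy) map."""
    lab = labels.copy()
    H, W, K = unary.shape
    for _ in range(n_sweeps):
        for i in range(H):
            for j in range(W):
                neigh = [lab[a, b]
                         for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < H and 0 <= b < W]
                costs = [unary[i, j, k] + beta * sum(k != n for n in neigh)
                         for k in range(K)]
                lab[i, j] = int(np.argmin(costs))
    return lab
```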
In this paper, we present a progressive classification scheme for a document layout recognition system with three stages. The first stage, preprocessing, extracts statistical information that may be used for background detection and removal. The second stage, a tree-based classifier, uses a variable block size and a set of probabilistic rules to classify segmented blocks independently. The third stage, postprocessing, uses the label map generated in the second stage together with a set of context rules to label unclassified blocks, also trying to resolve some of the misclassification errors that may have been generated during the previous stage. The progressive scheme used in the second and third stages allows the user to stop the classification process at any block size, depending on the user's requirements. Experiments show that the progressive scheme, combined with a set of postprocessing rules, increases the percentage of correctly classified blocks and reduces the number of block computations.
A scheme for comparative performance analysis of the Bayesian and Bhattacharyya-distance RCE neural network classifiers is presented. The experiments are performed on synthetic and Brodatz textures. The introduction of the new classifier aims at obtaining better performance in classifying non-stationary multi-texture images. The two classification schemes are assessed on their localized data representation with regard to the ability to extract non-stationary information from the image. A low-resolution data representation is used to reduce the instability produced in the search for a better trade-off between accuracy and spatial classification performance.
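For reference, the Bhattacharyya distance that drives the second classifier has a closed form for Gaussian class models; a minimal numpy version (the Gaussian assumption is ours for illustration, since the abstract does not state the class model explicitly):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two multivariate Gaussians:
    D = 1/8 (mu2-mu1)^T S^-1 (mu2-mu1) + 1/2 ln(|S| / sqrt(|C1||C2|)),
    with S = (C1 + C2) / 2."""
    S = 0.5 * (np.asarray(cov1) + np.asarray(cov2))
    d = np.asarray(mu2) - np.asarray(mu1)
    term1 = 0.125 * d @ np.linalg.solve(S, d)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2
```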
The performance of the Metropolis Monte Carlo (MMC) and Frieden's Monte Carlo (FMC) deconvolution techniques is compared and analyzed in the presence of noise. Two different Gaussian-distributed additive noise data sets, with signal-to-noise ratio (SNR) ranging from 10 to 150, are generated and added to a set of blurred data. The blurred data are obtained by convolving a 24-point input signal that has three peaks with a 21-point-wide Gaussian impulse response function. The mean squared error (MSE) is used to compare the two techniques; it is calculated by comparing the reconstructed input signal with the true input signal. The MSEs calculated for each SNR of a given data set are averaged, and the averaged MSEs for the MMC and FMC techniques are plotted vs. SNR. Results clearly show that the MMC method is less sensitive to noise. The MSE of the blurred data reconstructed by MMC is also plotted vs. SNR. Finally, the input signals reconstructed by the MMC and FMC techniques are shown for an SNR of 30.
In this paper we propose a blind deconvolution method to enhance the resolution of images obtained by near-field microwave nondestructive techniques using an open-ended rectangular waveguide probe. We model such images as the result of a convolution of the true input images with a point spread function (PSF). This PSF depends mainly on the dimensions of the waveguide, the operating frequency, the nature of the object under test, and the standoff distance between the waveguide and the object. Unfortunately, it is very difficult to model this PSF from the physical data. For this reason, we treat the problem as blind deconvolution. The proposed method is based on regularization, and the solution is obtained iteratively, by alternating estimation of the input image and the PSF. The algorithm is initialized with a PSF obtained from a very simplified physical model. The performance of the proposed method is evaluated on real data, and several examples of real image enhancement are presented.
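A minimal sketch of the alternating scheme, using Tikhonov-regularized inverse filtering in the Fourier domain for both updates (the regularization weights and the simplified-model PSF are placeholders, and the paper's regularizers may differ):

```python
import numpy as np

def tikhonov_step(Y, K, alpha):
    """Regularized least-squares update in the Fourier domain."""
    return np.conj(K) * Y / (np.abs(K)**2 + alpha)

def blind_deconvolve(y, psf0, n_iter=20, alpha_x=1e-2, alpha_h=1e-1):
    """Alternate between estimating the image x (given the PSF h) and
    the PSF h (given x), each by regularized inverse filtering."""
    Y = np.fft.fft2(y)
    H = np.fft.fft2(psf0, s=y.shape)     # initial PSF from a crude model
    for _ in range(n_iter):
        X = tikhonov_step(Y, H, alpha_x)  # image update
        H = tikhonov_step(Y, X, alpha_h)  # PSF update
    return np.real(np.fft.ifft2(X)), np.real(np.fft.ifft2(H))
```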
The ill-posed problem of determining the aerosol distribution from a small number of backscatter and extinction lidar measurements was solved successfully via a hybrid method using a projection of variable dimension onto B-splines. Numerical simulation results with noisy data for different measurement situations show that it is possible to reconstruct the aerosol distribution with only four measurements.
Following a PDE-based formulation of low-level vision, recent works have attempted to cast classical mathematical morphology into the axiomatic framework of scale-space theory. This effort has led to the derivation of continuous elementary morphological operators and has revealed deep connections with the theory of reactive PDEs. Until now, researchers have focused their attention on Euclidean morphology. This article aims at setting up the foundations of differential geodesic mathematical morphology. Specifically, we define multiscale geodesic erosions and dilations, and derive their generating PDEs for arbitrary n-dimensional structuring sets or functions. Geodesic reconstruction then corresponds to steady states of these equations for particular initial conditions. Geodesic morphological operators are further embedded into a general class of one-parameter operator semigroups, called geodesic scale-space operators. Within this framework, regularized geodesic operators are defined in a natural fashion by augmenting the basic PDEs with a diffusive, scale-space-admissible component. Finally, efficient numerical implementations based on monotonic conservative schemes are presented in detail. These developments provide the theoretical basis for PDE-based formulations of watershed segmentation and geodesic skeleton computation.
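As a point of reference, the discrete analogue of the steady state mentioned here is classical grayscale reconstruction by geodesic dilation, which iterates a dilation followed by a pointwise minimum with the mask until stability; a scipy sketch (the flat 3x3 structuring element is an assumption):

```python
import numpy as np
from scipy import ndimage

def geodesic_reconstruction(marker, mask, size=3):
    """Grayscale reconstruction by dilation: iterate a geodesic dilation
    (dilation followed by pointwise minimum with the mask) to stability,
    the discrete counterpart of the PDE's steady state."""
    prev = marker.copy()
    while True:
        dil = ndimage.grey_dilation(prev, size=(size, size))
        cur = np.minimum(dil, mask)          # stay below the mask
        if np.array_equal(cur, prev):
            return cur
        prev = cur
```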
We introduce a new image texture segmentation algorithm, HMTseg, based on wavelet-domain hidden Markov tree (HMT) models. The HMT model is a tree-structured probabilistic graph that captures the statistical properties of wavelet coefficients. Since the HMT is particularly well suited to images containing singularities, it provides a good classifier for textures. Utilizing the inherent tree structure of the wavelet HMT and its fast training and likelihood computation algorithms, we perform multiscale texture classification at various scales. Since HMTseg works on the wavelet transform of the image, it can directly segment wavelet-compressed images, without the need for decompression. We demonstrate the performance of HMTseg with synthetic, aerial photo, and document image segmentations.
Photon-limited image analysis is often hindered by low signal-to-noise ratios. A novel Bayesian multiscale modeling and analysis method is developed in this paper to assist in these challenging situations. In addition to providing a very natural and useful framework for modeling and processing images, Bayesian multiscale analysis is often much less computationally demanding than classical Markov random field models. This paper focuses on a probabilistic graph model called the multiscale hidden Markov model (MHMM), which captures the key inter-scale dependencies present in natural image intensities. The MHMM framework presented here is specifically designed for photon-limited imaging applications involving Poisson statistics, and applications to image intensity analysis are examined.
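The multiscale Poisson statistics such a model is built on can be sketched as follows: parent counts are sums of child pairs, and, conditionally on a parent, the left child is binomial, so the likelihood factorizes across scales. The MHMM's hidden states (omitted here) would govern the splitting probabilities:

```python
import numpy as np

def multiscale_counts(counts):
    """Dyadic multiscale representation of a photon-count vector (length
    assumed a power of two): parents are sums of child pairs, and given a
    parent the left child is binomial. The observed splitting fractions
    computed here are the quantities an inter-scale model ties together."""
    pyramid = [np.asarray(counts)]
    while pyramid[-1].size > 1:
        c = pyramid[-1]
        pyramid.append(c[0::2] + c[1::2])
    splits = [np.divide(c[0::2], p, out=np.full(p.shape, 0.5), where=p > 0)
              for c, p in zip(pyramid[:-1], pyramid[1:])]
    return pyramid, splits
```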