There is a growing body of experimental evidence showing that human perception and cognition involve mechanisms that can be adequately modeled by pyramid algorithms. The main aspect of these mechanisms is hierarchical clustering of information: visual images, spatial relations, and states as well as transformations of a problem. In this paper we review prior psychophysical and simulation results on visual size transformation, size discrimination, speed-accuracy tradeoff, figure-ground segregation, and the traveling salesman problem. We also present our new results on graph search and on the 15-puzzle.
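For readers unfamiliar with pyramid algorithms, the following minimal sketch (ours, not the authors' cognitive model) shows the core operation they rely on: hierarchical clustering of an image by recursive 2x2 averaging.

```python
import numpy as np

def build_pyramid(image, levels):
    """Hierarchically cluster an image by recursive 2x2 block averaging.

    Each level halves the resolution, merging neighboring cells into
    coarser 'clusters' -- the basic operation of a pyramid algorithm."""
    pyramid = [image]
    for _ in range(levels):
        im = pyramid[-1]
        h, w = im.shape[0] // 2 * 2, im.shape[1] // 2 * 2
        im = im[:h, :w]
        coarse = 0.25 * (im[0::2, 0::2] + im[1::2, 0::2]
                         + im[0::2, 1::2] + im[1::2, 1::2])
        pyramid.append(coarse)
    return pyramid

rng = np.random.default_rng(0)
levels = build_pyramid(rng.random((64, 64)), 3)
print([p.shape for p in levels])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```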
A long-standing problem in vision research is the recovery of three-dimensional (3D) structure from two-dimensional (2D) images. Work on structure from motion has focused on recovering 3D structure from multiple views of feature points such as the vertices of a cube. Recent work on the perception of four-dimensional (4D) structures has prompted us to determine the circumstances under which 4D structure can be recovered from multiple views of feature points projected onto 2D images. We present a computational algorithm that solves this problem under three assumptions: (1) the correspondence of each feature point across different views is predetermined; (2) the 4D object undergoes a rigid motion; and (3) the projection from 4D space to 2D images is an orthographic (parallel) one. Four views of five points are required. The algorithm can be generalized to treat the recovery of nD structure from mD views (1≤m≤n). We give some results concerning the minimum number of points and views required to recover nD structure from mD views by this algorithm.
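The abstract does not spell out the algorithm itself; as a hedged illustration of the underlying principle, here is the classical rank-3 factorization for the 3D-from-2D orthographic base case, which the nD-from-mD recovery generalizes. The synthetic data and names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
P, V = 8, 5                          # feature points, views
X = rng.random((3, P))               # true 3D points

# Stack 2D orthographic projections of a rigidly rotating point cloud.
rows = []
for v in range(V):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random rigid rotation
    rows.append((Q @ X)[:2])         # orthographic projection: drop depth
W = np.vstack(rows)                  # measurement matrix, shape (2V, P)
W -= W.mean(axis=1, keepdims=True)   # remove per-view centroids

# Rigid motion plus orthographic projection implies rank(W) <= 3.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
print("singular values:", np.round(s, 6))  # only 3 are (numerically) nonzero

# Structure recovered up to a linear ambiguity; the metric upgrade
# (enforcing rotation constraints) is omitted in this sketch.
X_hat = np.diag(s[:3]) @ Vt[:3]
```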
One of the most important challenges in understanding expert perception is determining what information in a complex scene is most valuable (reliable) for a particular task, and how experts learn to exploit it. For the task of parameter estimation given multiple independent sources of data, Bayesian data fusion provides a solution to this problem that involves promoting data to a common parameter space and combining cues weighted by their reliabilities. For classification tasks, however, this approach must be modified to find the information that most reliably distinguishes between the categories. In this paper we discuss solutions to the problem of determining the task-dependent reliability of data sources, both objectively for a Bayesian decision agent and in terms of the reliability a human observer assigns, as inferred from the observer's performance. Modeling observers as Bayesian decision agents, solutions can be construed as a process of assigning credit to data sources based on their contribution to task performance. Applications of this approach to human perceptual data and to the analysis of fMRI data are presented.
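For the parameter-estimation case mentioned above, Bayesian fusion of independent Gaussian cues reduces to inverse-variance (reliability) weighting; a minimal sketch (a toy of ours, not the paper's fMRI analysis):

```python
import numpy as np

def fuse(estimates, variances):
    """Bayesian fusion of independent Gaussian cues.

    Each cue is weighted by its reliability (inverse variance); the
    fused variance is smaller than that of any single cue."""
    w = 1.0 / np.asarray(variances)
    fused_mean = np.sum(w * np.asarray(estimates)) / np.sum(w)
    fused_var = 1.0 / np.sum(w)
    return fused_mean, fused_var

# Two cues about the same parameter: a reliable one and a noisy one.
print(fuse([10.2, 13.0], [0.5, 4.0]))  # dominated by the reliable cue
```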
Since their early application to elliptic partial differential
equations, multigrid methods have been applied successfully to a
large and growing class of problems, from elasticity and
computational fluid dynamics to geodesy and molecular
structures. Classical multigrid begins with a two-grid process.
First, iterative relaxation is applied, whose effect is to smooth
the error. Then a coarse-grid correction is applied, in which the
smooth error is determined on a coarser grid. This error is
interpolated to the fine grid and used to correct the fine-grid
approximation. Applying this method recursively to solve the
coarse-grid problem leads to multigrid.
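The two-grid process just described can be made concrete for the 1D Poisson equation. The following is a minimal sketch assuming weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation; these are standard but specific choices of ours.

```python
import numpy as np

def poisson_matrix(n):
    """1D -u'' = f with Dirichlet boundaries, n interior points."""
    h2 = (n + 1) ** 2                       # 1/h^2
    return h2 * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def jacobi(A, u, f, sweeps, w=2/3):
    d = np.diag(A)
    for _ in range(sweeps):                 # weighted Jacobi smooths the error
        u = u + w * (f - A @ u) / d
    return u

def two_grid(A, u, f):
    n = len(f); nc = (n - 1) // 2           # n = 2*nc + 1
    u = jacobi(A, u, f, 3)                  # 1. relax: the error becomes smooth
    r = f - A @ u                           # 2. fine-grid residual
    rc = 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]   # full-weighting restriction
    ec = np.linalg.solve(poisson_matrix(nc), rc)          # coarse-grid correction
    pad = np.concatenate(([0.0], ec, [0.0]))
    e = np.zeros(n)                         # 3. interpolate correction to fine grid
    e[1::2] = ec
    e[0::2] = 0.5 * (pad[:-1] + pad[1:])
    return jacobi(A, u + e, f, 3)           # 4. correct, then post-smooth

n = 63                      # recursing on the coarse solve would give multigrid
A = poisson_matrix(n)
x = np.linspace(0, 1, n + 2)[1:-1]
f = np.pi**2 * np.sin(np.pi * x)            # exact solution: sin(pi x)
u = np.zeros(n)
for it in range(5):
    u = two_grid(A, u, f)
    print(it, np.linalg.norm(f - A @ u))    # residual drops fast per cycle
```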
The coarse-grid correction works because the residual equation is
linear. But this is not the case for nonlinear problems, and
different strategies must be employed. In this presentation we
describe how to apply multigrid to nonlinear problems. There are
two basic approaches. The first is to apply a linearization
scheme, such as Newton's method, and to employ multigrid for
the solution of the Jacobian system in each iteration.
The second is to apply multigrid directly to the nonlinear problem
by employing the so-called Full Approximation Scheme (FAS). In
FAS a nonlinear iteration is applied to smooth the error. The
full equation is solved on the coarse grid, after which the
coarse-grid error is extracted from the solution. This correction
is then interpolated and applied to the fine grid approximation.
We describe these methods in detail and present numerical
experiments that demonstrate their efficacy.
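A minimal FAS two-grid sketch for the toy nonlinear problem -u'' + u^3 = f follows; the smoother, coarse solver, and parameters are our own choices, not the presenters'. It reuses the grid-transfer operators from the linear sketch above.

```python
import numpy as np

def A_mat(n):                            # 1D Laplacian, Dirichlet boundaries
    return (n + 1)**2 * (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))

def N(u, A):                             # nonlinear operator N(u) = -u'' + u^3
    return A @ u + u**3

def smooth(u, f, A, sweeps=3, w=0.8):    # damped Jacobi-Newton relaxation
    d = np.diag(A)
    for _ in range(sweeps):
        u = u + w * (f - N(u, A)) / (d + 3*u**2)
    return u

def restrict(v):                         # full weighting
    return 0.25*v[0:-2:2] + 0.5*v[1:-1:2] + 0.25*v[2::2]

def prolong(vc):                         # linear interpolation
    pad = np.concatenate(([0.0], vc, [0.0]))
    v = np.zeros(2*len(vc) + 1)
    v[1::2] = vc
    v[0::2] = 0.5*(pad[:-1] + pad[1:])
    return v

def fas_two_grid(u, f, A, Ac):
    u = smooth(u, f, A)                  # nonlinear pre-smoothing
    uc = u[1::2]                         # restrict the current approximation
    fc = N(uc, Ac) + restrict(f - N(u, A))   # FAS coarse right-hand side
    vc = uc.copy()
    for _ in range(3):                   # solve the full coarse equation by Newton
        J = Ac + np.diag(3*vc**2)
        vc = vc + np.linalg.solve(J, fc - N(vc, Ac))
    # extract the coarse-grid error, interpolate, correct, post-smooth
    return smooth(u + prolong(vc - uc), f, A)

n = 63
A, Ac = A_mat(n), A_mat((n - 1)//2)
x = np.linspace(0, 1, n + 2)[1:-1]
f = N(np.sin(np.pi*x), A)                # manufactured right-hand side
u = np.zeros(n)
for it in range(5):
    u = fas_two_grid(u, f, A, Ac)
    print(it, np.linalg.norm(f - N(u, A)))
```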
We propose a nonlinear multigrid approach for imaging the electrical
conductivity of a body, given simultaneous measurements of
d.c. electric currents and voltages at the boundary. The
implementation is done for an output least squares formulation of the
electrical impedance tomography problem. Extensions to a stronger,
variational approach are discussed, as well.
A variety of new imaging modalities, such as optical diffusion tomography, require the inversion of a forward problem that is modeled by the solution to a three-dimensional partial differential equation. For these applications, image reconstruction can be formulated as the solution to a non-quadratic optimization problem.
In this paper, we discuss the use of nonlinear multigrid methods as both tools for optimization and algorithms for the solution of difficult inverse problems. In particular, we review some existing methods for directly formulating optimization algorithms in a multigrid framework, and we introduce a new method for the solution of general inverse problems which we call multigrid inversion. These methods work by dynamically adjusting the cost functionals at different scales so that they are consistent with, and ultimately reduce, the finest-scale cost functional. In this way, the multigrid optimization methods can efficiently compute the solution to a desired fine-scale optimization problem. Importantly, the multigrid inversion algorithm can greatly reduce computation because both the forward and the inverse problems are more coarsely discretized at lower resolutions. An application of our method to optical diffusion tomography shows the potential for very large computational savings.
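As a hedged sketch of the coarse-to-fine idea only: the toy below solves a cheap coarse inverse problem and prolongs its solution to initialize the fine-scale solve. The paper's multigrid inversion additionally adjusts the cost functionals dynamically across scales, which this sketch does not reproduce; the forward model and parameters are ours.

```python
import numpy as np

def blur_matrix(n, width=4):
    """Toy forward model: local averaging (a stand-in for a PDE solver)."""
    A = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width), min(n, i + width + 1)
        A[i, lo:hi] = 1.0 / (hi - lo)
    return A

def solve_regularized(A, y, lam, x0, iters=200, step=0.3):
    """Gradient descent on ||Ax - y||^2 + lam*||Dx||^2."""
    D = np.diff(np.eye(A.shape[1]), axis=0)
    x = x0.copy()
    for _ in range(iters):
        g = 2*A.T @ (A @ x - y) + 2*lam*D.T @ (D @ x)
        x = x - step*g
    return x

rng = np.random.default_rng(2)
n = 128
x_true = np.zeros(n); x_true[40:80] = 1.0
y = blur_matrix(n) @ x_true + 0.01*rng.standard_normal(n)

# Coarse-to-fine: cheap coarse solve, prolong, then a short fine refine.
yc = 0.5*(y[0::2] + y[1::2])                 # coarsely discretized data
xc = solve_regularized(blur_matrix(n//2, width=2), yc, 0.1, np.zeros(n//2))
x = solve_regularized(blur_matrix(n), y, 0.1, np.repeat(xc, 2), iters=50)
print("relative error:", np.linalg.norm(x - x_true)/np.linalg.norm(x_true))
```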
We describe an algorithm for the simultaneous 3-D reconstruction of several types of object from 2-D projection images, where each type of object may possess a rotational symmetry and where, for each image, the type of object imaged, the projection orientation used to create the image, and the location of the object in the image are all unknown. The motivating application is the determination of the 3-D structure of small spherical viruses from cryo-electron microscopy images. The algorithm is a maximum likelihood estimator computed by expectation maximization (EM). Due to the structure of the statistical model, the maximization step of EM is easy to compute, but the expectation step requires 5-D numerical quadrature. The computational burden of the quadratures necessitates parallel computation, and three implementations of two different types of parallelism have been developed using pthreads (for shared-memory processors) and MPI (for distributed-memory processors). An example applying one of the MPI implementations, running on a 32-node PC cluster, to experimental images of Flock House Virus, with comparison to the X-ray crystal diffraction structure of the virus, is described.
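The EM structure described, an easy M-step paired with an E-step that marginalizes over unknown orientations, can be illustrated with a 1D toy of ours: estimating a signal from noisy, randomly shifted copies. The unknown shift plays the role of the unknown projection orientation, and the E-step sum over shifts stands in for the 5-D quadrature; this is not the authors' virus-reconstruction code.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, sigma = 32, 400, 0.5
truth = np.sin(2*np.pi*np.arange(n)/n) + (np.arange(n) > n//2)
data = np.array([np.roll(truth, s) for s in rng.integers(0, n, m)])
data += sigma * rng.standard_normal(data.shape)

x = rng.standard_normal(n)                    # random initial estimate
for it in range(30):
    # E-step: posterior over the unknown shift of every image
    # (a sum over n shifts here; a 5-D quadrature in cryo-EM).
    ll = np.empty((m, n))
    for s in range(n):
        ll[:, s] = -np.sum((data - np.roll(x, s))**2, axis=1)/(2*sigma**2)
    w = np.exp(ll - ll.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # M-step: closed form, a weighted average of unshifted images.
    x_new = np.zeros(n)
    for s in range(n):
        x_new += w[:, s] @ np.roll(data, -s, axis=1)
    x = x_new / m

# The estimate is recovered up to a global shift; report the best match.
print(max(np.corrcoef(np.roll(x, k), truth)[0, 1] for k in range(n)))
```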
Coherent pulse-Doppler radar systems, whether used for synthetic aperture imaging or for surveillance, generally transmit a coherent pulse train made up of identical pulses. While these pulses may contain complex modulation, for example linear FM chirps or frequency and phase coding, the fact remains that the pulses are usually identical. In this paper, we consider the potential advantages of pulse trains whose pulses differ distinctly from pulse to pulse. In particular, we investigate a signal processing algorithm that provides increased resolution and discrimination through delay-Doppler sidelobe suppression in the region surrounding the mainlobe of the delay-Doppler response.
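For reference, here is a sketch of conventional coherent processing of an identical-pulse train: matched filtering in fast time followed by an FFT across pulses, producing the delay-Doppler response whose sidelobes the paper's algorithm aims to suppress. The waveform and parameters are assumptions of ours, and the paper's pulse-diverse algorithm itself is not reproduced.

```python
import numpy as np

fs, T, M = 1e6, 100e-6, 16              # sample rate, pulse length, pulse count
t = np.arange(int(round(T*fs))) / fs
chirp = np.exp(1j*np.pi*(0.8*fs/T)*t**2)          # one identical LFM pulse

# Echo: a single target with delay d samples and Doppler fd over the train.
d, fd, pri = 30, 750.0, 500e-6
pulses = np.zeros((M, len(t) + d), complex)
for m in range(M):
    pulses[m, d:] = chirp * np.exp(2j*np.pi*fd*(m*pri + t))

mf = np.conj(chirp[::-1])               # matched filter: time-reversed conjugate
fast = np.array([np.convolve(p, mf)[len(t)-1:] for p in pulses])
dd_map = np.fft.fftshift(np.fft.fft(fast, axis=0), axes=0)  # slow-time FFT

peak = np.unravel_index(np.abs(dd_map).argmax(), dd_map.shape)
print("delay bin:", peak[1], "Doppler bin:", peak[0] - M//2)  # 30 and ~6
```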
Modeling light reflection from rough surfaces is an essential problem in computer graphics, computational vision, and multispectral imaging. Existing methods commonly separate the total reflection into diffuse and specular components, but this introduces a nonphysical arbitrariness in choosing the relative weights of the two components. Existing methods also lack an adequate model of the self-shadowing effect, which is important for rough surfaces. To eliminate these drawbacks, we propose a new reflection model expressed entirely in physical parameters. The surfaces are assumed homogeneous, isotropic, and microscopically smooth, and their height probability densities are assumed Gaussian. We derive the one-bounce reflection from the Fresnel coefficient, a self-shadowing factor, and the probability density of surface orientation. The shadowing factor is calculated analytically from the statistical properties of a rough surface, including the height probability density and the correlation function, and it agrees well with numerical simulation. Since all parameters in this model are physical, it can be verified directly against measurements. Moreover, as a single term, the model generates a sharp specular highlight when a surface is smooth and shows diffuse behavior when the surface is rough. This advantage is demonstrated through rendered images.
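In the spirit of the model described (a single term built from a Fresnel factor, a shadowing factor, and an orientation probability for Gaussian height statistics), here is a hedged sketch using standard microfacet ingredients: a Beckmann (Gaussian-slope) distribution and a Smith-style shadowing fit. These are common choices from the literature, not necessarily the authors' exact expressions.

```python
import numpy as np

def fresnel(cos_i, n=1.5):
    """Unpolarized Fresnel reflectance for a dielectric of index n."""
    sin_t = np.sqrt(1 - cos_i**2) / n
    cos_t = np.sqrt(1 - sin_t**2)
    rs = ((cos_i - n*cos_t) / (cos_i + n*cos_t))**2
    rp = ((n*cos_i - cos_t) / (n*cos_i + cos_t))**2
    return 0.5 * (rs + rp)

def beckmann_D(cos_h, m):
    """Orientation probability for Gaussian slope statistics (roughness m)."""
    t2 = (1 - cos_h**2) / cos_h**2
    return np.exp(-t2/m**2) / (np.pi * m**2 * cos_h**4)

def smith_G1(cos_v, m):
    """Shadowing/masking factor (rational fit for the Beckmann case)."""
    a = cos_v / (m * np.sqrt(1 - cos_v**2) + 1e-12)
    g = (3.535*a + 2.181*a**2) / (1 + 2.276*a + 2.577*a**2)
    return np.where(a < 1.6, g, 1.0)

def brdf(wi, wo, m=0.3, n=1.5):
    h = wi + wo; h = h / np.linalg.norm(h)           # half vector
    F = fresnel(np.dot(wi, h), n)
    D = beckmann_D(h[2], m)
    G = smith_G1(wi[2], m) * smith_G1(wo[2], m)
    return F * D * G / (4 * wi[2] * wo[2])           # one-bounce reflection

wi = np.array([0.0, 0.5, np.sqrt(0.75)])             # incident (z is up)
wo = np.array([0.0, -0.5, np.sqrt(0.75)])            # mirror direction
print(brdf(wi, wo, m=0.1), brdf(wi, wo, m=0.6))      # smooth: peaked; rough: flat
```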
In this paper, we propose a "source-type" solution to the problem of electrical resistance tomography (ERT). The goal of ERT is to develop a map of the electrical properties in a region of space based on observations of voltages collected at the boundary in response to input DC currents, also applied at the boundary. As with many inverse problems, ERT is both nonlinear and poorly posed. Source-type inverse methods have been proposed in the inverse scattering context as a way of quasi-linearizing the problem. Specifically, inhomogeneities in the medium are viewed as secondary sources embedded in a homogeneous medium. One can solve a linear inverse source problem to localize these sources (i.e., determine their geometry); however, resolving their spatial contrasts quantitatively is not possible under this method. In a sense, the nonlinearity of the original problem is buried in the amplitude. Our work here is motivated by the fact that, to the best of our knowledge, a source-type formulation has not previously been considered for ERT. We show that the secondary sources for ERT are defined by the inner product of the gradients of the true conductivity and the electrical potential. Using this equivalence, the inverse problem is easily transformed into a multi-source inversion. Given the ill-posedness of the ERT problem, which arises from the inherently low sensitivity of the observed data to changes in the internal conductivity of the medium, the proposed transformation provides a better description of the effect of inhomogeneities and therefore leads to more efficient inversion techniques. We introduce and discuss one-step as well as iterative methods, especially for piecewise constant media. In the iterative method, we make use of a level set formulation, and we replace the update of the steepest descent approach by the correlation coefficient between the residual vector and the response of a specified source. Using the same measure, i.e., the correlation coefficient, we introduce a simple single-step imaging method. Results of both methods using simulated data are presented.
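The correlation-coefficient screening can be illustrated on a generic linear multi-source model: pick the candidate source whose response best correlates with the data residual, which is essentially a matching-pursuit step. The random dictionary below is purely illustrative, not an ERT forward model.

```python
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_src = 64, 200
G = rng.standard_normal((n_obs, n_src))   # boundary response of each candidate
G /= np.linalg.norm(G, axis=0)            # source, as unit-norm columns
truth = np.zeros(n_src); truth[[17, 90]] = [1.0, -0.7]
data = G @ truth + 0.05*rng.standard_normal(n_obs)

residual = data.copy()
x = np.zeros(n_src)
for step in range(2):
    # Correlation coefficient of the residual with each source response.
    rho = np.array([np.corrcoef(residual, G[:, j])[0, 1] for j in range(n_src)])
    j = np.abs(rho).argmax()              # best-matched source
    amp = G[:, j] @ residual              # least-squares amplitude (unit columns)
    x[j] += amp
    residual -= amp * G[:, j]
    print("picked source", j, "amplitude", round(amp, 3))
```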
We address the problem of providing input to a novel method of interpreting photoelastic data for performing experimental stress analysis of models of engineering components. This method employs conventional photoelastic data relating to the directions of the principal stresses in the specimen (isoclinic data), along with the difference in principal stresses (isochromatic data). Both are used within an inverse boundary element model to reconstruct the load conditions at the model boundary and hence to recover the principal stresses in the specimen without recourse to numerical integration of shear stress gradient. We describe methods of obtaining unwrapped isoclinic and isochromatic phase maps from sequences of images captured within a computer-controlled polariscope. A boundary element model of the specimen, congruent with the isoclinic and isochromatic phase maps, is obtained from an image captured within the polariscope under either traditional lighting conditions or by configuring the polariscope to provide a light field background. Image segmentation reveals the boundary of the specimen, which is then described in terms of simple geometric primitives. Boundary points and geometric descriptions are both used to produce the final boundary element model. The techniques described have been applied to a number of contact specimens; results are presented and discussed.
Magnetic resonance imaging (MRI) is used, in addition to its well-known medical and biological applications, for the study of a variety of fluid dynamic phenomena. This paper focuses on MR imaging of liquid foams to aid the study of their temporal and spatial dynamics. The three-dimensional image reconstruction problem has relatively low SNR, with the ultimate goal of analyzing the foam's structure and its evolution. We demonstrate substantial improvement of image quality with Bayesian estimation using simple edge-preserving Markov random field (MRF) models of the fluid field. In terms of total computation time, the speed of convergence of the estimates is similar for gradient-based methods and sequential greedy voxel updates, with the former requiring more iterations and the latter requiring more operations per iteration. The paper also shows some preliminary results in the analysis of the reconstructed imagery using a simple parametric model of foam cells.
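Edge-preserving MRF estimation can be sketched with a 1D MAP denoising toy using a Huber neighbor potential and the gradient-based update style compared above; all settings are ours.

```python
import numpy as np

def huber_grad(t, delta):
    # Derivative of the Huber potential: quadratic near zero, linear in
    # the tails, so sharp jumps (edges) are penalized less than under a
    # purely quadratic (Gaussian MRF) prior.
    return np.clip(t, -delta, delta)

rng = np.random.default_rng(5)
truth = np.concatenate([np.zeros(50), np.ones(50)])   # one sharp edge
y = truth + 0.2*rng.standard_normal(100)              # noisy observation

x, lam, delta, step = y.copy(), 2.0, 0.05, 0.1
for _ in range(500):
    hp = huber_grad(np.diff(x), delta)
    g = x - y                     # gradient of the data-fit term
    g[:-1] -= lam * hp            # gradient of the pairwise MRF prior
    g[1:] += lam * hp
    x = x - step*g                # gradient update (cf. greedy voxel updates)
print("RMSE noisy:", np.sqrt(np.mean((y - truth)**2)),
      "RMSE MAP:", np.sqrt(np.mean((x - truth)**2)))
```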
Spectroscopic imaging (SI) techniques combine the ability of NMR spectroscopy to identify and measure biochemical constituents with the ability of MR imaging to localize NMR signals. The basic imaging technique acquires a set of spatial-frequency-domain samples on a regular grid and takes an inverse Fourier transform of the acquired data to obtain the spatial-domain image. Unfortunately, the time required to gather the data while maintaining an adequate signal-to-noise ratio (SNR) limits the number of spatial-frequency-domain samples that can be acquired. In this paper, we use a high-resolution MR scout image to obtain edge locations in the sample imaged with MR spectroscopic imaging (MRSI). MRI discontinuities represent boundaries between different tissue types, and these discontinuities are likely to appear in the spectroscopic image as well. We propose a new model that encourages edge formation in the MRSI image reconstruction wherever MR image edges occur. A major difference between our model and previous methods is that an edge found in the MR image need not be confirmed by the data: smoothing is reduced across the edge if either the MR image or the MRSI data suggests an edge. Simulations and results on in vivo MRSI data are presented that demonstrate the effectiveness of the method.
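The either-source edge rule can be sketched in 1D: smoothing weights are the pointwise minimum of weights derived from the scout MRI and from the data, so an edge suggested by either one reduces smoothing there. The weight form and parameters are illustrative assumptions, not the paper's model.

```python
import numpy as np

def edge_weights(signal, scale):
    # Near 0 across strong discontinuities, near 1 in smooth regions.
    return np.exp(-(np.diff(signal)/scale)**2)

rng = np.random.default_rng(6)
mri = np.concatenate([np.zeros(40), np.ones(60)])        # scout edge at 40
truth = np.concatenate([np.zeros(40), 2*np.ones(30), np.ones(30)])
y = truth + 0.3*rng.standard_normal(100)                 # noisy "MRSI" data

# Smoothing is reduced wherever EITHER the MRI or the data suggests an
# edge; the data alone can open the edge at 70 that the MRI lacks.
w = np.minimum(edge_weights(mri, 0.2), edge_weights(y, 1.0))

n, lam = len(y), 10.0
D = np.diff(np.eye(n), axis=0)                           # first differences
x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None]*D), y)
print("RMSE:", np.sqrt(np.mean((x - truth)**2)))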
Region of interest (ROI) quantitation is an important task in emission tomography (e.g., positron emission tomography and single photon emission computed tomography). It is essential for exploring clinical factors such as tumor activity, growth rate, and the efficacy of therapeutic interventions. Bayesian methods based on the maximum a posteriori principle (also known as penalized maximum likelihood methods) have been developed for emission image reconstruction to deal with the low signal-to-noise ratio of the emission data. Similar to the filter cut-off frequency in the filtered backprojection method, the smoothing parameter of the image prior in Bayesian reconstruction controls the resolution and noise trade-off and hence affects ROI quantitation. In this paper we present an approach for choosing the optimum smoothing parameter in Bayesian reconstruction for ROI quantitation. Bayesian reconstructions are difficult to analyze because the resolution and noise properties are nonlinear and object-dependent. Building on recent progress in deriving approximate expressions for the local impulse response function and the covariance matrix, we derive simplified theoretical expressions for the bias, the variance, and the ensemble mean squared error (EMSE) of the ROI quantitation. One problem in evaluating ROI quantitation is that the truth is often required for calculating the bias. This is overcome by using the ensemble distribution of the activity inside the ROI and computing the average EMSE. The resulting expressions allow fast evaluation of the image quality for different smoothing parameters. The optimum smoothing parameter of the image prior can then be selected to minimize the EMSE.
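In a linear-Gaussian analogue, the bias, variance, and EMSE of an ROI mean have closed forms, so the smoothing parameter can be chosen by a sweep. This toy (the system matrix and prior are ours) mirrors the approach without the paper's PET-specific approximations.

```python
import numpy as np

n = 64
A = np.tril(np.ones((n, n))) / n              # toy tomographic system matrix
D = np.diff(np.eye(n), axis=0)
R = D.T @ D                                   # smoothing (roughness) prior
xbar = np.zeros(n); xbar[20:40] = 1.0         # ensemble-mean activity
roi = np.zeros(n); roi[24:36] = 1.0/12        # ROI-mean functional
sigma = 0.05                                  # data noise level

# Penalized estimate: x_hat = M y with M = (A'A + beta*R)^-1 A'.
for beta in [1e-4, 1e-3, 1e-2, 1e-1, 1.0]:
    M = np.linalg.solve(A.T @ A + beta*R, A.T)
    bias = roi @ (M @ A @ xbar - xbar)        # systematic ROI error
    var = sigma**2 * np.sum((roi @ M)**2)     # noise-induced variance
    emse = bias**2 + var
    print(f"beta={beta:g}  bias^2={bias**2:.2e}  var={var:.2e}  EMSE={emse:.2e}")
```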
In this paper, we propose a unified variational framework for tomographic reconstruction of 3-D dynamic objects. We use a geometric scene model, where the scene is assumed to be composed of discrete objects captured by their continuous surface boundaries. Object dynamics are modeled as consisting of separate intensity dynamics and object boundary dynamics. The shape dynamics are incorporated into our variational framework by defining a new distance measure between surfaces based on their signed distance functions, which is an extension of our previous definition of distance between curves. These models are then combined in a unified variational framework which incorporates the observation data, shape and intensity dynamics, and prior information on object spatial smoothness. The object surface and intensity sequences are estimated jointly as the minimizer of the resulting energy function. A coordinate descent algorithm based on surface evolution is developed to solve this nonlinear optimization problem. Efficient level set methods are used to implement the algorithm. This approach evolves the surfaces from their initial position to the final solution and handles topological uncertainties automatically.
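One natural instance of a signed-distance-based shape distance (the paper's exact definition may differ) can be sketched as follows:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Signed distance to the boundary: negative inside, positive outside."""
    return edt(~mask) - edt(mask)

yy, xx = np.mgrid[0:128, 0:128]
circle = (xx - 64)**2 + (yy - 64)**2 < 30**2
ellipse = ((xx - 64)/36.0)**2 + ((yy - 64)/26.0)**2 < 1

phi1, phi2 = signed_distance(circle), signed_distance(ellipse)
d = np.sqrt(np.mean((phi1 - phi2)**2))   # L2 distance between the two SDFs
print("SDF-based shape distance:", round(d, 3))
```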
The goal in Quasi-Monte Carlo (QMC) is to improve the accuracy of integrals estimated by the Monte Carlo technique through a suitable specification of the sample point set. Indeed, the errors from N samples typically drop as N^-1 with QMC, which is much better than the N^-1/2 dependence obtained with Monte Carlo estimates based on random point sets. The heuristic reasoning behind selecting QMC point sets is similar to that in halftoning (HT), that is, to spread the points out as evenly as possible, consistent with the desired point density. I will outline the parallels between QMC and HT, and describe an HT-inspired algorithm for generating a sample set with uniform density, which yields smaller integration errors than standard QMC algorithms in two dimensions.
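A minimal sketch of the QMC-versus-MC error behavior, using a hand-rolled Halton sequence as the low-discrepancy point set (the paper's HT-inspired construction is not reproduced here); the test integrand is our own choice.

```python
import numpy as np

def radical_inverse(i, base):
    """Van der Corput radical inverse: digit-reverse i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton(n, bases=(2, 3)):
    return np.array([[radical_inverse(i, b) for b in bases]
                     for i in range(1, n + 1)])

f = lambda p: np.cos(2*np.pi*p[:, 0]) * p[:, 1]**2   # integral over [0,1]^2 is 0
rng = np.random.default_rng(8)
for n in (256, 1024, 4096):
    mc = abs(f(rng.random((n, 2))).mean())           # random point set
    qmc = abs(f(halton(n)).mean())                   # low-discrepancy point set
    print(f"N={n:5d}  MC error={mc:.2e}  QMC error={qmc:.2e}")
```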
In this work we explore the use of a content-adaptive mesh model (CAMM) in the classical problem of image restoration. In the proposed framework, we first model the image to be restored by an efficient mesh representation. A CAMM can be viewed as a form of image representation using non-uniform samples, in which the mesh nodes (i.e., image samples) are adaptively placed according to the local content of the image. The image is then restored by estimating the model parameters (i.e., mesh nodal values) from the data. The proposed approach has several potential advantages. First, a CAMM provides a spatially adaptive regularization framework, achieved by the fact that the interpolation basis functions in a CAMM have support strictly limited to the elements with which they are associated. Second, a CAMM provides an efficient yet accurate representation of the image, thereby greatly reducing the number of parameters to be estimated. In this work we present some exploratory results to demonstrate the proposed approach.
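A hedged sketch of content-adaptive node placement: concentrate nodes where the local gradient is large, keep a few nodes in flat regions, and reconstruct by linear interpolation over the resulting (Delaunay) mesh. The saliency rule and node counts are our assumptions, not the paper's placement algorithm.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(9)
n = 64
yy, xx = np.mgrid[0:n, 0:n]
img = ((xx - 32)**2 + (yy - 32)**2 < 15**2).astype(float)   # toy image

gy, gx = np.gradient(img)
saliency = np.hypot(gx, gy).ravel()
dense = np.argsort(saliency)[-300:]            # nodes concentrated on edges
sparse = rng.choice(n*n, 100, replace=False)   # sparse coverage of flat regions
idx = np.union1d(dense, sparse)

pts = np.column_stack([yy.ravel()[idx], xx.ravel()[idx]])
vals = img.ravel()[idx]                        # mesh nodal values
recon = griddata(pts, vals, (yy, xx), method='linear', fill_value=0.0)
print("nodes:", len(idx), "of", n*n, " MSE:", np.mean((recon - img)**2))
```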
Color device characterization involves deriving a mathematical description of the device response to a known input. This is known as the forward characterization transform. In the final application, this transform must be inverted to generate a mapping that determines the device input required for a desired response. This paper focuses on the inverse characterization transform for hardcopy devices. This can be discussed for two cases:
(1) Devices employing 3 channels
A colorimetrically unique inverse mapping exists provided the input signal is within the achievable domain of the device. When the forward transform is described by an analytic model, the inverse can be obtained by search-based techniques (illustrated in the sketch following this list). When the forward transform is obtained empirically, the inverse transform is estimated by 3-D fitting or interpolation methods.
(2) Devices employing > 3 channels.
The inverse mapping is not colorimetrically unique, and the problem is therefore ill-posed. Additional constraints must be incorporated to ensure uniqueness. As an example, the case of CMYK printer characterization will be discussed. Constraints via undercolor removal and gray component replacement will be presented. Other methods that explicitly constrain CMYK combinations based on criteria such as moiré minimization will also be described.
For both cases, the problem of out-of-domain mapping and noise considerations will be discussed.
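A minimal sketch of search-based inversion for the 3-channel case, using a hypothetical analytic forward model (a per-channel gamma followed by channel crosstalk); the model, its parameters, and the function names are illustrative assumptions, not a real device characterization.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward characterization: device RGB -> colorimetric response.
MIX = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.1, 0.2, 0.7]])

def forward(rgb):
    return MIX @ np.clip(rgb, 0, 1)**2.2     # channel gamma, then crosstalk

def inverse(target):
    """Search-based inverse: find the device input whose predicted
    response matches the target (unique when the target is in-gamut)."""
    err = lambda rgb: np.sum((forward(rgb) - target)**2)
    res = minimize(err, x0=np.full(3, 0.5), method='Nelder-Mead')
    return np.clip(res.x, 0, 1), res.fun

rgb_true = np.array([0.8, 0.3, 0.5])
rgb_est, residual = inverse(forward(rgb_true))
print(np.round(rgb_est, 4), residual)        # recovers the device input
```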
Construction of panoramic mosaics from video is well established in
both the research and commercial communities, but current methods
generally perform the time-consuming registration procedure entirely
from the sequence's pixel data. Video sequences usually exist in
compressed format, often MPEG-2; while specialized hardware and
highly-optimized software can often quickly create accurate mosaics
from a video sequence's pixels, these products do not make efficient
use of all information available in a compressed video stream. In
particular, MPEG video files generally contain significant information
about global camera motion in their motion vectors. This paper
describes how to exploit the motion vector information so that global
motion can be estimated extremely quickly and accurately, which leads
to accurate panoramic mosaics. The major obstacle in generating
mosaics with this method is the variable quality of MPEG motion vectors,
both within a stream from a particular MPEG encoder and between
streams compressed with different encoders. The paper discusses
methods of robustly estimating global camera motion from the observed
motion vectors, including the use of least absolute value estimators,
variable model order for global camera motion, and motion vector
weighting depending on their estimated accuracy. Experimental results
are presented to demonstrate the performance of the algorithm.
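A sketch of robust global-motion estimation from noisy motion vectors: a 3-parameter translation-plus-rotation model (an assumption; the paper considers variable model orders) fit by iteratively reweighted least squares, which approximates the least-absolute-value estimator mentioned above. The synthetic data are ours.

```python
import numpy as np

rng = np.random.default_rng(10)
nb = 300                                      # macroblocks with motion vectors
pos = rng.uniform(0, 704, (nb, 2))
dx, dy, rot = 3.0, -1.5, 0.002                # true global camera motion
mv = np.column_stack([dx - rot*pos[:, 1], dy + rot*pos[:, 0]])
mv += 0.3*rng.standard_normal(mv.shape)       # encoder noise
bad = rng.choice(nb, 60, replace=False)       # foreground blocks: outliers
mv[bad] += rng.uniform(-20, 20, (60, 2))

# Design matrix for the 3-parameter global model (x- and y-rows interleaved).
A = np.zeros((2*nb, 3))
A[0::2, 0] = 1; A[0::2, 2] = -pos[:, 1]
A[1::2, 1] = 1; A[1::2, 2] = pos[:, 0]
b = mv.ravel()

theta = np.linalg.lstsq(A, b, rcond=None)[0]  # L2 start, outlier-corrupted
for _ in range(20):                           # IRLS approximates the L1 fit
    w = 1.0 / np.maximum(np.abs(b - A @ theta), 1e-6)
    Aw = A * w[:, None]
    theta = np.linalg.solve(A.T @ Aw, Aw.T @ b)
print(np.round(theta, 4))                     # close to (3.0, -1.5, 0.002)
```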
Multi-frame super-resolution restoration algorithms commonly utilize a linear observation model relating the recorded images to the unknown restored image estimate. Working within this framework, we demonstrate a method for generalizing the observation model to incorporate spatially varying point spread functions and general motion fields. The method utilizes results from image resampling theory, which is shown to have equivalences with the multi-frame image observation model used in super-resolution restoration. An algorithm for computing the coefficients of the spatially varying observation filter is developed. Examples of the application of the proposed method are presented.
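A minimal 1D sketch of such a generalized observation model: each recorded sample is a normalized, spatially varying PSF applied to the motion-compensated high-resolution signal. The motion and width functions below are illustrative assumptions, not the paper's coefficient algorithm.

```python
import numpy as np

def observation_matrix(n_hr, n_lr, motion, width_fn):
    """Rows of W hold a spatially varying PSF sampled on the high-res grid,
    centered at the motion-compensated location of each low-res sample."""
    W = np.zeros((n_lr, n_hr))
    x_hr = np.arange(n_hr)
    for i in range(n_lr):
        c = i * n_hr / n_lr + motion(i)      # center after general motion
        s = width_fn(i)                      # PSF width varies across the field
        w = np.exp(-0.5*((x_hr - c)/s)**2)
        W[i] = w / w.sum()                   # each row integrates to one
    return W

W = observation_matrix(128, 32,
                       motion=lambda i: 1.5*np.sin(i/5.0),   # assumed field
                       width_fn=lambda i: 1.0 + 0.05*i)      # blur grows
x = np.sin(2*np.pi*np.arange(128)/128)       # high-resolution scene
y = W @ x                                    # one recorded low-res frame
print(W.shape, y.shape)
```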
This paper proposes a method for estimating 3D rigid motion parameters from an image sequence of a moving object. The 3D surface measurement is achieved using an active stereovision system composed of a camera and a light projector, which illuminates the objects to be analyzed with a pyramid-shaped laser beam. By associating the laser rays with the spots in the 2D image, the 3D points corresponding to these spots are reconstructed. Each image of the sequence provides a set of 3D points, which is modeled by a B-spline surface. Estimating the motion between two images of the sequence therefore reduces to matching two B-spline surfaces. We cast the matching as an optimization problem and find the optimal solution using genetic algorithms. A chromosome is encoded by concatenating six binary-coded parameters: the three angles of rotation and the x-axis, y-axis, and z-axis translations. We define an original fitness function to calculate the similarity measure between two surfaces. The matching process is performed iteratively: the number of points to be matched grows as the process advances, and results are refined until convergence. Experimental results with a real image sequence are presented to show the effectiveness of the method.
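A hedged sketch of the chromosome encoding and of a nearest-neighbor surrogate fitness (the paper's original fitness function is not given in the abstract); the bit depth and parameter ranges are assumptions of ours.

```python
import numpy as np

BITS = 10                                     # bits per parameter (assumed)
# Ranges for three rotation angles (rad) and three translations.
LO = np.array([-np.pi, -np.pi, -np.pi, -50.0, -50.0, -50.0])
HI = -LO

def decode(chrom):
    """Map a 60-bit chromosome to the six rigid-motion parameters."""
    vals = []
    for k in range(6):
        bits = chrom[k*BITS:(k+1)*BITS]
        v = int("".join(map(str, bits)), 2) / (2**BITS - 1)
        vals.append(LO[k] + v*(HI[k] - LO[k]))
    return np.array(vals)

def fitness(chrom, src, dst):
    """Similarity between transformed source points and the target surface:
    negative mean nearest-neighbor distance (higher is better)."""
    a, b, c, tx, ty, tz = decode(chrom)
    Rz = np.array([[np.cos(c), -np.sin(c), 0], [np.sin(c), np.cos(c), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)], [0, 1, 0], [-np.sin(b), 0, np.cos(b)]])
    Rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    moved = src @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])
    d = np.linalg.norm(moved[:, None, :] - dst[None, :, :], axis=2)
    return -d.min(axis=1).mean()

rng = np.random.default_rng(11)
chrom = rng.integers(0, 2, 6*BITS)
pts = rng.random((40, 3)) * 20
print(decode(chrom), fitness(chrom, pts, pts))
```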
Multiresolution decomposition methods find application in many areas, such as telecommunications, remote sensing, multimedia, and signal and image processing. In this work, two-dimensional (2D) nonseparable complementary filter (CF) banks and perfect reconstruction (PR) structures are presented. Developed for the processing of images, the 2D CF banks and PR structures were designed based on 2D multirate signal processing theory and the properties of complementary filters. The complementary filters were designed for alias-free decimation and interpolation. The perfect reconstruction conditions were studied for all types of sampling and filters; although perfect reconstruction is achieved for quincunx sampling and filters, the analysis stage is alias-free in all cases. The performance of the CF banks on images shows that the signal-to-noise ratio remains high even in cases where the reconstruction is not perfect. For PR structures, the image is reconstructed perfectly, but at the cost of lower data compression. Examples of analysis and synthesis with images are given for both CF banks and PR structures.
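A minimal sketch of quincunx sampling itself: the image splits into the two cosets of the quincunx lattice and merges back losslessly. The CF banks filter before decimating, which is where the alias-free design matters; that filtering is not reproduced here.

```python
import numpy as np

def quincunx_split(img):
    """Split an image into the two cosets of the quincunx lattice:
    samples where i + j is even, and samples where i + j is odd."""
    ii, jj = np.indices(img.shape)
    even = (ii + jj) % 2 == 0
    return img*even, img*(~even)             # zero-filled cosets

def quincunx_merge(c0, c1):
    return c0 + c1                           # lossless recombination

rng = np.random.default_rng(12)
img = rng.random((8, 8))
c0, c1 = quincunx_split(img)
print(np.allclose(quincunx_merge(c0, c1), img))  # True: PR without filtering
```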