In the last few years, several new methods have been developed for the sampling and the exact reconstruction of specific classes of non-bandlimited signals known as signals with finite rate of innovation (FRI). This is achieved by using adequate sampling kernels and reconstruction schemes. An important class of such kernels comprises functions that are able to reproduce exponentials.
In this paper we review a new strategy for sampling these signals which is universal in that it works with
any kernel. We do so by noting that meeting the exact exponential reproduction condition is too stringent
a constraint. We thus allow for a controlled error in the reproduction formula in order to use the exponential reproduction idea with any kernel and to develop a reconstruction method that is more robust to noise.
We also present a novel method that is able to reconstruct infinite streams of Diracs, even in high noise
scenarios. We sequentially process the discrete samples and output the locations and amplitudes of the Diracs in real time. In this context we also show that we can achieve a high reconstruction accuracy for streams of 1000 Diracs at SNRs as low as 5 dB.
Photoacoustic tomography (PAT) is a hybrid imaging method that combines ultrasonic and optical imaging modalities in order to overcome their respective weaknesses and to combine their strengths. It is based on the reconstruction of the optical absorption properties of the tissue from measurements of a photoacoustically generated pressure field. Current methods consider laser excitation, under thermal and stress confinement assumptions, which leads to the generation of a propagating pressure field. Conventional reconstruction techniques then recover the initial pressure field from the boundary measurements by iterative reconstruction algorithms in the time or Fourier domain. Here, we propose an application of a new sensing principle that allows for an efficient, non-iterative reconstruction algorithm for imaging point absorbers in PAT. We consider a closed volume surrounded by a measurement surface in an acoustically homogeneous medium, and we aim at recovering the positions of the absorbers and the amount of heat they absorb. We propose a two-step algorithm based on a proper choice of so-called sensing functions. Specifically, in the first step, we extract the positions projected onto the complex plane, together with the weights, using a sensing function that is well-localized on the same plane. In the second step, we recover the remaining z-location by choosing a proper set of plane waves. We show that the proposed families of sensing functions are sufficient to recover the parameters of the unknown sources without any discretization of the domain. We extend the method to sources with joint sparsity, i.e., absorbers that have the same positions at different frequencies. We evaluate the performance of the proposed algorithm on simulated and noisy sensor data, and we demonstrate the improvement obtained by exploiting joint sparsity.
This paper presents a number of data processing algorithms developed to improve the accuracy of results derived from datasets acquired by a recently designed terahertz handheld probe. These techniques include a baseline subtraction algorithm and a number of algorithms to extract the sample impulse response: double Gaussian inverse filtering, frequency-wavelet domain deconvolution, and sparse deconvolution. In vivo measurements of human skin are used as examples, and a comparison is made of the terahertz impulse response across a number of different skin positions. The algorithms presented enable both the spectroscopic and time-domain properties of samples measured in reflection geometry to be determined more accurately than with previous calculation methods.
Analytic sensing has recently been proposed for source localization from boundary measurements using a generalization
of the finite-rate-of-innovation framework. The method is tailored to the quasi-static electromagnetic
approximation, which is commonly used in electroencephalography. In this work, we extend analytic sensing
for physical systems that are governed by the wave equation; i.e., the sources emit signals that travel as waves
through the volume and that are measured at the boundary over time. This source localization problem is highly
ill-posed (i.e., the uniqueness of the source distribution is not guaranteed) and additional assumptions about the
sources are needed. We assume that the sources can be described by a finite number of parameters; in particular,
we consider point sources that are characterized by their position and strength. This assumption makes
the solution unique and turns the problem into parametric estimation. Following the framework of analytic
sensing, we propose a two-step method. In the first step, we extend the reciprocity gap functional concept to
wave-equation based test functions; i.e., well-chosen test functions can relate the boundary measurements to
generalized measures that contain volumetric information about the sources within the domain. In the second
step, again due to the choice of the test functions, we can apply the finite-rate-of-innovation principle; i.e., the
generalized samples can be annihilated by a known filter, thus turning the non-linear source localization problem
into an equivalent root-finding one. We demonstrate the feasibility of our technique for a 3-D spherical geometry.
The performance of the reconstruction algorithm is evaluated in the presence of noise and compared with the
theoretical limit given by the Cramér-Rao lower bound.
An interpolation model is a necessary ingredient of intensity-based registration methods. The properties of such
a model depend entirely on its basis function, which has been traditionally characterized by features such as its
order of approximation and its support. However, as has been recently shown, these features are blind to the
amount of registration bias created by the interpolation process alone; an additional requirement that has been
named constant-variance interpolation is needed to remove this bias.
In this paper, we present a theoretical investigation of the role of the interpolation basis in a registration
context. Contrary to published analyses, ours is deterministic; it nevertheless leads to the same conclusion,
which is that constant-variance interpolation is beneficial to image registration.
In addition, we propose a novel family of interpolation bases that can have any desired order of approximation
while maintaining the constant-variance property. Our family includes every constant-variance basis we know
of. It is described by an explicit formula that contains two free functional terms: an arbitrary 1-periodic binary
function that takes values from {-1, 1}, and another arbitrary function that must satisfy the partition of unity.
These degrees of freedom can be harnessed to build many family members for a given order of approximation
and a fixed support. We provide the example of a symmetric basis with an order of approximation of two that is supported over [-3/2, 3/2]; this support is one unit shorter than that of a basis of identical order that had been previously proposed.
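The partition-of-unity requirement mentioned above (Σ_k φ(x − k) = 1 for all x) is easy to verify numerically. The sketch below checks it for the standard linear B-spline ("hat") function; the helper names are ours and the hat function is only a familiar stand-in, not a member of the proposed family.

```python
import numpy as np

def hat(x):
    """Linear B-spline (the 'hat' function), supported on [-1, 1]."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def partition_of_unity_residual(phi, x, support=5):
    """Max deviation of sum_k phi(x - k) from 1 over the points x."""
    k = np.arange(-support, support + 1)
    s = phi(x[:, None] - k[None, :]).sum(axis=1)
    return np.max(np.abs(s - 1.0))

x = np.linspace(-1.0, 1.0, 201)
res = partition_of_unity_residual(hat, x)   # ~0 for a valid basis
```

A basis failing this test (e.g., a rescaled hat 0.9·φ) would introduce a constant bias when interpolating, which is why the property is imposed on the free functional term.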
Probably the most important property of wavelets for signal processing is their multiscale derivative-like behavior
when applied to functions. In order to extend the class of problems that can profit from wavelet-based techniques, we
propose to build new families of wavelets that behave like an arbitrary scale-covariant operator. Our extension is
general and includes many known wavelet bases. At the same time, the method takes advantage of a fast filterbank
decomposition-reconstruction algorithm. We give necessary conditions for the scale-covariant operator to admit
our wavelet construction, and we provide examples of new wavelets that can be obtained with our method.
We propose a new orthonormal wavelet thresholding algorithm for denoising color images that are assumed to
be corrupted by additive Gaussian white noise of known intercolor covariance matrix. The proposed wavelet
denoiser consists of a linear expansion of thresholding (LET) functions, integrating both the interscale and
intercolor dependencies. The linear parameters of the combination are then solved for by minimizing Stein's
unbiased risk estimate (SURE), which is nothing but a robust unbiased estimate of the mean squared error
(MSE) between the (unknown) noise-free data and the denoised one. Thanks to the quadratic form of this MSE
estimate, the parameter optimization simply amounts to solving a linear system of equations.
Experiments conducted over a wide range of noise levels and on a representative set of standard color images show that our algorithm yields slightly better peak signal-to-noise ratios than most state-of-the-art wavelet thresholding procedures, even when the latter are executed in an undecimated wavelet representation.
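The key computational step, solving for the LET weights by minimizing SURE, reduces to a small linear system. The following is a simplified 1-D sketch with two hypothetical elementary thresholding functions (an identity term and a smooth shrinkage term); the threshold T and the sparse signal model are our own illustrative choices, not the interscale/intercolor functions of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0
N = 4096
# Sparse test signal: mostly zero with a few large coefficients,
# mimicking wavelet coefficients of a piecewise-smooth signal.
x = np.zeros(N)
idx = rng.choice(N, size=200, replace=False)
x[idx] = rng.normal(0.0, 5.0, size=200)
y = x + sigma * rng.normal(size=N)

# Two elementary thresholding functions (LET basis): identity and a
# smooth shrinkage term; T is a hypothetical choice.
T = 2.0 * sigma
theta1 = y
theta2 = y * np.exp(-(y / T) ** 2)
F = np.stack([theta1, theta2], axis=1)
# Divergence (sum of derivatives) of each thresholding function.
div = np.array([float(N),
                np.sum(np.exp(-(y / T) ** 2) * (1 - 2 * (y / T) ** 2))])

# Minimizing SURE(a) = ||F a - y||^2 + 2 sigma^2 div.a - N sigma^2
# leads to the linear system (F^T F) a = F^T y - sigma^2 div.
a = np.linalg.solve(F.T @ F, F.T @ y - sigma**2 * div)
x_hat = F @ a

mse_noisy = np.mean((y - x) ** 2)
mse_denoised = np.mean((x_hat - x) ** 2)
```

The quadratic dependence of SURE on the weights is what makes the optimization non-iterative, exactly as in the full wavelet-domain algorithm.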
KEYWORDS: Sensors, Data modeling, Inverse problems, Electromagnetism, Electroencephalography, Biomedical optics, 3D modeling, Mathematical modeling, Statistical analysis, Signal to noise ratio
Inverse problems play an important role in engineering. A problem that often occurs in electromagnetics (e.g., EEG) is the estimation of the locations and strengths of point sources from boundary data.
We propose a new technique, for which we coin the term "analytic sensing". First, generalized measures are
obtained by applying Green's theorem to selected functions that are analytic in a given domain and at the same
time localized to "sense" the sources. Second, we use the finite-rate-of-innovation framework to determine the
locations of the sources. Hence, we construct a polynomial whose roots are the sources' locations. Finally, the
strengths of the sources are found by solving a linear system of equations. Preliminary results, using synthetic
data, demonstrate the feasibility of the proposed method.
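In an idealized setting, the pipeline sketched above can be illustrated as follows: assuming the generalized measures are available as complex moments μ_n = Σ_k c_k z_k^n (which is what well-chosen analytic sensing functions deliver), the source locations are the roots of an annihilating polynomial and the strengths follow from a linear system. All names and the moment model below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical generalized measures: complex moments mu_n = sum_k c_k z_k^n.
z_true = np.array([0.3 + 0.4j, -0.5 + 0.1j])   # source locations
c_true = np.array([2.0, 1.0])                  # source strengths
K = len(z_true)
mu = np.array([(c_true * z_true**n).sum() for n in range(2 * K)])

# Annihilating polynomial p(z) = z^K + p_1 z^{K-1} + ... + p_K whose
# roots are the source locations: solve the Toeplitz system
#   sum_{j=1}^{K} p_j mu_{n+K-j} = -mu_{n+K},  n = 0..K-1.
A = np.array([[mu[n + K - j] for j in range(1, K + 1)] for n in range(K)])
p = np.linalg.solve(A, -mu[K:2 * K])
z_hat = np.roots(np.concatenate(([1.0], p)))

# Strengths from the Vandermonde system mu_n = sum_k c_k z_k^n.
V = np.vander(z_hat, 2 * K, increasing=True).T
c_hat, *_ = np.linalg.lstsq(V, mu, rcond=None)
```

This mirrors the abstract's three steps: generalized measures, a polynomial whose roots are the locations, and a final linear solve for the strengths.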
KEYWORDS: Wavelets, Space operations, Filtering (signal processing), Signal processing, Chemical elements, Biomedical optics, Wavelet transforms, Signal analysis, Applied mathematics, Denoising
We build wavelet-like functions based on a parametrized family of pseudo-differential operators L_v that satisfy some admissibility and scalability conditions. The shifts of the generalized B-splines, which are localized versions of the Green function of L_v, generate a family of L-spline spaces. These spaces have an approximation order equal to the order of the underlying operator. A sequence of embedded spaces is obtained by choosing a dyadic scale progression a = 2^i. The consecutive inclusion of the spaces yields the refinement equation, where the scaling filter depends on the scale. The generalized L-wavelets are then constructed as basis functions for the orthogonal complements of the spline spaces. The vanishing-moment property of conventional wavelets is generalized to the vanishing null-space-element property. In spite of the scale dependence of the filters, the wavelet decomposition can be performed using an adapted version of Mallat's filterbank algorithm.
We propose a generalization of the Cohen-Daubechies-Feauveau (CDF) and 9/7 biorthogonal wavelet families. This is done within the framework of non-stationary multiresolution analysis, which involves a sequence of embedded approximation spaces generated by scaling functions that are not necessarily dilates of one another. We consider a dual pair of such multiresolutions, where the scaling functions at a given scale are mutually biorthogonal with respect to translation. Also, they must have the shortest-possible support while reproducing a given set of exponential polynomials. This constitutes a generalization of the standard polynomial reproduction property. The corresponding refinement filters are derived from the ones that were studied by Dyn et al. in the framework of non-stationary subdivision schemes. By using different factorizations of these filters, we obtain a general family of compactly supported dual wavelet bases of L2. In particular, if the exponential parameters are all zero, one retrieves the standard CDF B-spline wavelets and the 9/7 wavelets. Our generalized description yields equivalent constructions for E-spline wavelets. A fast filterbank implementation of the corresponding wavelet transform follows naturally; it is similar to Mallat's algorithm, except that the filters are now scale-dependent. This new scheme offers high flexibility and is tunable to the spectral characteristics of a wide class of signals. In particular, it is possible to obtain symmetric basis functions that are well-suited for image processing.
We use a comprehensive set of non-redundant orthogonal wavelet transforms and apply a denoising method called SUREshrink in each individual wavelet subband to denoise images corrupted by additive Gaussian white noise. We show that, for various images and a wide range of input noise levels, the orthogonal fractional (α, τ)-B-splines give the best peak signal-to-noise ratio (PSNR), as compared to standard wavelet bases (Daubechies wavelets, symlets and coiflets). Moreover, the selection of the best set (α, τ) can be performed on the MSE estimate (SURE) itself, not on the actual MSE (Oracle). Finally, the use of complex-valued fractional B-splines leads to even more significant improvements; they also outperform the complex Daubechies wavelets.
The approximate behavior of wavelets as differential operators is often considered as one of their most fundamental properties. In this paper, we investigate how we can further improve on the wavelet's behavior as a differentiator. In particular, we propose semi-orthogonal differential wavelets. The semi-orthogonality condition ensures that wavelet spaces are mutually orthogonal. The operator, hidden within the wavelet, can be chosen as a generalized differential operator ∂_τ^γ, i.e., a γ-th-order derivative with shift τ. Both the order of derivation and the shift can be chosen fractional. Our design leads us naturally to select the fractional B-splines as scaling functions. By putting the differential wavelet in the perspective of a derivative of a smoothing function, we find that signal singularities are compactly characterized by at most two local extrema of the wavelet coefficients in each subband. This property could be beneficial for signal analysis using wavelet bases. We show that this wavelet transform can be efficiently implemented using FFTs.
Statistical Parametric Mapping (SPM) is a widely deployed tool for detecting and analyzing brain activity from fMRI data. One of SPM's main features is smoothing the data by a Gaussian filter to increase the SNR. The subsequent statistical inference is based on the continuous Gaussian random field theory. Since the remaining spatial resolution has deteriorated due to smoothing, SPM introduces the concept of "resels" (resolution elements) or spatial information-containing cells. The number of resels turns out to be inversely proportional to the size of the Gaussian smoother. Detection of the activation signal in fMRI data can also be done by a wavelet approach: after computing the spatial wavelet transform, a straightforward coefficient-wise statistical test is applied to detect activated wavelet coefficients. In this paper, we establish the link between SPM and the wavelet approach based on two observations. First, the (iterated) lowpass analysis filter of the discrete wavelet transform can be chosen to closely resemble SPM's Gaussian filter. Second, the subsampling scheme provides us with a natural way to define the number of resels; i.e., the number of coefficients in the lowpass subband of the wavelet decomposition. Using this connection, we can obtain the degree of the splines of the wavelet transform that makes it equivalent to SPM's method. We show results for two particularly attractive biorthogonal wavelet transforms for this task; i.e., 3D fractional-spline wavelets and 2D+Z fractional quincunx wavelets. The activation patterns are comparable to SPM's.
We show that a multi-dimensional scaling function of order γ (possibly fractional) can always be represented as the convolution of a polyharmonic B-spline of order γ and a distribution with a bounded Fourier transform which has neither order nor smoothness. The presence of the B-spline convolution factor explains all key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multi-scale differentiation property, and smoothness of the basis functions. The B-spline factorization also gives new insights on the stability of wavelet bases with respect to differentiation. Specifically, we show that there is a direct correspondence between the process of moving a B-spline factor from one side to another in a pair of biorthogonal scaling functions and the exchange of fractional integrals/derivatives on their wavelet counterparts. This result yields two "eigen-relations" for fractional differential operators that map biorthogonal wavelet bases into other stable wavelet bases. This formulation provides a better understanding as to why the Sobolev/Besov norm of a signal can be measured from the ℓp-norm of its rescaled wavelet coefficients. Indeed, the key condition for a wavelet basis to be an unconditional basis of the Besov space B_q^s(L_p(R^d)) is that the s-order derivative of the wavelet be in L_p.
We present here an explicit time-domain representation of any compactly supported dyadic scaling function as a sum of harmonic splines. The leading term in the decomposition corresponds to the fractional splines that have recently been defined by the authors as a continuous-order generalization of the polynomial splines.
KEYWORDS: Image interpolation, Image quality, Optical filters, Finite impulse response filters, Digital imaging, Electronic filtering, Signal processing, Linear filtering, Wavelets, Digital signal processing
We present a simple but generalized interpolation method for digital images that uses multiwavelet-like basis functions. Most interpolation methods use only one symmetric basis function; for example, standard and shifted piecewise-linear interpolations use the "hat" function only. The proposed method uses q different multiwavelet-like basis functions. The basis functions can be dissymmetric but should preserve the "partition of unity" property for high-quality signal interpolation. The scheme of decomposition and reconstruction of signals by the proposed basis functions can be implemented in filterbank form using a separable IIR implementation. An important property of the proposed scheme is that the prefilters for decomposition can be implemented by FIR filters. Recall that shifted-linear interpolation requires IIR prefiltering; here we find a new configuration that reaches almost the same quality as shifted-linear interpolation while requiring FIR prefiltering only. Moreover, the present basis functions can be explicitly formulated in the time domain, although most (multi-)wavelets do not have a time-domain formula. We specify an optimum configuration of interpolation parameters for image interpolation, and validate the proposed method by computing the PSNR of the difference between multi-rotated images and their original versions.
We present a zero-order and twin image elimination algorithm for digital Fresnel holograms that were acquired in an off-axis geometry. These interference terms arise when the digital hologram is
reconstructed and corrupt the result. Our algorithm is based on the Fresnelet transform, a wavelet-like transform that uses basis functions tailor-made for digital holography. We show that in the Fresnelet domain, the coefficients associated with the interference terms are separated both spatially and with respect to the frequency bands. We propose a method to suppress them by selectively thresholding the Fresnelet coefficients. Unlike other methods that operate in the Fourier domain and affect the whole spatial domain, our method operates locally in both space and frequency, allowing for more targeted processing.
We present complex rotation-covariant multiresolution families aimed at image analysis. Since they are complex-valued functions, they provide the important phase information, which is missing in the discrete wavelet transform with real wavelets. Our basis elements have nice properties in Hilbert space, such as smoothness of fractional order α ∈ R+. The corresponding filters allow an FFT-based implementation and thus provide a fast algorithm for the wavelet transform.
We present a numerical two-step reconstruction procedure for digital off-axis Fresnel holograms. First, we retrieve the amplitude and phase of the object wave in the CCD plane. For each point, we solve a weighted linear set of equations in the least-squares sense. The algorithm has O(N) complexity and gives great flexibility. Second, we numerically propagate the obtained wave to achieve proper focus. We apply the method to microscopy and demonstrate its suitability for real-time imaging of biological samples.
KEYWORDS: Wavelets, Holograms, Digital holography, Signal processing, Fourier transforms, 3D image reconstruction, Diffraction, Convolution, CCD cameras, Signal detection
We present a new class of wavelet bases---Fresnelets---which is obtained by applying the Fresnel transform operator to a wavelet basis of L2. The wavelet family thus constructed exhibits properties that are particularly useful for analyzing and processing optically generated holograms recorded on CCD arrays. We first investigate the multiresolution properties (translation, dilation) of the Fresnel transform that are needed to construct our new wavelets. We derive a Heisenberg-like uncertainty relation that links the localization of the Fresnelets with that of the original wavelet basis. We give the explicit expression of orthogonal and semi-orthogonal Fresnelet bases corresponding to polynomial spline wavelets. We conclude that the Fresnel B-splines are particularly well suited for processing holograms because they tend to be well localized in both domains.
Compact support is undoubtedly one of the wavelet properties that is given the greatest weight both in theory and in applications. It is usually believed to be essential for two main reasons: (1) to have fast numerical algorithms, and (2) to have good time or space localization properties. Here, we argue that this constraint is unnecessarily restrictive and that fast algorithms and good localization can also be achieved with non-compactly supported basis functions. By dropping the compact-support requirement, one gains in flexibility. This opens up new perspectives, such as fractional wavelets whose key parameters (order, regularity, etc.) are tunable in a continuous fashion. To make our point, we draw an analogy with the closely related task of image interpolation. This is an area where it was believed until very recently that interpolators should be designed to be compactly supported for best results. Today, there is compelling evidence that non-compactly supported interpolators (such as splines, and others) provide the best cost/performance tradeoff.
In this paper, we present different solutions for improving spline-based snakes. First, we demonstrate their minimum curvature interpolation property, and use it as an argument to get rid of the explicit smoothness constraint. We also propose a new external energy obtained by integrating a non-linearly pre-processed image in the closed region bounded by the curve. We show that this energy, besides being efficiently computable, is sufficiently general to include the widely used gradient-based schemes, Bayesian schemes, their combinations and discriminant-based approaches. We also introduce two initialization modes and the appropriate constraint energies. We use these ideas to develop a general snake algorithm to track boundaries of closed objects, with a user-friendly interface.
We formulate the tomographic reconstruction problem in a variational setting. The object to be reconstructed is considered as a continuous density function, unlike in pixel-based approaches. The measurements are modeled as linear operators (Radon transform) integrating the density function along the ray path. The criterion that we minimize consists of a data term and a regularization term. The data term represents the inconsistency between applying the measurement model to the density function and the real measurements. The regularization term corresponds to the smoothness of the density function. We show that this leads to a solution lying in a finite-dimensional vector space, which can be expressed as a linear combination of generating functions. The coefficients of this linear combination are determined from a set of linear equations, solvable either directly or by using an iterative approach. Our experiments show that our new variational method gives results comparable to the classical filtered back-projection for a high number of measurements (projection angles and sensor resolution). The new method performs better for a medium number of measurements. Furthermore, the variational approach gives usable results even with very few measurements, when filtered back-projection fails. Our method reproduces amplitudes more faithfully and can cope with high noise levels; it can be adapted to various characteristics of the acquisition device.
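The structure of the variational solution, normal equations combining a data term and a smoothness penalty, can be sketched generically. The code below uses a hypothetical random measurement matrix in place of the discretized Radon transform and a second-difference regularizer; it illustrates only the linear-system step, not the paper's full method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_meas, n_coef = 40, 20
# Hypothetical discretized measurement operator standing in for the
# Radon transform applied to the generating functions.
A = rng.normal(size=(n_meas, n_coef))
c_true = np.sin(np.linspace(0, np.pi, n_coef))   # smooth coefficients
y = A @ c_true + 0.01 * rng.normal(size=n_meas)

# Smoothness penalty: squared norm of second-order finite differences.
L = (np.diag(np.full(n_coef, -2.0))
     + np.diag(np.ones(n_coef - 1), 1)
     + np.diag(np.ones(n_coef - 1), -1))
lam = 1e-2

# Minimizing ||A c - y||^2 + lam ||L c||^2 leads to the normal equations
# (A^T A + lam L^T L) c = A^T y, a direct (non-iterative) solve.
c_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ y)
rel_err = np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true)
```

For large reconstructions the same system would be solved iteratively (e.g., by conjugate gradients) instead of by a direct factorization.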
Ruttimann et al. have proposed to use the wavelet transform for the detection and localization of activation patterns in functional magnetic resonance imaging (fMRI). Their main idea was to apply a statistical test in the wavelet domain to detect the coefficients that are significantly different from zero. Here, we improve the original method in the case of non-stationary Gaussian noise by replacing the original z-test with a t-test that takes into account the variability of each wavelet coefficient separately. The application of a threshold that is proportional to the residual noise level, after reconstruction by an inverse wavelet transform, further improves the localization of the activation pattern in the spatial domain.
Wavelets and radial basis functions (RBF) are two rather distinct ways of representing signals in terms of shifted basis functions. An essential aspect of RBF, which makes the method applicable to non-uniform grids, is that the basis functions, unlike wavelets, are non-local; in addition, they do not involve any scaling at all. Despite these fundamental differences, we show that the two types of representation are closely connected. We use linear splines as a motivating example. These can be constructed by using translates of the one-sided ramp function, or, more conventionally, by using the shifts of a linear B-spline. This latter function, which is the prototypical example of a scaling function, can be obtained by localizing the one-sided ramp function using finite differences. We then generalize the concept and identify the whole class of self-similar radial basis functions that can be localized to yield conventional multiresolution wavelet bases. Conversely, we prove that, for any compactly supported scaling function, there exists a one-sided central basis function that spans the same multiresolution subspaces. The central property is that the multiresolution bases are generated by simple translation without any dilation.
We propose to design the reduction operator of an image pyramid so as to minimize the approximation error in the lp sense, where p can take non-integer values. The underlying image model is specified using arbitrary shift-invariant basis functions such as splines. The solution is determined by an iterative optimization algorithm based on digital filtering. Its convergence is accelerated by the use of first and second derivatives. For p = 1, our modified pyramid is robust to outliers; edges are preserved better than in the standard case where p = 2. For 1 < p < 2, the pyramid decomposition combines the qualities of the l1 and l2 approximations. The method is applied to edge detection, and its improved performance over the standard formulation is demonstrated.
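Minimizing an lp approximation error with non-integer p is typically carried out by iteratively reweighted least squares (IRLS). The toy sketch below, a line fit with a gross outlier, illustrates why p = 1 is robust to outliers; it is not the paper's pyramid algorithm, and all names are ours.

```python
import numpy as np

def irls_fit(A, y, p, n_iter=100, eps=1e-8):
    """Minimize ||A c - y||_p by iteratively reweighted least squares."""
    c = np.linalg.lstsq(A, y, rcond=None)[0]        # start from the l2 fit
    for _ in range(n_iter):
        r = A @ c - y
        w = np.maximum(np.abs(r), eps) ** (p - 2)   # lp reweighting
        Aw = A * w[:, None]                         # W A with W = diag(w)
        c = np.linalg.solve(A.T @ Aw, Aw.T @ y)     # weighted normal equations
    return c

x = np.arange(10, dtype=float)
y = 2.0 * x
y[7] += 50.0                      # single gross outlier
A = np.stack([x, np.ones_like(x)], axis=1)

c_l2 = irls_fit(A, y, p=2.0)      # ordinary least squares: pulled by outlier
c_l1 = irls_fit(A, y, p=1.0)      # robust l1 fit: ignores the outlier
```

The small clamp eps prevents the weights from diverging as residuals approach zero, a standard safeguard in IRLS.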
A filterbank decomposition can be seen as a series of projections onto several discrete wavelet subspaces. In this presentation, we analyze the projection onto one of them: the low-pass subspace, since many signals tend to be low-pass. We prove a general but simple formula that allows the computation of the l2-error made by approximating the signal by its projection. This result provides a norm for evaluating the accuracy of a complete decimation/interpolation branch for arbitrary analysis and synthesis filters; such a norm could be useful for the joint design of an analysis and synthesis filter, especially in the non-orthonormal case. As an example, we use our framework to compare the efficiency of different wavelet filters, such as Daubechies' or splines. In particular, we prove that the error made by using a Daubechies filter downsampled by 2 is of the same order as the error made using an orthonormal spline filter downsampled by 6. This proof is valid asymptotically as the number of regularity factors tends to infinity, and for a signal that is essentially low-pass. This implies that splines bring an additional compression gain of at least 3 over Daubechies filters, asymptotically.
We extend Schoenberg's B-splines to all fractional degrees α > -1/2. These splines are constructed using linear combinations of the integer shifts of the power functions x_+^α (one-sided) or x_*^α (symmetric); in each case, they are α-Hölder continuous for α > 0. They satisfy most of the properties of the traditional B-splines; in particular, the Riesz basis condition and the two-scale relation, which makes them suitable for the construction of new families of wavelet bases. What is especially interesting from a wavelet perspective is that the fractional B-splines have a fractional order of approximation (α + 1), while they reproduce the polynomials of degree ⌈α⌉. We show how they yield continuous-order generalizations of the orthogonal Battle-Lemarié wavelets and of the semi-orthogonal B-spline wavelets. As α increases, these latter wavelets tend to be optimally localized in time and frequency in the sense specified by the uncertainty principle. The corresponding analysis wavelets also behave like fractional differentiators; they may therefore be used to whiten fractional Brownian motion processes.
KEYWORDS: Statistical analysis, Functional magnetic resonance imaging, Brain, Scanning probe microscopy, Gaussian filters, Image filtering, Signal to noise ratio, Data modeling, Neuroimaging, Digital filtering
Functional magnetic resonance imaging is a recent technique that allows the measurement of brain metabolism (local concentration of deoxyhemoglobin using BOLD contrast) while subjects are performing a specific task. A block paradigm produces alternating sequences of images (e.g., rest versus motor task). In order to detect and localize areas of cerebral activation, one analyzes the data using paired differences at the voxel level. As an alternative to the traditional approach, which uses Gaussian spatial filtering to reduce measurement noise, we propose to analyze the data using an orthogonal filterbank. This procedure is intended to simplify and eventually improve the statistical analysis. The system is designed to concentrate the signal into fewer components, thereby improving the signal-to-noise ratio. Thanks to the orthogonality property, we can test the filtered components independently on a voxel-by-voxel basis; this testing procedure is optimal for i.i.d. measurement noise. The number of components to test is also reduced because of down-sampling. This offers a straightforward approach to increasing the sensitivity of the analysis (lower detection threshold) while applying the standard Bonferroni correction for multiple statistical tests. We present experimental results to illustrate the procedure. In addition, we discuss filter design issues. In particular, we introduce a family of orthogonal filters such that any integer reduction m can be implemented as a succession of elementary reductions m_1 to m_p, where m = m_1···m_p is a prime-number factorization of m.
We present new quantitative results for the characterization of the L2-error of wavelet-like expansions as a function of the scale a. This yields an extension as well as a simplification of the asymptotic error formulas that have been published previously. We use our bound determinations to compare the approximation power of various families of wavelet transforms. We present explicit formulas for the leading asymptotic constant for both splines and Daubechies wavelets. For a specified approximation error, this allows us to predict the sampling-rate reduction that can be obtained by using splines instead of Daubechies wavelets. In particular, we prove that the gain in sampling density (splines vs. Daubechies) converges to π as the order goes to infinity.