It is often assumed that a static system with more inputs than outputs cannot be blindly identified, that is, identified only from the observation of its outputs and without any a priori knowledge of the source statistics beyond their independence. By resorting to high-order statistics, it turns out that static MIMO systems with fewer outputs than inputs can be identified, as demonstrated in the present paper. The principle, already described in a recent theoretical paper, had not yet been applied to a concrete blind identification problem. Here, in order to demonstrate its feasibility, the procedure is detailed in the case of a 2-sensor, 3-source mixture; a numerical algorithm is devised that blindly identifies a 3-input 2-output mixture. Computer results show its behavior as a function of the data length when the sources are QPSK-modulated signals, widely used in digital communications. Another algorithm is then proposed to extract the 3 sources from the 2 observations once the mixture has been identified. Contrary to the first algorithm, this one assumes that the sources have a known discrete distribution. Computer experiments are run in the case of three BPSK sources in the presence of Gaussian noise.
Most recent efforts on blind channel estimation and interference suppression for CDMA systems have concentrated on Direct-Sequence CDMA employing linear modulation. Several existing CDMA systems, however, do not conform to the above model, as they employ orthogonal modulation. For example, the reverse channel of the IS-95 cellular system employs 64-ary Walsh orthogonal modulation as a means of expanding the signal's bandwidth. Unfortunately, orthogonal modulation destroys the linear, time-invariant property of the baseband equivalent overall channel model and severely limits the applicability of common linear blind equalization techniques. In this paper, the structure of the Walsh codes is exploited in order to derive chip-rate receivers and channel estimation algorithms. It turns out that a partial pilot signal may be identified in the Walsh code structure, casting the problem as a semi-blind channel estimation task. Based on this observation, batch and adaptive methods are derived to identify the chip-rate channel. The design of optimal receivers based on the channel estimates is also studied.
Channel equalization and identification are key issues in improving wireless communications. It is known that the linearization of the GMSK modulation leads to a continuous-phase OQPSK which can be considered as a Minimum Shift Keying (MSK) modulation. Thus, methods of equalization and identification for channels driven by MSK-modulated signals are worth investigating. Most algorithms consider MSK signals as two independent white binary PAM staggered signals; this is not the case in our approach. Here, MSK signals are seen, after sampling at the baud rate, as colored complex discrete signals. Even if this view of MSK modulation is quite simple, it has never been utilized for the purpose of blind equalization. The particular statistics of such signals are studied, yielding an original closed-form analytical solution to blind equalization, both in the monovariate case and in the source separation problem. Simulations show good behavior of the algorithms in terms of bit error rate as a function of SNR, both in the case of blind equalization and of source separation.
The problem of blind equalization of MIMO communication channels is considered using the second-order statistics of the data. Such models arise when data received at a single sensor from multiple sources is fractionally sampled, or when an antenna array is used with or without fractional sampling. We focus on direct design of finite-length MMSE blind equalizers. We allow infinite impulse response channels. Our approaches also work when the 'subchannel' transfer functions have common zeros, so long as the common zeros are minimum-phase zeros. We only require that there exist a causal, stable left inverse to the MIMO transfer function and that the leading coefficient matrix of the MIMO channel impulse response have rank equal to the number of sources. The channel length or model orders need not be known. The sources are recovered up to a unitary mixing matrix and are further 'unmixed' using higher-order statistics of the data. An illustrative simulation example is provided.
Within the framework of transmitter-induced cyclostationarity, we propose to analyze a new scheme called the repetition-modulation (RM) scheme, which combines ideas proposed by Tsatsanis and Giannakis on the one hand and by Chevreuil and Loubaton on the other. The transformation at the emitter is such that the receiver has the structure of a two-sensor model in which the two FIR entries are, in the frequency domain, shifted by a parameter α. This key property is shown to make possible the blind identification of the unknown channel. A subspace-like method is proposed, and the statistical performance computed. Apart from this aspect, the capacity of the system is given in closed form as a function of α, and it is shown that, for a large range of this parameter, the capacity of the RM scheme is close to that of the standard system.
A least squares smoothing (LSS) approach is presented for the blind estimation of multiple-input multiple-output (MIMO) finite impulse response systems. By exploiting the isomorphic relation between the input and output subspaces and the code sequences, this geometrical approach identifies the channel from a specially formed least squares smoothing error of the channel output. LSS has the finite-sample convergence property: in the absence of noise, the channel is perfectly estimated from only a finite number of data samples. The adaptive implementation, referred to as the adaptive least squares smoothing (A-LSS) algorithm, has a fast convergence rate. A-LSS is order-recursive and can be implemented using lattice filters and systolic arrays. It has the advantage that, when the channel order varies, channel estimates can be obtained without structural change of the implementation. For uncorrelated input sequences, the proposed algorithm performs direct deconvolution as a byproduct.
Computationally efficient algorithms which perform speech activity detection have significant potential economic and labor-saving benefit by automating an extremely tedious manual process. In many applications, it is desirable to extract intervals of speech which are separated by segments of other signal types. In the past, algorithms have been developed which successfully discriminate between speech and one specific other signal type. Frequently, these algorithms fail when the specific non-speech signal is replaced by a different one. Typically, several signal-specific discriminators are blindly combined, with predictable negative results. Moreover, when a large number of discriminators is involved, dimension reduction is achieved using Principal Components, which optimally compresses signal variance into the fewest number of dimensions. Unfortunately, these new coordinates are not necessarily optimal for discrimination. In this paper we apply graphical tools to determine a set of discriminators which produce excellent speech vs. non-speech clustering, thereby eliminating the guesswork in selecting good feature vectors. This cluster structure provides a basis for a general multivariate speech vs. non-speech discriminator, which compares very favorably with the TALKATIVE speech extraction algorithm.
Searching for wideband short-duration chirps is an important issue in spectrum surveillance. We propose a method and apparatus, inspired by optical tomography, by which a 1D signal is converted to a 2D image. This image has the remarkable property that it may disclose discernible structure: a chirp in additive white Gaussian noise, even undersampled, may be detected. The process is inherently linear and may be easily implemented by parallel processing or through the construction of an opto-electronic device.
This paper presents an application of formal mathematics to create a high-performance, low-power architecture for time-frequency and time-scale computations, implemented in asynchronous circuit technology, that achieves significant power reductions and performance enhancements over more traditional approaches. Utilizing a combination of concepts from multivariate signal processing and asynchronous circuit design, a case study is presented dealing with a new architecture for the fast Fourier transform, an algorithm that requires globally shared results. Then, the generalized distributive law is presented as an important paradigm for advanced asynchronous hardware design.
We generalize the concept of the autocorrelation function to arbitrary physical variables and show how it can be used to define a local autocorrelation function. Using the local autocorrelation function we develop a new method to generate densities for arbitrary physical quantities. In addition, we show that the generalized autocorrelation function can be used to characterize functions with respect to a physical property.
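As a concrete instance of the construction, the familiar time-frequency case can be sketched in a few lines: there the local autocorrelation is R(t, τ) = s(t + τ/2) s*(t − τ/2), and Fourier transforming over the lag yields a density in time and frequency. The discrete sketch below (plain NumPy, with an illustrative chirp) is our own minimal illustration of this classical special case, not the paper's general construction for arbitrary physical variables.

```python
import numpy as np

def wigner(s):
    """Discrete Wigner-style density built from the local autocorrelation
    R[n, k] = s[n + k] * conj(s[n - k]); the FFT over the lag k gives a
    density in frequency at each time n (the classical time-frequency
    instance of the generalized construction)."""
    N = len(s)
    W = np.zeros((N, N))
    for n in range(N):
        kmax = min(n, N - 1 - n)           # lags that stay inside the record
        R = np.zeros(N, dtype=complex)
        for k in range(-kmax, kmax + 1):
            R[k % N] = s[n + k] * np.conj(s[n - k])
        W[n] = np.fft.fft(R).real          # R is Hermitian in k, so the FFT is real
    return W

# toy example: a linear chirp -- the ridge of W follows the instantaneous frequency
t = np.arange(256)
s = np.exp(1j * 2 * np.pi * (0.05 + 0.0005 * t) * t)
W = wigner(s)
```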
The question of what the joint and conditional time-frequency moments are in terms of the signal and spectral amplitudes and phases is considered. From these moments, a time-frequency density can be constructed using the method of maximum entropy. This technique is used to assess the plausibility of expressions recently derived for low-order conditional and joint moments. Also investigated is whether or not there is a lower bound on the local time-bandwidth product of a time-frequency density, as there is for the global case.
It is mathematically convenient to consider both positive and negative frequencies in signal representation. This idea is critically important to time-frequency analysis. Usually, however, one is presented with real signals. It is well known that the analytic signal is formed using the Hilbert transform. Some interesting and potentially useful relationships are developed in this paper for a signal and its Hilbert transform. Cross-Hilbert time-frequency distributions (TFDs) between a signal and its Hilbert transform are introduced. The relationships between TFDs of signals and cross-Hilbert TFDs of signals are examined. It is shown how interactions between these TFDs yield results which are confined to either the positive or negative frequency plane. Some results which may seem counterintuitive are pointed out. Finally, some interesting results which use the concepts developed for separation of mixed signals by manipulating the associated TFDs are presented.
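A minimal numerical illustration of the starting point, the analytic signal: scipy.signal.hilbert returns s + jH{s}, whose spectrum is one-sided. The test signal and tolerance below are illustrative choices.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
s = np.cos(2 * np.pi * 60 * t)            # a real signal

z = hilbert(s)                            # analytic signal s + j*H{s}
h = z.imag                                # the Hilbert transform H{s} itself

# One-sidedness: the negative-frequency half of the spectrum of z vanishes.
Z = np.fft.fft(z)
assert np.allclose(Z[len(Z) // 2 + 1:], 0.0, atol=1e-8)
```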
One of the key problems in high resolution, time-varying spectral analysis is the suppression of interference terms which can obscure the true location of auto components in the resulting time-frequency distribution (TFD). Commonly used reduced interference distributions tackle the problem with a properly chosen 2D low pass filter (kernel). A recently published novel approach uses alternative means to achieve the desired goal. The idea of the new method is to obtain an estimate of the cross terms from a given prior distribution based on the magnitude and location of its negative components. The estimate is constructed via an iterative projection method that guarantees that the resulting distribution is positive and satisfies the marginals. Even though the marginals are usually a desirable property of TFDs in general, they can impose an undesirably strong constraint on positive TFDs in particular. For these cases it is thus beneficial to relax the marginal constraints. In this paper we present a new method that does not require the incorporation of this constraint and thus leads to positive TFDs with reduced interference terms but without the restrictions due to the marginals.
A deconvolution technique is presented to estimate the evolutionary spectrum (ES) of nonstationary signals by deconvolving the blurring effects of the kernel function from bilinear time-frequency distributions (TFDs). The resulting ES has desirable properties such as positivity, higher resolution, and higher concentration in the time-frequency plane. The proposed algorithm is computationally more efficient than the recently proposed entropy-based deconvolution method. Unlike the entropy method, the new algorithm can be adapted to deconvolve TFDs other than the spectrogram.
In this paper, we present a new class of representations of signals in the time-frequency (TF) plane. These representations are complex-valued, linear, and satisfy reconstruction conditions whereby the signal and its complex spectrum may be uniquely reconstructed from their TF representation. These surfaces are generalizations of 1D linear transforms, with which they share many properties. The primary advantage of these representations is that the phase of the surface may be used to recover signal information which is not contained in real TF surfaces. Linearity guarantees that the cross-terms normally associated with TF distributions do not exist in these representations. Several examples of invertible surfaces are presented, and it is demonstrated that these surfaces agree with normal intuition. Finally, a method based on the phase gradient is proposed for modifying Fourier surfaces to produce representations which are more focused or more concentrated in time and frequency.
We present a new method for musical partial tracking in the context of musical transcription using a time-frequency Kalman filter structure. The filter is based upon a model for the evolution of partial behavior across a wide range of pitch from four brass instruments. Statistics are computed independently for the partial attributes of frequency and log-power first differences. We present observed power spectral density shapes, total powers, and histograms, as well as least-squares approximations to these. We demonstrate that a Kalman filter tracker using this partial model is capable of tracking partials in music. We discuss how the filter structure naturally provides quality-of-fit information about the data for use in further processing and how this information can be used to perform partial track initiation and termination within a common framework. We propose that a model-based approach to partial tracking is preferable to existing approaches, which generally use heuristic rules or birth/death notions over a small time neighborhood. The advantages include better performance in the presence of cluttered data and simplified tracking over missed observations.
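A minimal sketch of the tracking idea, assuming a scalar random-walk model for one partial's frequency (the paper's filter uses measured statistics for both frequency and log-power first differences); the noise variances q and r below are illustrative, and the normalized innovation stands in for the quality-of-fit signal mentioned above.

```python
import numpy as np

def track_partial(freq_obs, q=1e-4, r=1e-2):
    """Scalar Kalman filter for one partial's frequency track.
    Random-walk state model (first differences treated as white);
    returns the filtered track and the normalized innovations, which
    can drive track initiation/termination decisions."""
    x, P = freq_obs[0], 1.0
    track, innov = [], []
    for z in freq_obs[1:]:
        P = P + q                      # predict: frequency follows a random walk
        y = z - x                      # innovation
        S = P + r                      # innovation variance
        K = P / S                      # Kalman gain
        x = x + K * y                  # update state
        P = (1 - K) * P
        track.append(x)
        innov.append(y / np.sqrt(S))   # large values flag a poor fit
    return np.array(track), np.array(innov)

# toy data: a 440 Hz partial with slow vibrato plus measurement noise
n = np.arange(500)
obs = 440 + 3 * np.sin(2 * np.pi * n / 100) + np.random.randn(500) * 0.5
track, innov = track_partial(obs)
```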
Due to the nonstationary nature of vibrato notes, standard Fourier analysis techniques may not sufficiently characterize the partials of notes undergoing vibrato. Our study employs the modal distribution, a bilinear time-frequency representation, to analyze vibrato signals. Instantaneous frequency and amplitude values for each partial are extracted using Hilbert techniques applied to local neighborhoods of the time-frequency surface. We consider vibrato in violin and vocal performance. Our study confirms the presence of both amplitude modulation and frequency modulation in the partials of notes generated by each of these instruments, and provides a fine-grained analysis of these variations. In addition, we show that these instantaneous amplitude and frequency estimates can be incorporated into methods for synthesizing signals that perceptually resemble the original sampled sounds.
Digitizing the signals from a phased array at the element level is becoming more practicable through advances in technology. Adaptive beamforming schemes such as sample matrix inversion involve inverting a covariance matrix of dimension equal to the number of spatial channels. Since an element-digitized system can have several thousand elements, it becomes impractical to invert such large matrices. This paper presents a weight computation scheme for a fully digitized system. The clutter returns are first range-gated, and the adaptive process is then broken down into a cascade of smaller stages, or domains. Attenuation of the clutter is achieved using a spatial Wiener filter in each domain. From a simulation of an airborne radar at a high pulse repetition frequency, examples are given of adapted beampatterns and clutter output Doppler spectra within each domain. By choosing alias-free domains, a clutter-free output is obtained for the sidelobe clutter scenario investigated. A comparison is then given with the performance of a 32-subarray system, which does not have the necessary fine-scale control of the array weights to spatially null the clutter.
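For orientation, a minimal single-domain sketch of sample-matrix-inversion weights, the baseline whose inversion cost motivates the cascaded scheme above; the array size, diagonal loading level, and broadside steering vector are illustrative choices, not the paper's configuration.

```python
import numpy as np

def smi_weights(X, steering):
    """Sample-matrix-inversion beamformer weights w = R^-1 s / (s^H R^-1 s).
    X holds snapshots with shape (num_elements, num_snapshots); `steering`
    is the desired-direction steering vector. Diagonal loading keeps the
    inverse well conditioned when snapshots are scarce."""
    n_el, n_snap = X.shape
    R = X @ X.conj().T / n_snap                          # sample covariance
    R += 1e-3 * np.trace(R).real / n_el * np.eye(n_el)   # diagonal loading
    Ri_s = np.linalg.solve(R, steering)
    return Ri_s / (steering.conj() @ Ri_s)

# toy scenario: 16-element uniform line array, look direction broadside
n_el = 16
steer = np.ones(n_el, dtype=complex)                     # broadside steering vector
X = (np.random.randn(n_el, 200) + 1j * np.random.randn(n_el, 200)) / np.sqrt(2)
w = smi_weights(X, steer)
```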
Adaptive arrays do not naturally produce low sidelobes, except in the directions of interference; an additional constraint of some kind is required to maintain low sidelobes. Four systems are considered here, three using soft constraints and one using hard constraints. Two of the soft-constraint systems, though defined differently, are shown to be equivalent. These two methods require a matrix giving the total sidelobe 'power'; this matrix is derived and discussed. Some simulation results are presented.
Weighted Subspace Fitting (WSF) is a method of estimating signal parameters from a subspace of a matrix of received data. WSF was originally derived using the asymptotic statistics of sample eigenvectors. This paper presents a new approach to deriving statistically optimal WSF algorithms. The approach uses a formula called a 'subspace perturbation expansion', which shows how the subspaces of a finite-size matrix change when the matrix elements are perturbed. The perturbation expansion is used to derive an optimal WSF cost function for estimating directions of arrival in array signal processing. The resulting cost function is identical to that obtained using asymptotic statistics.
Taking the QR factorization of the covariance matrix M = X^H X raised to increasing integer powers is shown to be equivalent to the process of Orthogonal Iteration and to converge upon a diagonal matrix with the eigenvalues of M raised to the corresponding power as its diagonal entries. This is a consequence of M being Hermitian. In addition, whereas the eigenvalues of a matrix are not in general rank-revealing, those of M are, as they are the squares of the singular values of X. In this way the Row-Zeroing approach to rank-revealing QR factorization is no longer defeated by the rank-deficient matrix due to Kahan. A connection is also noted with Chan's RRQR algorithm, and a physical interpretation is developed for the function of Chan's permutation matrix. In practice, just a few steps of Orthogonal Iteration coupled with Row-Zeroing appear to be a very effective means of estimating the rank and signal subspace. The analytical error bounds upon the subspace estimate are much improved and, as a consequence, the diagonal value spectrum is sharpened, making thresholding easier and the R matrix much more naturally rank-revealing. Hence it is sufficient to perform Row-Zeroing merely upon M or M^2. As a result, insight is also provided into the LMI method favored by Nickel.
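A small numerical illustration of the equivalence, assuming synthetic rank-3 data: a few QR steps of orthogonal iteration on M = X^H X already separate the signal eigenvalues from the noise floor on the diagonal of R. The sizes and noise level are illustrative.

```python
import numpy as np

# Orthogonal iteration on a Hermitian covariance M = X^H X: repeatedly
# forming the QR factorization of M @ Q drives R toward a diagonal matrix of
# eigenvalues, and powers of M sharpen the gap between signal and noise
# values, making the diagonal of R naturally rank-revealing.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 8))  # rank-3 data
X += 1e-3 * rng.standard_normal((100, 8))                        # noise floor
M = X.conj().T @ X

Q = np.eye(8)
for _ in range(3):                  # a few steps suffice, as noted above
    Q, R = np.linalg.qr(M @ Q)

print(np.abs(np.diag(R)))           # three large values, then a sharp drop
```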
Quantification of individual magnetic resonance spectroscopy (MRS) signals, modeled as a sum of exponentially damped sinusoids, is possible using interactive nonlinear least-squares fitting methods which provide maximum likelihood parameter estimates, or using fully automatic, but statistically suboptimal, black-box methods. In kinetic experiments, consecutive time series of MRS spectra are measured in which some of the parameters are known to remain constant over time. The purpose of this paper is to show how the previously mentioned methods can be extended to the simultaneous processing of all spectra in the time series using this additional information shared between the spectra. We will show that this approach yields statistically better results than processing the different signals separately.
We present an O(n² log n) algorithm for finding all the eigenvalues of an n × n complex Hankel matrix.
We briefly review the signal processing architecture of a wireless MEMS sensor system for source detection, signal enhancement, localization, and identification. A blind beamformer, using only the measured data of randomly distributed sensors to form a sample correlation matrix, is proposed. The maximum power collection criterion is used to obtain array weights from the dominant eigenvector of the sample correlation matrix. An effective blind beamforming estimation of the time delays of the dominant source is demonstrated. Source localization based on a novel least-squares method for time delay estimation is also given. Array system performance based on analysis, simulation, and measured acoustic/seismic sensor data is presented. Applications of such a system to multimedia, intrusion detection, and surveillance are briefly discussed.
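A minimal sketch of the maximum-power-collection beamformer, assuming synthetic single-source data with unknown per-sensor gains; only the sample correlation matrix and its dominant eigenvector are used, as described above.

```python
import numpy as np

def blind_beamformer(X):
    """Maximum-power-collection weights: the dominant eigenvector of the
    sample correlation matrix formed from the raw sensor data X
    (shape: num_sensors x num_samples). No array geometry is assumed."""
    R = X @ X.conj().T / X.shape[1]        # sample correlation matrix
    vals, vecs = np.linalg.eigh(R)         # eigenvalues in ascending order
    return vecs[:, -1]                     # dominant eigenvector

# toy data: one strong source in noise across 8 randomly placed sensors
rng = np.random.default_rng(1)
src = rng.standard_normal(1000)
gains = rng.standard_normal(8)             # unknown per-sensor responses
X = np.outer(gains, src) + 0.1 * rng.standard_normal((8, 1000))
w = blind_beamformer(X)
y = w.conj() @ X                           # enhanced single-channel output
```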
We consider two general procedures for constructing the nearest approximation of a given matrix by one with any lower rank and any linear structure. Nearness can be measured in any matrix norm. Structured low-rank matrices arise in various applications, including image enhancement and model reduction. In practice, the empirical data collected in the matrix often either do not maintain the specified structure or do not induce the desirable rank. It is therefore an important task to search for the nearest structured lower-rank approximation of a given matrix. The techniques developed in this paper can easily be implemented for numerical computation. In particular, it is shown that the computations can be approached using efficient optimization packages. The special case of Toeplitz structure with the Frobenius matrix norm is discussed in detail to illustrate the ideas, and numerical tests are reported. The procedures developed herein can be generalized to consider a much broader range of approximation problems.
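The paper's two general procedures rely on optimization packages; for the Toeplitz/Frobenius special case, a common lightweight heuristic is alternating projection between the rank constraint and the structure constraint (Cadzow-style). The sketch below shows that heuristic, not the paper's procedures.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_low_rank(A, rank, iters=200):
    """Alternating projections for the Toeplitz/Frobenius special case:
    alternate between the closest rank-`rank` matrix (SVD truncation) and
    the closest Toeplitz matrix (averaging along each diagonal)."""
    B = A.copy()
    n = A.shape[0]
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(B)
        B = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # rank projection
        c = [np.mean(np.diag(B, k)) for k in range(-n + 1, n)]
        B = toeplitz(c[n - 1::-1], c[n - 1:])             # Toeplitz projection
    return B

A = np.random.randn(8, 8)
B = toeplitz_low_rank(A, rank=3)
```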
We present a new algorithm for finding specially structured vectors that span a null space. These vectors arise in direction finding applications for uniform rectangular antenna arrays.
In this paper we compare a new regularizing scheme based on the exponential filter function with two classical regularizing methods: Tikhonov regularization and a variant of truncated singular value regularization. The filter functions for the former methods are smooth, but for the latter discontinuous. These regularization methods are applied to the restoration of images degraded by blur and noise. The norm of the noise is assumed to be known, and this allows application of the Morozov discrepancy principle to determine the amount of regularization. We compare the restored images produced by the three regularization methods with optimal values of the regularization parameter. This comparison sheds light on how these different approaches are related.
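All three methods can be viewed as filtered SVD solutions differing only in the filter function φ(σ). A minimal sketch follows, with illustrative filter forms; in particular, the exponential filter is written here as φ(σ) = 1 − exp(−σ²/α²), one common form, which may differ in detail from the paper's definition, and the test problem is a toy Vandermonde system.

```python
import numpy as np

def filtered_solution(A, b, phi):
    """Regularized solution x = sum_i phi(s_i) * (u_i^T b / s_i) * v_i; the
    choice of filter function phi distinguishes the three methods."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ (phi(s) * (U.T @ b) / np.maximum(s, 1e-300))  # underflow guard

alpha = 1e-2
tikhonov    = lambda s: s**2 / (s**2 + alpha**2)        # smooth
exponential = lambda s: 1 - np.exp(-s**2 / alpha**2)    # smooth, sharper cutoff
tsvd        = lambda s: (s >= alpha).astype(float)      # discontinuous

# ill-conditioned toy problem (Vandermonde system with small noise)
n = 20
A = np.vander(np.linspace(0, 1, n), n, increasing=True)
x_true = np.sin(np.linspace(0, np.pi, n))
b = A @ x_true + 1e-8 * np.random.randn(n)

for phi in (tikhonov, exponential, tsvd):
    print(np.linalg.norm(filtered_solution(A, b, phi) - x_true))
```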
A study is made of a non-smooth optimization problem arising in adaptive optics, which involves the real-time control of a deformable mirror designed to compensate for atmospheric turbulence and other dynamic image degradation factors. One formulation of this problem yields a functional $f(U) = \sum_{i=1}^{n} \max_j \left[ (U^T M_j U)_{ii} \right]$ to be maximized over orthogonal matrices U for a fixed collection of n × n symmetric matrices M_j. We consider first the situation, which can arise in practical applications, where the matrices M_j are nearly pairwise commutative. Besides giving useful bounds, results for this case lead to a simple corollary providing a theoretical closed-form solution for globally maximizing f if the M_j are simultaneously diagonalizable. However, even here conventional optimization methods for maximizing f are not practical in a real-time environment. The general optimization problem is quite difficult and is approached using a heuristic Jacobi-like algorithm. Numerical tests indicate that the algorithm provides an effective means to optimize performance for some important adaptive-optics systems.
A new monopulse 3D imaging algorithm based on Prony's method is presented. The algorithm does not suffer from the down-range resolution limit and cross-range glint error of a traditional Fourier transform based imaging algorithm. Imaging errors due to thermal noise are analyzed for both algorithms. A simulation example is described.
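For reference, the 1D core of Prony's method, which underlies the claimed freedom from the Fourier resolution limit; the model order and test frequencies below are illustrative.

```python
import numpy as np

def prony_freqs(y, p):
    """Prony frequency estimation: fit the order-p linear prediction
    y[n] = -(a1*y[n-1] + ... + ap*y[n-p]) in least squares, then read
    frequencies (and dampings) off the roots of the prediction polynomial.
    Unlike the FFT, resolution is not limited to 1/N."""
    N = len(y)
    H = np.array([[y[n - k] for k in range(1, p + 1)] for n in range(p, N)])
    a = np.linalg.lstsq(H, -y[p:], rcond=None)[0]
    roots = np.roots(np.concatenate(([1.0], a)))
    return np.angle(roots) / (2 * np.pi), np.log(np.abs(roots))

# two noiseless tones 0.01 cycles/sample apart -- below the 1/64 FFT limit
n = np.arange(64)
y = np.exp(2j * np.pi * 0.12 * n) + 0.8 * np.exp(2j * np.pi * 0.13 * n)
freqs, damps = prony_freqs(y, p=2)
print(np.sort(freqs))   # approximately [0.12, 0.13]
```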
Imaging through a stochastically varying distorting medium, such as a turbulent atmosphere, requires multiple short-exposure frames to ensure maximum resolution of object features. Restoration methods are used to extract the common underlying object from the speckle images, and blind deconvolution techniques are required as typically there is little prior information available about either the image or the individual PSFs. A method is presented for multiframe restoration based on iterative blind deconvolution, which alternates between restoring the image and the PSF estimates. A maximum-likelihood approach is employed via the Richardson-Lucy (RL) method, which automatically ensures positivity and conservation of the total number of photons. The restoration is accelerated by applying a vector sequence extrapolation; the multiframe data is treated as a 3D volume and processed to produce a 3D stack of PSFs and a single 2D image of the object. The problem of convergence to an undesirable solution, such as a delta function, is addressed by weighting the number of image or PSF iterations according to how quickly each is converging; this leads to the asymmetrical nature of the algorithm. Noise artifacts are suppressed by using a damped RL algorithm to prevent overfitting of the corrupted data. Results are presented for real single-frame and simulated multiframe speckle imaging.
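A minimal sketch of the alternating RL updates on a single frame, using circular (FFT) convolution and no damping, weighting, or acceleration, all of which the full method adds; the toy object, blur, and iteration count are illustrative.

```python
import numpy as np

def rl_step(est, other, data):
    """One multiplicative Richardson-Lucy update of `est`, holding `other`
    fixed (circular convolution via FFT). The same update serves the image
    and, with roles swapped, the PSF; positivity is preserved automatically."""
    O = np.fft.fft2(other)
    blurred = np.real(np.fft.ifft2(np.fft.fft2(est) * O))
    ratio = data / np.maximum(blurred, 1e-12)
    return est * np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(O)))

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[24:28, 30:40] = 1.0
psf = np.zeros((64, 64)); psf[:5, :5] = 1 / 25.0        # unknown compact blur
data = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
data = np.maximum(data + 0.005 * rng.random((64, 64)), 0.0)

img_est = np.full((64, 64), data.mean())                # flat starting guesses
psf_est = np.full((64, 64), 1.0 / 64**2)
for _ in range(100):
    img_est = rl_step(img_est, psf_est, data)
    psf_est = rl_step(psf_est, img_est, data)           # roles swap for the PSF
    psf_est /= psf_est.sum()                            # keep unit photon gain
```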
Real-time adaptive optics is a means for enhancing the resolution of ground-based optical telescopes beyond the limits previously imposed by the turbulent atmosphere. One approach to linear performance modeling of closed-loop adaptive-optics systems involves calculating very large covariance matrices whose components can be represented by sums of Hankel-transform-based integrals. In this paper we investigate approximate matrix factorizations of discretizations of such integrals. Two different approximate factorizations based upon representations of the underlying Bessel function are given, the first using a series representation due to Ellerbroek and the second an integral representation. The factorizations enable fast methods for both computing and applying the covariance matrices. For example, in the case of an equally spaced grid, it is shown that applying the approximated covariance matrix to a vector can be accomplished using the derived integral-based factorization involving a 2D fast cosine transform and a 2D separable fast multipole method. The total work is then O(N log N), where N is the dimension of the covariance matrix, in contrast to the usual O(N²) matrix-vector multiplication complexity. Error bounds exist for the matrix factorizations. We provide some simple computations to illustrate the ideas developed in the paper.
We study the problem of reconstructing a high-resolution image from multiple undersampled, shifted, degraded frames with subpixel displacement errors. The corresponding reconstruction operator H is a spatially variant operator. In this paper, instead of using the usual zero boundary condition, the Neumann boundary condition is imposed on the images. The resulting discretization matrix of H is a block-Toeplitz-Toeplitz-block-like matrix. We apply the preconditioned conjugate gradient (PCG) method with a cosine transform preconditioner to solve the discrete problems. Preliminary results show that the image model under the Neumann boundary condition gives better reconstructed high-resolution images than that under the zero boundary condition, and the PCG method converges very fast.
In image restoration, a separable, spatially variant blurring function has the form k(x, y; s, t) = k1(x, s) k2(y, t). If this kernel is known, then discretization leads to a blurring matrix which is a Kronecker product of two matrices of smaller dimension. If k is not known precisely, such a discretization is not possible. In this paper we describe an interpolation scheme to construct a Kronecker product approximation to the blurring matrix from a set of observed point spread functions for separable, or nearly separable, spatially variant blurs. An approximate singular value decomposition is then computed from this Kronecker factorization.
Keywords: image restoration, interpolation, Kronecker product, space-variant blur, SVD
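A small numerical check of the separability structure the scheme exploits: for A = A1 ⊗ A2, the SVD follows directly from the factor SVDs, so an (approximate) Kronecker factorization immediately yields an (approximate) SVD. Matrix sizes here are illustrative.

```python
import numpy as np

# If A1 = U1 S1 V1^T and A2 = U2 S2 V2^T, then
# A1 kron A2 = (U1 kron U2)(S1 kron S2)(V1 kron V2)^T,
# so the singular values of the Kronecker product are all products s1[i]*s2[j].
rng = np.random.default_rng(0)
A1, A2 = rng.standard_normal((6, 6)), rng.standard_normal((6, 6))

U1, s1, V1t = np.linalg.svd(A1)
U2, s2, V2t = np.linalg.svd(A2)

sv_kron = np.sort(np.outer(s1, s2).ravel())[::-1]   # from the factor SVDs
sv_direct = np.linalg.svd(np.kron(A1, A2), compute_uv=False)
assert np.allclose(sv_kron, sv_direct)
```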
For the numerical solution of large linear systems, the preconditioned conjugate gradient algorithm can be very effective if one has a good preconditioner. Two distinctly different approaches to preconditioning are discussed for solving systems derived from continuous linear operators of the form K + αL, where K is a convolution operator, L is a regularization operator, and α is a small positive parameter. The first approach is circulant preconditioning. The second, less standard, approach is based on a two-level decomposition of the solution space. A comparison of the two approaches is given for a model problem arising in atmospheric image deblurring.
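A minimal sketch of the first approach for a 1D symmetric Toeplitz convolution operator: Strang's circulant approximation is diagonalized by the FFT, so applying C⁻¹ inside PCG costs two FFTs per iteration. The Gaussian kernel below is illustrative, not the paper's model problem.

```python
import numpy as np

def strang_eigs(first_col):
    """Eigenvalues of Strang's circulant approximation to a symmetric
    Toeplitz (convolution) matrix: copy the central band of the Toeplitz
    first column into a circulant first column, then diagonalize by FFT."""
    n = len(first_col)
    c = first_col.astype(float).copy()
    c[n // 2 + 1:] = first_col[1:n - n // 2][::-1]   # wrap the band circulantly
    return np.fft.fft(c)                             # eigenvalues of C

def apply_inv(eigs, r):
    """The preconditioning solve C z = r costs two FFTs."""
    return np.real(np.fft.ifft(np.fft.fft(r) / eigs))

# illustrative Gaussian convolution band as the first column of K
n = 64
t = np.exp(-0.5 * (np.arange(n) / 2.0) ** 2)
eigs = strang_eigs(t)
z = apply_inv(eigs, np.random.randn(n))              # one preconditioner application
```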
In certain applications, the discretization of 2D integral equations can lead to systems involving matrices with block-Toeplitz-Toeplitz-block structure. Iterative Krylov subspace methods are sometimes employed to find regularized solutions to the related 2D discrete ill-posed problems; however, preconditioners which filter noise are needed to speed convergence to regularized solutions. We describe a preconditioning technique based on the Toeplitz-type structure of the matrix which generalizes the approaches in (1) and (2) to take advantage of symmetry and real arithmetic operations. We use fast sine transforms to transform the original system to a system whose matrix has partially reconstructible Cauchy-like blocks. The preconditioner is a block-diagonal, rank-m approximation to this matrix, with Cauchy-like blocks each augmented by an identity of appropriate dimension. We note that the initialization cost is in general less than that for the similar 2D preconditioner in (2), which does not take advantage of symmetry. Several examples are given which illustrate the success of our preconditioned methods.
Optimal preconditioners are of paramount importance for CG-like methods since they make them converge superlinearly. In preceding papers, we proved that no preconditioner belonging to partially equimodular spaces is optimal for multilevel Toeplitz matrices, where the aforementioned class of spaces includes all the known and used trigonometric matrix algebras. Here we survey and refine these results by focusing our attention on the more difficult case in which the multilevel Toeplitz matrices are Hermitian.
CORDIC-based IIR digital filters are orthogonal filters whose internal computations consist of orthogonal transformations. These filters possess desirable properties for VLSI implementation such as regularity, local connections, low sensitivity to finite word-length effects, and freedom from limit cycles. Recently, fine-grain pipelined CORDIC-based IIR digital filter architectures have been developed which can perform the filtering operations at arbitrarily high sample rates at the cost of a linear increase in hardware complexity. These pipelined architectures consist of only Givens rotations and a few additions, which can be mapped onto CORDIC arithmetic based processors. However, in practical applications, implementations of Givens rotations using traditional CORDIC arithmetic are quite expensive. For example, for 16-bit accuracy, using a floating-point data format with a 16-bit mantissa and a 5-bit exponent, approximately 20 pairs of shift-add operations are required for one Givens rotation. In this paper, we propose an efficient implementation of pipelined CORDIC-based IIR digital filters based on fast orthonormal μ-rotations. Using this method, the Givens rotations are approximated by angles corresponding to orthonormal μ-rotations, which are based on the idea of CORDIC and can perform a rotation with a minimal number of shift-add operations. We present various methods of construction for such orthonormal μ-rotations. A significant reduction in the number of required shift-add operations is achieved. All types of fast rotations can be implemented as a cascade of only four basic types of shift-add stages. These stages can be executed on a modified floating-point CORDIC architecture, making the pipelined filter highly suitable for VLSI implementation.
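For orientation, the baseline CORDIC circular rotation that the μ-rotations improve upon, written as the usual shift-add iteration (in software, the 2⁻ⁱ scalings stand in for hardware shifts); the iteration count below is an illustrative choice.

```python
import numpy as np

def cordic_rotate(x, y, theta, iters=16):
    """Classic CORDIC circular rotation: rotate (x, y) by theta using only
    shift-and-add style operations. Each stage rotates by +/- atan(2^-i);
    the constant K compensates the accumulated magnitude gain."""
    K = np.prod(1.0 / np.sqrt(1 + 2.0 ** (-2 * np.arange(iters))))
    angles = np.arctan(2.0 ** (-np.arange(iters)))
    z = theta
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return K * x, K * y

# rotating (1, 0) by 30 degrees
xr, yr = cordic_rotate(1.0, 0.0, np.pi / 6)
print(xr, yr)   # ~ (cos 30, sin 30) = (0.8660, 0.5000)
```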
The objective of this paper is to demonstrate how, in a few seconds, a relatively simple ECG monitor, a PC, and advanced signal processing algorithms can pinpoint microvolt-level late potentials, which result from an infarct zone in the heart and are used as an indicator for identifying patients prone to ventricular tachycardia, which, if left untreated, leads to ventricular fibrillation. We characterize recorded ECG data obtained from the standard three vector electrodes during exercise in terms of their higher-order statistical features. Essentially, we use adaptive LMS- and Kalman-based second- and third-order Volterra filters to model the nonlinear low-frequency P and T waves and motion artifacts which might overlap with the QRS complex and lead to false-positive QRS detection. We illustrate the effectiveness of this new approach by mapping out bispectral regions with a strong bicoherence manifestation and showing their corresponding temporal/spatial origins. Furthermore, we present a few examples of our own application of these non-invasive techniques to illustrate what we see as their promise for the analysis of heart abnormalities.
An on-line audio classification and segmentation system is presented in this research, where audio recordings are classified and segmented into speech, music, several types of environmental sounds, and silence based on audio content analysis. This is the first step of our continuing work towards a general content-based audio classification and retrieval system. The extracted audio features include temporal curves of the energy function, the average zero-crossing rate, and the fundamental frequency of audio signals, as well as statistical and morphological features of these curves. The classification result is achieved through a threshold-based heuristic procedure. The audio database that we have built, details of the feature extraction, classification and segmentation procedures, and experimental results are described. It is shown that, with the proposed new system, audio recordings can be automatically segmented and classified into basic types in real time with an accuracy of over 90 percent. Outlines of further classification of audio into finer types and of a query-by-example audio retrieval system built on top of the coarse classification are also introduced.
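A minimal sketch of two of the listed features, assuming illustrative frame and hop sizes; thresholds on the statistics of such curves drive the heuristic classification described above.

```python
import numpy as np

def short_time_features(x, frame=1024, hop=512):
    """Short-time energy and average zero-crossing rate per frame, two of
    the temporal feature curves listed above."""
    energies, zcrs = [], []
    for start in range(0, len(x) - frame + 1, hop):
        w = x[start:start + frame]
        energies.append(np.mean(w ** 2))
        zcrs.append(np.mean(np.abs(np.diff(np.signbit(w).astype(np.int8)))))
    return np.array(energies), np.array(zcrs)

# toy clip: half a second of silence followed by half a second of a 440 Hz tone
fs = 16000
t = np.arange(fs // 2) / fs
x = np.concatenate([np.zeros(fs // 2), np.sin(2 * np.pi * 440 * t)])
energy, zcr = short_time_features(x)
is_silence = energy < 1e-6          # the simplest threshold-based decision
```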
An entirely new set of criteria for the design of kernels for time-frequency representations (TFRs) has recently been proposed. The goal of these criteria is to produce kernels which enable accurate classification without explicitly defining, a priori, the underlying features that differentiate individual classes. These kernels, which are optimized to discriminate among multiple classes of signals, are referred to as signal class-dependent kernels, or simply class-dependent kernels. Here this technique is applied to the problem of radar transmitter identification. Several modifications to our earlier approach have been incorporated into the processing and are detailed here. It will be shown that an overall classification rate of 100 percent can be achieved using our new augmented approach, provided exact time registration of the data is available. In practice, time registration cannot be guaranteed. Therefore, the robustness of our technique to data misalignment is also investigated. A measurable performance loss is incurred in this case. A method for mitigating this loss by incorporating our class-dependent methodology within the framework of classification trees is proposed.
The DVD read-only disk provides high-quality storage of audio-visual content and is especially suitable for the storage of movies. Stand-alone consumer playback devices are already available on the market, and wide-spread use of DVD-ROM is expected in the PC market as a PC peripheral. In the PC environment, the majority of MPEG-1 based players, such as Video CD players, are software players. Performance of such players is indistinguishable from hardware-based players. However, a software-only DVD player remains a challenging task. To tackle this problem we describe a DVD player that is based on a new architecture that allows high-quality playback of DVD material. The architecture allows software-only playback, as well as hardware-assisted DVD playback, while adding little to the total system cost.
This study develops and evaluates a new VHDL-based performance modeling capability for multiprocessor systems. The framework for this methodology involves modeling the following system aspects: processor characterization, task modeling, network characterization, and data set size. Initially, all aspects are specified at an abstract level, and eventually become specified at a detailed level through the process of verification and refinement of design assumptions. Processor characterization involves modeling the processor's speed, instruction set, and memory hierarchy. Task modeling is concerned with the execution time and instruction mix of software tasks within the system. Network characterization models bus protocols, topology, and bandwidths. Data set size refers to how much data is represented by the tokens used in the models. In this study, we applied and evaluated this methodology using both 2D and 3D IR search and track (IRST) algorithms. Two candidate processors were investigated: the IBM PowerPC 604 and the Texas Instruments TMS320C80. For the 2D IRST algorithm, abstract and detailed performance modeling results were obtained for both processors using partitioned-data and pipelined algorithmic approaches. For the 3D IRST algorithm, abstract performance models for pipelined and parallelized implementations on the PowerPC were developed. These models examined the feasibility of the implementations and the potential risk areas, and laid the groundwork for detailed performance modeling.
This paper addresses hardware design issues of a mobile receiver for future-generation direct-sequence CDMA wireless communication systems. In the design of a mobile unit, fixed-point hardware is an attractive alternative because of increased speed, reduced power consumption, and reduced hardware cost. In this paper, we focus on the fixed-point implementation of 'blind' algorithms; the wordlength requirements and the operation count required for implementation of such algorithms are evaluated. Our results show that blind maximum likelihood channel estimation along with the blind MMSE detection algorithm can achieve approximately a five-fold improvement in performance over conventional correlator-based receivers. These newer algorithms require slightly higher wordlength but similar computational complexity.
High-resolution spectral estimation is the resolution of spectral components that cannot be distinguished in the Fourier transform. This paper presents what is, in effect, an unusual technique for high-resolution spectral estimation. By canceling powerful and obscuring ocean clutter, weak ship targets are exposed in the short-time Fourier transform of a particular kind of radar data. This achieves the same result as high-resolution spectral estimation, but it also preserves the desirable properties of the Fourier transform. A specific clutter-canceling algorithm has been developed and successfully tested on data from the US Navy's Relocatable Over-the-Horizon Radar. A side-by-side comparison of this algorithm with Tufts-Kumaresan forward-backward linear prediction, a popular high-resolution spectral estimator, is presented. Although both techniques performed well at detecting the ship and estimating its Doppler frequency, the clutter cancellation technique seems to provide a better estimate of the ship power relative to the peak of the clutter. This property would be useful in target identification.
The design and implementation of a real-time super-resolution imaging processor was conducted as a benchmark for the Rapid Prototyping of Application Specific Signal Processors (RASSP) program. RASSP is a DARPA/Tri-services sponsored program aimed at accelerating the design process for digital electronic systems. The super-resolution subsystem is part of a Semi-Automated Image-intelligence Processing (SAIP) system. The benchmark project required a reduction in size and an increase in performance over the prototype system, which had been implemented on an array of workstations. The RASSP methodology was applied to guide the design process, and RASSP-related analysis tools were used to accelerate the software development. The numerical and run-time performance goals were achieved through a combination of software innovations in the imaging algorithm and porting to a high-density commercial off-the-shelf high-performance parallel processor. The selected processor consisted of four Alex Computer boards containing a total of 72 ADSP-21060 signal processors.
The aim of our work is to implement real-time, high-quality image rotation on an FPGA board. The method we use is based on M. Unser's work and consists of applying a B-spline interpolator. The difficulty of this problem stems from the relatively weak integration capacity of FPGAs. To address it, we sought to determine the minimum number of bits needed to code the filter coefficients while keeping good accuracy at the filter output. In this article, we recall a few definitions about B-spline functions and present how we use B-spline interpolation for the image rotation problem. Then, we describe how we calculate the probability density function of the output error in order to determine the filter data coding.
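The coefficient-coding question can be illustrated with a short Python sketch: quantize samples of the cubic B-spline kernel to a given number of fractional bits and measure the worst-case tap error. The tap positions and bit widths below are illustrative assumptions; the paper goes further and derives the wordlength from the probability density function of the output error.

    import numpy as np

    def bspline3(t):
        """Cubic B-spline kernel (the interpolation basis in Unser's approach)."""
        t = np.abs(t)
        return np.where(t < 1, 2/3 - t**2 + t**3 / 2,
               np.where(t < 2, (2 - t)**3 / 6, 0.0))

    def kernel_error(bits, taps=np.linspace(-2, 2, 17)):
        """Worst-case error from coding the taps on `bits` fractional bits."""
        c = bspline3(taps)
        q = np.round(c * 2**bits) / 2**bits
        return np.max(np.abs(c - q))

    for b in (6, 8, 10, 12):
        print(f"{b:2d}-bit coefficients: max tap error = {kernel_error(b):.2e}")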
Advanced Architectures and Implementations of Computer Arithmetic
High-speed elementary function generation is crucial to the performance of many DSP applications. This paper presents a new Composite Polynomial Method for generating elementary functions using second-order interpolation. The composite polynomial takes the average of two second-degree polynomials over an overlapping subinterval, and features the minimum approximation error in the overlapping subinterval. Its implementation uses small memory with minimal additional computational circuitry and represents a competitive design with respect to existing designs.
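A minimal numerical sketch of the composite idea, assuming least-squares quadratic fits in place of the paper's table-based design: two second-degree polynomials are fit on overlapping subintervals and averaged where they overlap, which already beats a single quadratic over the full interval.

    import numpy as np

    f = np.sin
    x = np.linspace(0.0, 1.0, 1001)

    def quad_fit(lo, hi):
        t = np.linspace(lo, hi, 64)
        return np.poly1d(np.polyfit(t, f(t), 2))

    p_lo, p_hi = quad_fit(0.0, 0.6), quad_fit(0.4, 1.0)   # overlap on [0.4, 0.6]
    y = np.where(x < 0.4, p_lo(x),
        np.where(x > 0.6, p_hi(x), 0.5 * (p_lo(x) + p_hi(x))))

    single = quad_fit(0.0, 1.0)
    print("composite max error:       ", np.max(np.abs(y - f(x))))
    print("single-quadratic max error:", np.max(np.abs(single(x) - f(x))))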
This paper deals with the computation of reciprocals, square roots, inverse square roots, and some elementary functions using small tables, small multipliers, and, for some functions, a final 'large' multiplication. We propose a method that allows fast evaluation of these functions in double precision arithmetic. The strength of this method is that the same scheme allows the computation of all these functions.
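The generic table-plus-multiplier pattern can be sketched as follows for the reciprocal; the table size and the Newton iteration are illustrative assumptions, not the paper's particular scheme (which notably reuses one structure across all the listed functions).

    import numpy as np

    TABLE_BITS = 7
    # Small seed table for 1/x, x in [1, 2): one entry per leading-bit pattern
    SEED = np.array([1.0 / (1.0 + (i + 0.5) / 2**TABLE_BITS)
                     for i in range(2**TABLE_BITS)])

    def reciprocal(x):
        """1/x for x in [1, 2): small table seed, then two Newton steps,
        each using only multiplications and an addition."""
        idx = int((x - 1.0) * 2**TABLE_BITS)
        r = SEED[idx]
        for _ in range(2):            # quadratic convergence: ~7 -> ~28 bits
            r = r * (2.0 - x * r)
        return r

    print(reciprocal(1.37), 1 / 1.37)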
We present a method, called the value-preserving (VP) method, for reducing the amount of work when computing the value of a function at regularly spaced points. The VP method uses the fact that if two argument values x and y have p common digits, then the values f(x) and f(y) computed with an on-line algorithm of delay δ have at least p − δ common digits. We discuss the evaluation of polynomials using the VP method and compare its performance with several traditional techniques.
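The digit-sharing property the VP method exploits is easy to demonstrate; the toy below works in decimal for readability, whereas the method itself operates on digit-serial on-line arithmetic, and the test values are illustrative.

    def common_digits(a, b):
        """Count the leading decimal digits two values share."""
        sa, sb = f"{a:.15f}", f"{b:.15f}"
        n = 0
        for ca, cb in zip(sa, sb):
            if ca != cb:
                break
            if ca.isdigit():
                n += 1
        return n

    p = lambda t: ((2 * t - 3) * t + 1) * t + 5    # Horner-form polynomial
    x, y = 1.23456789, 1.23456101                  # arguments sharing ~6 digits
    print(common_digits(x, y), "argument digits shared")
    print(common_digits(p(x), p(y)), "result digits shared")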
This paper describes how a w-bit prefix-type carry-lookahead adder may be modified to yield more than one result per operation. The modified adder, called a 'flagged prefix adder', adds two 2's-complement numbers to give a sum S = A + B but can return any one of the following results: S, S + 1, −(S + 1), or −(S + 2), as a function of a set of flag bits derived by the adder concurrently with the addition. Similarly, if the flagged prefix adder performs the 2's-complement subtraction S = A − B, it may return any one of S, S − 1, −S, or −(S + 1). Hence, the flagged prefix adder may be used to perform 'instant increment' or 'instant complement' operations. The extra logic required by the flagged prefix adder relative to the original prefix adder is 2w gates and w 2-to-1 multiplexers.
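A behavioral Python model of the flag mechanism, under the standard reading that the flag at bit i is the group propagate P[i−1:0] available from the prefix tree: XOR-ing the flags into S yields S + 1, and two's-complement identities (−(S + 1) = NOT S) give the remaining results. This is a functional sketch, not the paper's gate-level design.

    def flagged_prefix_add(a, b, w=8):
        """Flagged prefix adder, behaviorally: flag i = P[i-1:0] from the
        prefix tree; s XOR flags = S + 1; complements give -(S+1), -(S+2)."""
        mask = (1 << w) - 1
        s = (a + b) & mask
        p = (a ^ b) & mask                       # per-bit propagate
        flags, acc = 0, 1
        for i in range(w):
            flags |= acc << i                    # P over an empty range is 1
            acc &= (p >> i) & 1
        s1 = (s ^ flags) & mask                  # S + 1
        return s, s1, (~s) & mask, (~s1) & mask  # S, S+1, -(S+1), -(S+2)

    print(flagged_prefix_add(5, 9))   # (14, 15, 241, 240); 241 = -15 mod 256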
This paper discusses recent results in the area of merged arithmetic. The original sum-of-products technique has been extended to include add-multiply structures and cascaded multiplication. The merged implementations are generally less complex than conventional designs and offer equivalent or improved performance.
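A minimal sketch of the merged sum-of-products idea: the shifted partial-product rows of every constituent multiplication feed a single reduction, so only one carry-propagate addition is needed; Python's sum() below stands in for the carry-save tree and final adder.

    def merged_sop(pairs, w=8):
        """Merged arithmetic sketch: gather ALL partial-product rows from
        every multiplication in a sum of products into one reduction."""
        rows = []
        for a, b in pairs:
            for i in range(w):
                if (b >> i) & 1:
                    rows.append(a << i)   # one AND-array row of a*b, shifted
        return sum(rows)  # stands in for the carry-save tree + one final adder

    print(merged_sop([(3, 5), (7, 2), (4, 4)]))  # 3*5 + 7*2 + 4*4 = 45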
Divider algorithms can generally be separated into two kinds: multiplicative algorithms (MA) and iterative digit recurrence algorithms (IDRA). The number of iterations of an MA is proportional to the logarithm of the word length, while that of an IDRA is proportional to the word length itself. However, every iteration of an MA consists of two multiplications and one addition, while the iteration period of an IDRA consists of only one addition. The IDRA family includes the SRT approach, which does not require prescaling, and the GST approach, which requires prescaling of the operands. The iteration period of the GST is much smaller because the GST examines only three bits to predict the next quotient digit. For this reason, the overall latency of the GST divider is shorter. Alternatively, by fixing the latency, the supply voltage of the GST can be reduced, resulting in a low-power implementation. Thus, the GST is more suitable for low-latency and low-power applications.
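The contrast between the two families can be sketched in Python; the Newton-Raphson iteration and the restoring recurrence below are generic representatives of the MA and IDRA classes, not the paper's specific GST or SRT designs.

    import math

    def divide_multiplicative(a, b, w=53):
        """MA sketch (Newton-Raphson): b prescaled into [0.5, 1); each step
        is two multiplications and an addition, and the step count grows
        like log2 of the word length w."""
        x = 48/17 - (32/17) * b                 # classic linear seed, ~4.75 bits
        for _ in range(math.ceil(math.log2((w + 1) / 4.75))):
            x = x * (2.0 - b * x)               # quadratic convergence
        return a * x

    def divide_recurrence(a, b, w=16):
        """IDRA sketch (restoring recurrence): one add/subtract per quotient
        bit, so w iterations for a w-bit quotient."""
        q, r = 0, 0
        for i in range(w - 1, -1, -1):
            r = (r << 1) | ((a >> i) & 1)
            if r >= b:
                r, q = r - b, q | (1 << i)
        return q, r

    print(divide_multiplicative(1.0, 0.685), 1 / 0.685)
    print(divide_recurrence(100, 7))            # (14, 2)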
Custom computers comprising a host processor and FPGAs have been proposed to accelerate computationally complex problems. While the FPGA implementation may be considerably faster than its microprocessor counterpart, this performance acceleration can be degraded by the time needed to reconfigure the FPGA hardware. This paper demonstrates a technique for developing circuits that reduces the reconfiguration overhead. Circuits for three basic arithmetic functions (multiplication, division, and square root) have been developed using the Xilinx XC6200 reconfigurable FPGA family. Reconfiguration times have been measured by downloading the designs to the VCC HOTWorks custom computing board. A reduction in reconfiguration time of up to 75 percent has been demonstrated using this design approach.
Run-length limited RLL(0,k) modulation codes, or k-constrained codes, are used in optical and magnetic recording systems. In this paper, a new methodology for the construction of highly efficient modulation codes, and in particular new enhanced RLL(0,k) coding schemes, is presented. The new methodology generates modulation codes that are computationally simpler than existing ones, empowered with a partial error detection (PED) capability at the demodulator for improved error control reliability. An increased list of constraints is formed, rather than just constraints pertaining to d, k, and I. Even though concatenation of conventional RLL with ECC can reduce the effectiveness of the ECC, especially with a sliding block encoder/decoder subject to error propagation, a concatenated PED capability can boost the outer ECC performance. Note that in current systems using low-redundancy ECC, the overall rate is mainly determined by the modulation code rate, which must be kept high. RLL/PED code rates of 8/9, 16/17, 24/25 and 32/33 or higher are achievable. The proposed fixed-length block-decodable RLL/PED codes are generalized schemes of the type n/(n+1) with (d = 0, k = n − 1, I = n) and n/(n+1) with (d = 0, k = n/2, I = n), for integer n ≥ 5. The encoding/decoding/error-control equations and the global k- and interleave I-constraints are expressed as functions of n, so that fixed encoder/decoder/error-control architectures are obtained in terms of any adjustable word length n, and consequently any code rate n/(n+1). They are characterized by computational simplicity irrespective of the codeword length n or the code rate. The applicability of the proposed methodology to the construction of Maximum Transition Run codes is also addressed.
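For concreteness, the k-constraint itself can be checked with a few lines of Python; the code words below are illustrative, and the paper's block encoders, PED equations, and interleave constraints are not reproduced here.

    def satisfies_rll0k(bits, k):
        """Check the RLL(0, k) constraint: no run of more than k consecutive
        zeros anywhere in the encoded stream."""
        run = 0
        for b in bits:
            run = 0 if b else run + 1
            if run > k:
                return False
        return True

    # A rate-8/9 code word (n = 8) must satisfy k = n - 1 = 7 under the
    # first scheme above.
    print(satisfies_rll0k([1, 0, 0, 0, 0, 0, 0, 0, 1], k=7))   # True
    print(satisfies_rll0k([0] * 9, k=7))                       # False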
Higher data capacity demands and lower interference requirements in the wireless communications arena are driving the use of higher carrier frequencies and wider modulation bandwidths. Circuitry that can perform intermediate-frequency processing over these more demanding ranges is needed to provide complex signal processing without commercial penalties. Photonics technologies utilizing Bragg Grating Signal Processing (BGSP) can bridge the gap between the very high frequency RF millimeter-wave integrated circuit domain at the antenna interface and the CMOS digital signal processors at the baseband frequency interface. The desirable benefits of multiple-tap adaptive finite impulse response (FIR) and infinite impulse response filters and equalizers are well known; however, they are usually the province of digital signal processing, and the sample rates required ahead of these processors drive overall system power consumption to a higher level. BGSP provides these functions with discrete taps and digital controls, but at the bandwidths usually reserved for RF circuitry, because the actual processing occurs at optical frequencies and at wavelengths that are compatible with integrated circuit technologies. The high-performance benefits of photonic processing can be realized if the stability control of the Bragg grating is derived from the same metric that gives photonics its sensitivity to drift. We present an orthogonally coded tap modulation technique that stabilizes the transfer function of the signal processor and enables significant adaptive IF signal processing with very low size, weight, and power. Our proof-of-concept photonic architecture is a reconfigurable multiple-tap FIR filter that is dynamically controlled to perform low-pass, high-pass, band-pass, and band-stop filtering over bandwidths of 3 GHz.
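The tapped-delay-line behavior that the gratings implement optically can be mimicked numerically; the windowed-sinc tap weights below are ordinary illustrative coefficients, with none of the photonic bandwidth or stabilization issues the paper addresses.

    import numpy as np

    def fir_response(taps, nfft=512):
        """Magnitude response of a tapped-delay-line filter; in the photonic
        version each tap weight is set by a Bragg-grating reflection."""
        return np.abs(np.fft.rfft(taps, nfft))

    n = np.arange(16)
    lowpass  = np.sinc((n - 7.5) / 4) * np.hamming(16)    # windowed-sinc taps
    bandpass = lowpass * np.cos(np.pi * 0.5 * (n - 7.5))  # modulate taps to retune
    print(fir_response(lowpass)[:8].round(2))
    print(fir_response(bandpass)[:8].round(2))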
Advanced Architectures and Implementations of Computer Arithmetic
We present a wavelet transform implementation approach using an FIR filter bank that employs a Wallace tree structure for fast multiplication. VHDL models targeted specifically for synthesis have been written for the clocked data registers, adders, and the multiplier. Symmetric wavelets, such as biorthogonal wavelets, can be implemented using this design, and different wavelet decompositions may be implemented by changing the input filter coefficients. The design is mapped onto the ORCA series FPGA after synthesis and optimization for timing and area.
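A one-level software analogue of such a filter bank, using the LeGall 5/3 biorthogonal analysis pair as an example of the symmetric coefficients the design accepts (the Wallace tree multiplier is a hardware detail with no software counterpart here):

    import numpy as np

    # LeGall 5/3 analysis pair, a common symmetric biorthogonal choice
    h = np.array([-1, 2, 6, 2, -1]) / 8.0    # low-pass
    g = np.array([-1, 2, -1]) / 2.0          # high-pass

    def analyze(x):
        """One decomposition level: filter, then keep every other sample.
        Loading other coefficients changes the wavelet, as in the design."""
        return np.convolve(x, h)[::2], np.convolve(x, g)[::2]

    approx, detail = analyze(np.sin(np.linspace(0, 4 * np.pi, 64)))
    print(len(approx), len(detail))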
In this paper, the well-known notch filtering technique for interference excision in direct sequence spread spectrum communications is extended to remove the constraint that the instantaneous frequency (IF) of the jammer must be constant over the filter duration. The time-varying difference equation representation of a polynomial phase signal is used to define a new filter impulse response that can effectively remove any polynomial phase interfering signal. It is shown that this approach, when applied to jammers with the constant modulus property, is more efficient than existing excision methods implementing notch filtering. This paper focuses specifically on chirp signals, and provides both receiver SNR and bit error rate curves, showing the improved performance over similar approaches based on instantaneous frequency information.
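The core idea for the chirp case can be sketched in a few lines: a two-tap notch whose second tap is rotated by the jammer's phase increment nulls the chirp sample by sample. The jammer phase is assumed known here, whereas a receiver would have to estimate it, and the signal parameters are illustrative.

    import numpy as np

    n = np.arange(1024)
    phi = 2 * np.pi * (0.01 * n + 0.0002 * n**2)    # quadratic phase: a chirp jammer
    x = np.exp(1j * phi) + 0.05 * np.exp(2j * np.pi * 0.13 * n)  # jammer + weak signal

    # Time-varying two-tap notch: the second tap tracks the jammer's IF, so
    # the filter nulls the chirp at every sample instead of one fixed frequency.
    rot = np.exp(1j * np.diff(phi))
    y = x[1:] - rot * x[:-1]

    print("input power   :", np.var(x).round(3))
    print("residual power:", np.var(y).round(3))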
In this report, we propose combining the total variation (TV) denoising method with high-loss wavelet compression for images with high noise levels. Numerical experiments show that TV denoising can bring more wavelet coefficients closer to zero, thereby making the compression more efficient, while the salient features of the image can still be retained.
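A one-dimensional Python sketch of the proposed combination, assuming a simple gradient-descent TV minimization and one-level Haar detail coefficients in place of the paper's actual solvers and coder: denoising first leaves fewer above-threshold coefficients to code.

    import numpy as np

    def tv_denoise(f, lam=0.5, tau=0.02, iters=500, eps=1e-2):
        """Gradient descent on 0.5*||u - f||^2 + lam * (smoothed) total variation."""
        u = f.copy()
        for _ in range(iters):
            d = np.diff(u)
            w = d / np.sqrt(d * d + eps)             # smoothed sign of the jumps
            div = np.concatenate(([w[0]], np.diff(w), [-w[-1]]))
            u -= tau * ((u - f) - lam * div)
        return u

    rng = np.random.default_rng(0)
    clean = np.repeat([0.0, 1.0, -0.5, 0.5], 64)     # piecewise-constant signal
    noisy = clean + 0.3 * rng.standard_normal(clean.size)

    def haar_nonzeros(x, thresh=0.1):
        d = (x[0::2] - x[1::2]) / np.sqrt(2)         # one-level detail band
        return np.count_nonzero(np.abs(d) > thresh)

    print("detail coeffs kept, noisy   :", haar_nonzeros(noisy))
    print("detail coeffs kept, denoised:", haar_nonzeros(tv_denoise(noisy)))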