The concept of modulation frequency is shown to provide valuable insight into time-frequency transforms for audio coding. A two-dimensional transform, whose second dimension approximately decomposes the audio signal into modulation frequencies, is proposed. Applied to audio coding, this transform provides high quality at low data rates and adapts gracefully to changes in available bandwidth. It is inherently scalable, meaning that channel conditions can be matched without additional computation. Moreover, it is compact: in subjective tests our algorithm, coded at 32 kilobits/second/channel, outperformed MPEG-1 Layer 3 (MP3) coded at 56 kilobits/second/channel (both at 44.1 kHz). This potentially useful result motivates the need for further insight into the definition and analysis of modulation frequency. We thus define modulation frequency for a simple narrowband signal, propose a general bilinear framework for its detection, and then propose a minimal set of conditions to extend this definition to broadband signals such as audio.
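The abstract does not specify the two-dimensional transform; a common realization of the idea is a modulation spectrogram, in which an STFT is followed by a second Fourier decomposition along the time axis of each subband envelope. The following is a minimal sketch of that two-stage structure, not the authors' coder; all names and parameters are illustrative (Python/NumPy):

    import numpy as np

    def modulation_spectrogram(x, fs, win=1024, hop=256, mod_win=64):
        """Two-stage transform: STFT, then an FFT along time per subband.

        Stage 1: the magnitude STFT gives subband envelopes.
        Stage 2: an FFT of each envelope resolves modulation frequency.
        """
        w = np.hanning(win)
        n_frames = 1 + (len(x) - win) // hop
        frames = np.stack([x[i*hop:i*hop+win] * w for i in range(n_frames)])
        stft = np.fft.rfft(frames, axis=1)          # (frame, acoustic freq)
        env = np.abs(stft)                          # subband envelopes
        n_blocks = env.shape[0] // mod_win
        env = env[:n_blocks * mod_win].reshape(n_blocks, mod_win, -1)
        mod = np.abs(np.fft.rfft(env, axis=1))      # (block, mod freq, acoustic freq)
        return mod, fs / hop                        # envelope sampling rate

    # A 1 kHz tone amplitude-modulated at 8 Hz concentrates its energy near
    # 8 Hz on the modulation-frequency axis of the 1 kHz subband.
    fs = 44100
    t = np.arange(2 * fs) / fs
    x = (1 + 0.5 * np.cos(2*np.pi*8*t)) * np.cos(2*np.pi*1000*t)
    mod, mod_rate = modulation_spectrogram(x, fs)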
Until a few years ago, measurement of human skin and the analysis of its data were limited to profiles of surface imprints. New measurement devices based on image processing now allow whole areas of skin to be measured on the living person, i.e. in vivo. The analysis of human skin topography is therefore changing, and the preprocessing of raw measurement data is being extended to two dimensions. To characterize the skin and its reaction to external influences, innovative techniques can be used, such as the regularization dimension, a parameter similar to the fractal dimension. New transforms such as the wavelet and wavelet-packet transforms can also be used; these divide the signal into different frequency bands while preserving spatial resolution and directional information. After a short introduction, this paper compares classical filtering methods with the wavelet transform as a new preprocessing algorithm in its second part. External influences on human skin are then characterized with classical parameters. In the third part, the wavelet-packet transform is used to analyze the data and the reaction of the skin in different frequency bands, to investigate the effects in more detail, and to show some advantages of this transform. A short conclusion sums up the results.
We use the Wigner distribution to study pulse propagation in dispersive media and show that it leads naturally to a particle view. Using the results obtained, we develop a simple approximation method that evolves a pulse in time.
Time-frequency techniques have been used successfully in the analysis of non-stationary signals. Several approaches have been proposed that address concerns such as time-frequency (TF) resolution and the elimination of cross-terms. In this work, a TF technique based on Spatially Variant Apodization (SVA) is introduced that focuses on the detection of non-stationary signals consisting of several components of different amplitudes. The SVA approach is applied to the Short-Time Fourier Transform (STFT) to detect low-intensity components that are buried in the high sidelobes of other components. Resolution using SVA is better than that obtained using the STFT with non-rectangular windows. Synthesis can be performed using the overlap-add method. Because of how SVA is implemented, the modified STFT using sidelobe apodization can achieve good resolution, detect low-intensity components, and show no cross-terms in the TF plane, provided that stationarity can be assumed over an appropriate STFT window length.
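SVA itself is not detailed in the abstract. In its classical form (due to Stankwitz et al.), each DFT bin chooses its own raised-cosine window weight so as to minimize its output magnitude, applied separately to the real and imaginary parts; a = 0 corresponds to the rectangular window and a = 0.5 to the Hann window. A minimal per-frame sketch under those assumptions (Python/NumPy; circular edge handling kept for brevity):

    import numpy as np

    def sva(X):
        """Spatially variant apodization of one DFT frame X.

        For each bin k, Y(k) = X(k) + a*(X(k-1) + X(k+1)), with a chosen
        per bin in [0, 0.5] to minimize |Y(k)|, independently for the
        real and imaginary parts.
        """
        def apodize(x):
            s = np.roll(x, 1) + np.roll(x, -1)
            with np.errstate(divide='ignore', invalid='ignore'):
                a = np.where(s != 0, -x / s, 0.0)
            return np.where(a <= 0.0, x,               # rectangular is best
                   np.where(a >= 0.5, x + 0.5 * s,     # clamp at Hann
                            0.0))                      # exact null reachable
        return apodize(X.real) + 1j * apodize(X.imag)

    # Applied to the FFT of each rectangular-windowed STFT frame, this keeps
    # the rectangular window's mainlobe width while suppressing sidelobes.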
We review recent work on defining the time-frequency moments of a signal. Expressions are given for moments of all orders, in terms of the amplitude and phase of the signal and spectrum. Knowing the time-frequency moments is of interest for a variety of reasons, including their potential utility as features for classification of nonstationary signals, and also because from the moments one can construct the time-varying spectral density, or approximate it using a few moments.
We describe a fast and efficient algorithm for automatic detection and estimation of the fundamental frequency F0 of a harmonic time-domain signal. The method is based on differentiation of the short time Fourier transform (STFT) phase, which is implemented as a cross-spectral product. In estimating and isolating the fundamental frequency, several enhancement processes are developed and applied to the TF surface to improve the signal quality. We describe the algorithm in detail and demonstrate the processing gain achieved at each step. In addition, we apply the algorithm to human speech to recover the pitch fundamental F0 and report the evaluation of the algorithm's performance on the Western Michigan vowel corpus.
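The cross-spectral product mentioned above is a standard way to differentiate STFT phase: multiplying each bin of a frame by the conjugate of the same bin one hop earlier gives a phase increment whose angle yields the instantaneous frequency in that bin. A minimal sketch of just that step, with a crude peak pick standing in for the paper's enhancement chain (Python/NumPy; parameters illustrative):

    import numpy as np

    def instantaneous_freq(x, fs, win=2048, hop=256):
        """Per-bin instantaneous frequency from the cross-spectral product
        of consecutive STFT frames (differentiation of STFT phase)."""
        w = np.hanning(win)
        n = 1 + (len(x) - win) // hop
        X = np.stack([np.fft.rfft(x[i*hop:i*hop+win] * w) for i in range(n)])
        cross = X[1:] * np.conj(X[:-1])          # cross-spectral product
        k = np.arange(X.shape[1])
        expected = 2 * np.pi * hop * k / win     # phase advance of bin centre
        dev = np.angle(cross * np.exp(-1j * expected))
        return (expected + dev) * fs / (2 * np.pi * hop), np.abs(X[1:])

    # Crude F0 estimate: refined frequency of the strongest bin below 300 Hz.
    fs = 16000
    t = np.arange(fs) / fs
    x = np.sin(2*np.pi*220*t) + 0.5 * np.sin(2*np.pi*440*t)
    f, mag = instantaneous_freq(x, fs)
    lo = slice(1, int(300 * 2048 / fs))
    print(f[10, lo][np.argmax(mag[10, lo])])     # ~220 Hz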
Signals used in time-frequency analysis are usually corrupted by noise; denoising the time-frequency representation is therefore necessary for producing readable time-frequency images. Denoising is the operation of smoothing a noisy signal or image to produce a noise-free representation. Linear smoothing of time-frequency distributions (TFDs) suppresses noise at the expense of considerable smearing of the signal components. For this reason, nonlinear denoising has been preferred; a common example is wavelet thresholding. In this paper, we introduce an entropy-based approach to denoising time-frequency distributions. This new approach uses the spectrogram decomposition of time-frequency kernels proposed by Cunningham and Williams. In order to denoise the time-frequency distribution, we combine the spectrograms with the smallest entropy values, ensuring that each spectrogram is well concentrated on the time-frequency plane and contains as little noise as possible. The Renyi entropy is used to quantify the complexity of each spectrogram. The threshold for the number of spectrograms to combine is chosen adaptively, based on the tradeoff between entropy and variance. Denoised time-frequency distributions for several signals are shown to demonstrate the effectiveness of the method, and the improvement in performance is quantitatively evaluated.
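The complexity measure referred to above is the Renyi entropy of order alpha (alpha = 3 is the usual choice in time-frequency work) of the normalized distribution. A minimal sketch of the measure itself (Python/NumPy):

    import numpy as np

    def renyi_entropy(tfd, alpha=3):
        """Renyi entropy, in bits, of a non-negative time-frequency
        distribution: R = log2(sum P^alpha) / (1 - alpha), with P the
        distribution normalized to unit sum. Lower values indicate a
        more concentrated, less noisy surface."""
        P = tfd / tfd.sum()
        return np.log2(np.sum(P ** alpha)) / (1 - alpha)

Ranking the candidate spectrograms by this value and keeping the smallest is the selection step described above; the adaptive threshold then decides how many of the ranked spectrograms are combined.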
Renyi information was introduced to time-frequency analysis by Williams et al. at SPIE in 1991. The Renyi measure provides a single objective indication of the complexity of a signal as reflected in its time-frequency representation. The Gabor logon is the minimum-complexity signal, and its informational value is zero bits; all other signals exhibit increased Renyi information. Certain time-frequency distributions are information invariant, meaning that their Renyi information does not change under time shifts, frequency shifts, and scale changes. The Reduced Interference Distributions are information invariant, so a given signal within that class will always yield the same Renyi result. This can be used to survey large data sequences in order to isolate certain types of signals; one application is to extract instances of such a signal from a streaming RID representation. Examples for temporomandibular joint clicks are provided.
We address two classical problems relating to harmonic signals. The first is the blind recovery of the carrier of a single-sideband AM communication signal; the second is the isolation and blind estimation of the fundamental of a time-varying harmonic signal. The methods are based on cross-spectra estimated from the short-time Fourier transform, a generalization of the Chinese remainder theorem, and joint Fourier and autocorrelation representations of the signal spectrum. These tools are developed, and their utility is demonstrated in the solutions of the two problems. By an additional application of a frequency-lag autocorrelation function, it is demonstrated that the harmonic fundamental can be recovered even if it is not present in the original spectrum.
We present a method for writing the differential equation of the smoothed Wigner distribution that corresponds to the solution of an ordinary linear differential equation. The method can be applied to any linear ordinary differential equation with constant or time-varying coefficients.
Reliable monitoring methods are essential for maintaining a high level of quality control in laser welding. In industrial processes, monitoring systems allow quick decisions on weld quality, enabling high production rates and reducing the overall cost due to scrap. A monitoring system using infrared, ultraviolet, audible sound, and acoustic emission sensors was implemented for monitoring CO2 laser welds in real time. The signals were analyzed using time-frequency techniques: the time-frequency distribution with the Choi-Williams kernel was calculated, and the resulting distributions were analyzed using the Renyi information measure. Results for porosity monitoring showed that the acoustic emission sensor held the most promise, with 100% classification in two weld studies. These encouraging results led to a second study on monitoring weld penetration, in which the infrared, ultraviolet, and audible sound sensors showed the most promise, with 100% classification on both laboratory and industrial data.
The optimization of algorithms for self-timed or asynchronous circuits requires specific solutions. Because of the variable-time capabilities of asynchronous circuits, the average computation time should be optimized, not only the worst case of signal propagation. While efficient algorithms and implementations are known for asynchronous addition and multiplication, only straightforward algorithms have been studied for division. This paper compares several digit-recurrence division algorithms in terms of speed, area, and circuit activity (for estimating power consumption). The comparison is based on simulations of the different operators described at the gate level. This work shows that the best solutions for asynchronous circuits are quite different from those used in synchronous circuits.
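For reference, the simplest member of the digit-recurrence family compared here is radix-2 restoring division, producing one quotient bit per iteration. A behavioural sketch (Python), with none of the asynchronous-circuit considerations the paper is actually about:

    def restoring_divide(n, d, bits=8):
        """Radix-2 restoring division of unsigned integers via the
        recurrence r <- 2r + next bit; subtract d when possible."""
        assert 0 <= n < (1 << bits) and d > 0
        q = r = 0
        for i in range(bits - 1, -1, -1):
            r = (r << 1) | ((n >> i) & 1)   # shift in next dividend bit
            if r >= d:                      # trial subtraction succeeds
                r -= d
                q |= 1 << i                 # quotient bit is 1
        return q, r                         # n == q*d + r with 0 <= r < d

    assert restoring_divide(200, 7) == (28, 4)

In an asynchronous implementation, the data-dependent trial subtraction is precisely where average-case (rather than worst-case) timing pays off.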
Symmetric table addition methods (STAMs) approximate functions by performing parallel table lookups, followed by multioperand addition. STAMs require significantly less memory than direct table lookups and are faster than piecewise linear approximations. This paper investigates the application of STAMs to the sigmoid function and its derivative, which are commonly used in artificial neural networks. Compared to direct table lookups, STAMs require between 23 and 41 times less memory for sigmoid and between 24 and 46 times less memory for sigmoid's derivative, when the input operand size is 16 bits and the output precision is 12 bits.
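The source of the memory saving is that one large table indexed by all input bits is replaced by smaller tables indexed by overlapping bit fields whose outputs are summed. Below is a bipartite-table sketch for the sigmoid, the simplest two-table relative of STAM; the field widths, input range, and precision are illustrative, not the paper's (Python):

    import math

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    # A 12-bit input split into three 4-bit fields, mapped onto [0, 8).
    N0 = N1 = N2 = 4
    LO, HI = 0.0, 8.0
    STEP = (HI - LO) / (1 << (N0 + N1 + N2))

    # Table A: sigmoid at the midpoint of the low field's range, indexed
    # by the two upper fields (256 entries).
    A = [sigmoid(LO + (h * (1 << N2) + (1 << N2) / 2) * STEP)
         for h in range(1 << (N0 + N1))]

    # Table B: slope correction indexed by the top and bottom fields only
    # (256 entries); sharing one slope per top field across all middle
    # fields is what shrinks the memory versus a direct 4096-entry table.
    def slope(x):
        return sigmoid(x) * (1.0 - sigmoid(x))
    B = [[slope(LO + (x0 << (N1 + N2)) * STEP)
          * ((x2 + 0.5) - (1 << N2) / 2) * STEP
          for x2 in range(1 << N2)] for x0 in range(1 << N0)]

    def stam_sigmoid(x):
        u = int((x - LO) / STEP)
        return (A[u >> N2]                                 # parallel lookup 1
                + B[u >> (N1 + N2)][u & ((1 << N2) - 1)])  # lookup 2, add

    print(abs(stam_sigmoid(1.37) - sigmoid(1.37)))         # roughly 1e-3 or below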
This paper presents the design and evaluation of on-line arithmetic modules for the most common operators used in DSP applications, using FPGAs as the target technology. The designs are highly optimized for the target technology and for the range of precision common in DSP. The results are based on experimental data collected using CAD tools. All designs are synthesized for the same type of device (Xilinx XC4000) for comparison, avoiding rough estimates of system performance and yielding a more reliable and detailed comparison of on-line signal processing solutions with other state-of-the-art approaches, such as distributed arithmetic. We show that on-line designs are at a disadvantage for basic DSP applications that use only addition and multiplication. However, we also show that on-line designs can overtake other approaches as applications become more sophisticated, e.g. when data dependencies exist or when non-constant multiplicands restrict the use of other approaches.
A new and efficient number-theoretic algorithm for evaluating the signs of determinants is proposed. The algorithm uses computations over small finite rings. It is aimed at a variety of computational geometry problems, where the need to evaluate the signs of determinants of small matrices often arises.
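The abstract leaves the algorithm unspecified; a standard way to use small finite rings for this task is to compute the determinant modulo several small primes and recover the signed value through the Chinese remainder theorem, with Hadamard's bound indicating when enough primes have been used. A sketch of that idea for integer matrices, offered only as an illustration of the approach, not as the paper's algorithm (Python):

    import math

    def det_mod_p(M, p):
        """det(M) mod a prime p, by Gaussian elimination over Z_p."""
        A = [[x % p for x in row] for row in M]
        n, det = len(A), 1
        for c in range(n):
            piv = next((r for r in range(c, n) if A[r][c]), None)
            if piv is None:
                return 0
            if piv != c:
                A[c], A[piv] = A[piv], A[c]
                det = -det % p
            det = det * A[c][c] % p
            inv = pow(A[c][c], p - 2, p)        # Fermat inverse
            for r in range(c + 1, n):
                f = A[r][c] * inv % p
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[c])]
        return det

    def det_sign(M, primes=(10007, 10009, 10037, 10039)):
        """Sign of det(M): CRT over small primes up to Hadamard's bound."""
        bound = math.prod(math.isqrt(sum(x * x for x in row)) + 1 for row in M)
        prod, r = 1, 0
        for p in primes:
            t = (det_mod_p(M, p) - r) * pow(prod, p - 2, p) % p
            r, prod = r + prod * t, prod * p    # CRT merge of residues
            if prod > 2 * bound:                # residues now determine det
                break
        if r > prod // 2:
            r -= prod                           # map to the symmetric range
        return (r > 0) - (r < 0)

    assert det_sign([[2, 1], [7, 4]]) == 1      # det = +1
    assert det_sign([[1, 2], [2, 4]]) == 0      # singular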
We present an algorithm for computing correctly rounded exponentials in double-precision floating-point arithmetic. The algorithm is based on floating-point operations in the widespread IEEE-754 standard, and is therefore more efficient than those using multiprecision arithmetic, while being fully portable. It requires a table of reasonable size and IEEE-754 double-precision multiplications and additions. In a preliminary implementation, the overhead due to correct rounding is a 6-times slowdown compared to the standard library function.
A carry-skip adder is faster than a ripple-carry adder and has a simple structure. To maximize speed, it is necessary to optimize the widths of the blocks that make up the carry-skip adder. This paper presents a simple algorithm to select the size of each block. Assuming that each logic gate has a unit delay, the algorithm achieves slightly faster designs for 64- and 128-bit adders than previous methods developed by Guyot et al. and Kantabutra.
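The unit-delay assumption makes the tradeoff easy to state for the simpler fixed-block-size case: carries ripple through the first and last blocks but skip across the middle ones. A toy model of that tradeoff follows, illustrative only; the paper's algorithm sizes each block individually, which is what buys the extra speed (Python):

    def skip_adder_delay(n, b):
        """Unit-gate-delay model of an n-bit carry-skip adder with fixed
        block width b: ripple through the first block, one skip per
        middle block, ripple through the last block."""
        blocks = -(-n // b)                  # ceil(n / b)
        return b + (blocks - 2) + b

    for n in (64, 128):
        b = min(range(1, n + 1), key=lambda b: skip_adder_delay(n, b))
        print(n, b, skip_adder_delay(n, b))  # optimum near sqrt(n/2)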
Electron-optical image converters (EOICs) are known to be useful in recording and investigating high-speed processes, nuclear physics experiments, automatic environmental control, medicine, etc. In this paper, cathode-ray tubes with a cathodoluminescent screen having a sufficiently high level of temporal coherence in its radiation (particularly those based on rare-earth phosphors) are proposed as devices for dynamic data input into a holographic correlator, for TV signal recognition in real time. This approach combines the functions of the radiation source and the spatial light modulator in one compact device.
FPGA components are widely used today to perform various algorithms (e.g. digital filtering) in real time. The emergence of Dynamically Reconfigurable (DR) FPGAs has made it possible to reduce the resources necessary to carry out an image processing application (a chain of tasks). We present in this article an image processing application (image rotation) that exploits the FPGA's dynamic reconfiguration feature. A comparison is undertaken between dynamic and static reconfiguration using two criteria: cost and performance. To test the validity of our approach in terms of algorithm-architecture adequacy, we realized ARDOISE, an AT40K40-based board.
Spatial filtering is a promising method of Radio Frequency Interference (RFI) mitigation in radio interferometric systems, but there are no quantitative results concerning the performance of radio astronomy arrays using this technique. In this paper, performance bounds are calculated for sparse arrays typical in radio astronomy (VLA, WSRT, GMRT). Two parameters are proposed to characterize the benefits of spatial filtering: gain and loss. Calculations show that the adaptive nulling method for sparse arrays generates a complex spatial structure of gain and loss as functions of the angular distance between the radio source of interest and the RFI source. The geometry of the array strongly influences the performance of the RFI mitigation; this factor should be taken into consideration when designing new radio astronomy arrays.
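For a single interferer with estimated spatial signature v, the spatial filtering analyzed above amounts to an orthogonal projection applied to the array covariance; the loss parameter is then the residual response toward the source of interest after projection. A minimal sketch, with toy element positions standing in for a real sparse array (Python/NumPy):

    import numpy as np

    def rfi_project(R, v):
        """Project the covariance R onto the complement of the RFI
        signature v (a one-dimensional spatial null)."""
        v = v / np.linalg.norm(v)
        P = np.eye(len(v)) - np.outer(v, v.conj())
        return P @ R @ P.conj().T

    def steering(pos_wavelengths, theta):
        return np.exp(2j * np.pi * pos_wavelengths * np.sin(theta))

    pos = np.array([0.0, 1.0, 3.5, 7.0, 12.5])     # toy sparse layout
    a_src = steering(pos, np.deg2rad(10.0))        # source of interest
    a_rfi = steering(pos, np.deg2rad(25.0))        # interferer
    R = (np.outer(a_src, a_src.conj())             # source
         + 100 * np.outer(a_rfi, a_rfi.conj())     # strong RFI
         + np.eye(len(pos)))                       # noise
    Rc = rfi_project(R, a_rfi)
    loss = (a_src.conj() @ Rc @ a_src).real / (a_src.conj() @ R @ a_src).real
    print(loss)   # varies with source/RFI separation and array geometry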
This paper describes an initial attempt to calibrate a large, random, sparse, high-frequency, 2-dimensional array using the transmissions from radio stations. A semi-quantitative discussion is presented of various intuitive ideas for calibration, along with samples of typical results from numerical testing using synthetic and real data. First, a theoretical discussion of the effect of calibration errors is provided, in which a distinction is made between mild and severe calibration errors. Then a variety of techniques are suggested for both cases. For mild calibration errors, in which true peaks are still apparent in the spectrum, a simple approach is presented where the information in the true peaks is used to approximate the field at the array, from which the correct calibration can be deduced. This technique will converge to the correct solution with sufficient independent data sets. For severe calibration errors, in which the spectrum contains only speckle, several techniques are proposed to obtain a crude calibration of the array. One technique fits a plane wave to the uncalibrated receiver voltages. Another technique forces or assumes a plane wave at the array and then deduces the error by comparing different data sets. The third technique uses a Monte Carlo approach to generate the calibration weights, and a discussion of the correct interpretation of the results is provided. If this crude initial calibration can reduce the calibration errors to the mild case, then the calibration can continue in a two-step procedure using the techniques for the mild case.
The inverse-free Berlekamp-Massey (BM) algorithm is the simplest technique for correcting errors in Reed-Solomon (RS) codes. In the decoding process, the BM algorithm finds the error locator polynomial with the syndromes as input. The inverse-free BM algorithm has since been generalized to find the error locator polynomial given the erasure locator polynomial; by this means, the modified algorithm can be used to correct both errors and erasures in RS codes. The improvement is achieved by replacing the input of the Berlekamp-Massey algorithm with the Forney syndromes instead of the syndromes. With this improved technique, the complexity of time-domain RS decoders for correcting both errors and erasures is reduced substantially compared to previous approaches. In this paper, the register-transfer-language description of this modified BM algorithm is derived and the VLSI architecture is presented.
Based on a frame-theoretic formulation of irregular sampling, conditions on the sampling points are derived. Our formulation and sampling conditions are applicable to all subspaces (e.g. shift-invariant, Gabor, bandlimited, and bandpass subspaces). Particular attention is paid to the implementation of the algorithm in general. The method starts by constructing a pair of frames for a selected subspace, from which a sequence of sampling functions is fabricated for a set of given irregular sampling points. The (irregular) sampling reconstruction is implemented through a frame-based iterative algorithm, which is guaranteed to converge. A MATLAB package with a graphical user interface is provided for users to view demos as well as to try their own sampling reconstruction problems. Users may also construct their own subspaces; parameters, sampling points, and signals may all be entered by users. The program automatically checks the fulfillment of the sampling conditions and provides a reconstruction of the signal. We believe this is a useful tool for the study and practice of irregular sampling.
The BiCG and QMR methods are well-known Krylov subspace iterative methods for the solution of linear systems of equations with a large nonsymmetric, nonsingular matrix. However, little is known of the performance of these methods when they are applied to the computation of approximate solutions of linear systems of equations with a matrix of ill-determined rank. Such linear systems are known as linear discrete ill-posed problems. We describe an application of the BiCG and QMR methods to the solution of linear discrete ill-posed problems that arise in image restoration, and compare these methods to the conjugate gradient method applied to the associated normal equations and to total variation-penalized Tikhonov regularization.
In this paper we address the problem of reconstructing the shape of a convex object from measurements of the area of its shadows in several directions. This type of very weak measurement is sometimes referred to as the brightness function of the object, and may be observed in an imaging scenario by recording the total number of pixels where the object's image appears. These measurements, collected as a function of viewing angle, are also referred to as lightcurves in the astrophysics community, where they are employed in estimating the shape of atmosphereless rotating bodies (e.g. asteroids). We address the problem of shape reconstruction from brightness functions by constructing a least-squares optimization framework for approximating the underlying shapes with polygons in 2-D, or polyhedra in 3-D, from noisy and possibly sparse measurements of the brightness values.
In this paper, we present a novel approach to the problem of exponential decomposition. This method can compute the knots in O(n²) floating-point operations and O(n) storage, where n is the length of the signal sequence.
We present an O(n² log n) algorithm for finding all the singular values of an n-by-n complex Hankel matrix.
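The stated complexity rests on the fact that a Hankel matrix-vector product is a convolution, hence O(n log n) by FFT, and that an iterative eigensolver (e.g. Lanczos bidiagonalization) needs O(n) such products. A sketch of the fast matvec building block (Python/NumPy):

    import numpy as np

    def hankel_matvec(h, v):
        """y = H v for the n-by-n Hankel matrix H[i, j] = h[i + j],
        with h of length 2n - 1, computed in O(n log n) as a slice of
        the linear convolution of h with the reversed v."""
        n = len(v)
        m = len(h) + n - 1                     # full convolution length
        y = np.fft.ifft(np.fft.fft(h, m) * np.fft.fft(v[::-1], m))
        return y[n - 1:2*n - 1]

    # Verification against the dense product:
    n = 64
    h = np.random.randn(2*n - 1) + 1j * np.random.randn(2*n - 1)
    v = np.random.randn(n) + 1j * np.random.randn(n)
    H = np.array([[h[i + j] for j in range(n)] for i in range(n)])
    assert np.allclose(hankel_matvec(h, v), H @ v)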
We present an efficient array processor for downdating the ULV decomposition (ULVD). The array processor exploits the block structure of the decomposition to determine the effective rank at every modification step, and maintains the rank-revealing nature of the decomposition throughout by tracking exact Frobenius norms of all three blocks of the lower-triangular factor. Thus, the deflation step otherwise necessary to compute the numerical rank can often be avoided. This feature of the algorithm is particularly attractive for an array processor implementation, because deflation steps usually require some condition estimation, and most good condition estimators do not pipeline well.
The problem of deleting a row from the QR factorization X = UR by Gram-Schmidt techniques is intimately connected to solving the least squares problem (formula available in paper) by classical iterative methods. Past approaches to this problem have focused upon accurate computation of the residual (formula available in paper), thereby finding a vector (formula available in paper) that is orthogonal to U. This work shows that it is also important to accurately compute the vector f and that it must be used in the downdating process to maintain good backward error in the new factorization. New algorithms are proposed based upon this observation.
We present a new family of algorithms for the accurate floating-point computation of the singular value decomposition (SVD) of various forms of products (quotients) of two or three matrices. The main goal of such an algorithm is to compute all singular values to high relative accuracy; this means we seek a guaranteed number of accurate digits even in the smallest singular values, while also achieving computational efficiency. To illustrate, consider the SVD of the product A = B^T S C. The new algorithm uses certain preconditioning (based on diagonal scalings and the LU and QR factorizations) to replace A with A' = (B')^T S' C', where A and A' have the same singular values and the matrix A' is computed explicitly. Theoretical analysis and numerical evidence show that, in the case of full-rank B, C, S, the accuracy of the new algorithm is unaffected by replacing B, S, C with, respectively, D_1 B, D_2 S D_3, D_4 C, where D_i, i = 1, ..., 4, are arbitrary diagonal matrices. As an application, the paper proposes new accurate algorithms for computing the (H,K)-SVD and (H_1,K)-SVD of S.
Adaptive array systems require the periodic solution of the well-known equation w = R^-1 v in order to compute optimum adaptive array weights. The covariance matrix R is estimated by forming a product of noise sample matrices X: R = X^H X. The operation-count cost of performing the required matrix inversion in real time can be prohibitively high for a high-bandwidth system with a large number of sensors, so specialized hardware may be required to execute the requisite computations in real time. The choice of algorithm must be considered in conjunction with the hardware technology used to implement the computation engine. A systolic-architecture implementation of the Givens rotation method for matrix inversion was selected to perform the adaptive weight computation. The bit-level systolic approach enables a simple ASIC design and a very low-power implementation; it must be implemented with fixed-point arithmetic to simplify the propagation of data through the computation cells. The Givens rotation approach is highly parallel and ideally suited to a systolic implementation. Additionally, the adaptive weights are computed directly from the sample matrix X in the voltage domain, reducing the dynamic range needed in the computations. An analysis was performed to determine the fixed-point precision required to compute the weights for an adaptive array system operating in the presence of interference. The analysis showed that a floating-point computation can be well approximated with a 13-bit to 19-bit word-length fixed-point computation for typical system jammer-to-noise levels, yielding an order-of-magnitude reduction in required hardware complexity. A synthesis-based ASIC design process was used to generate preliminary layouts, from which the area and throughput of the VLSI QR-decomposition architecture were estimated. The results show that this QR-decomposition process, when implemented as a full-custom design, provides a computation time two orders of magnitude faster than a state-of-the-art microprocessor.
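In outline, the voltage-domain data path triangularizes the sample matrix X with Givens rotations, X = Q R_x, so that R = X^H X = R_x^H R_x and the weights follow from two triangular solves with no explicit inversion. A floating-point sketch of that computation (Python/NumPy); the paper's contribution is the fixed-point, bit-level systolic realization of the same arithmetic:

    import numpy as np

    def givens_qr_R(X):
        """Upper-triangular factor of X via complex Givens rotations,
        the cell-level operation of the systolic array."""
        R = X.astype(complex).copy()
        m, n = R.shape
        for j in range(n):
            for i in range(m - 1, j, -1):        # zero column j bottom-up
                a, b = R[i-1, j], R[i, j]
                if b == 0:
                    continue
                r = np.hypot(abs(a), abs(b))
                if a == 0:
                    G = np.array([[0, b.conj() / abs(b)],
                                  [-abs(b) / b.conj(), 0]])
                else:
                    c = abs(a) / r
                    s = (a / abs(a)) * b.conj() / r
                    G = np.array([[c, s], [-s.conj(), c]])
                R[i-1:i+1, j:] = G @ R[i-1:i+1, j:]
        return np.triu(R[:n])

    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 4)) + 1j * rng.standard_normal((100, 4))
    v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    Rx = givens_qr_R(X)
    w = np.linalg.solve(Rx, np.linalg.solve(Rx.conj().T, v))
    assert np.allclose(w, np.linalg.solve(X.conj().T @ X, v))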
Radar imaging traditionally requires extremely large computational resources to coherently process data into an image, owing to the large number of data samples in high-bandwidth radars. We propose a methodology that achieves high range and cross-range resolution by pre-detection followed by phase integration in the wavelet domain, significantly reducing the bandwidth necessary and the computational load of the coherent phase processing. We then compare this process to FFT-based phase integration in terms of computational complexity and show simulated results.
Satellites orbit the Earth and obtain continuous imagery of the ground below along their orbital path. The quality of satellite images propagating through the atmosphere is affected by phenomena such as scattering and absorption of light, and turbulence, which degrade the image by blurring it and reducing its contrast. The atmospheric Wiener filter, which corrects for turbulence blur, aerosol blur, and path radiance simultaneously, has been implemented in the digital restoration of Landsat TM (Thematic Mapper) imagery, and restoration results using it were presented in the past. Here, a new approach for the digital restoration of Landsat TM imagery is presented, implementing a Kalman filter as an atmospheric filter that likewise corrects for turbulence blur, aerosol blur, and path radiance simultaneously. The turbulence MTF is calculated from meteorological data, or estimated if no meteorological data were measured; the aerosol MTF is consistent with optical depth. The product of the two yields the atmospheric MTF, which is implemented in both the atmospheric Wiener and Kalman filters. Restoration improves both the smallness of resolvable detail and contrast, and the restorations are quite apparent even under clear weather conditions. Restoration results of the atmospheric Kalman filter are presented alongside those of the atmospheric Wiener filter. The best restoration result, and how good the restored image is, are determined both by visual comparison and by several mathematical criteria. In general the Kalman restoration is superior, and inclusion of turbulence blur also leads to slightly improved restoration.
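Of the two filters compared, the Wiener form is compact enough to state: the atmospheric MTF is the product of the turbulence and aerosol MTFs, and restoration divides the image spectrum by it with noise regularization. A minimal sketch with placeholder MTF models and SNR; the Kalman variant is not shown (Python/NumPy):

    import numpy as np

    def atmospheric_wiener(img, mtf_turb, mtf_aero, snr=100.0):
        """Wiener restoration with atmospheric MTF = turbulence x aerosol:
        F_hat = F * conj(MTF) / (|MTF|^2 + 1/SNR)."""
        mtf = mtf_turb * mtf_aero
        W = np.conj(mtf) / (np.abs(mtf) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(np.fft.fft2(img) * W))

    # Toy Gaussian/exponential MTFs on the FFT frequency grid:
    n = 256
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    f2 = fx**2 + fy**2
    restored = atmospheric_wiener(np.random.rand(n, n),
                                  np.exp(-f2 / 0.02),          # turbulence
                                  np.exp(-np.sqrt(f2) / 0.2))  # aerosol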
Most digital images are undersampled, and SPOT digital satellite imagery is no exception: in effect, the instrument outresolves its sampling grid. The bad side effect is that aliasing artifacts are introduced into the image; the good side is that the image can be improved if the sampling density is increased. In this paper we use images from the two HRVIR instruments onboard the SPOT1-4 satellites to double the sampling density and the resolution of the image.
An efficient and robust approach to adaptive prediction is presented which uses a local causal area to evaluate a number of individual fixed sub-predictors. Various schemes are utilized to exploit the resulting information, including a rank-order based approach, a two stage adaptive selection technique utilizing median filtering, a technique for adaptive combination and a technique incorporating adaptive selection followed by adaptive combination. The respective selection and combination schemes display superior results for particular image types. When the proposed predictors are coupled with prediction error feedback and adaptive arithmetic coding, they produce results superior to CALIC. To produce a more robust predictor an additional stage based on one of the proposed selection schemes is proposed. The result is an adaptive prediction scheme which effectively utilizes the principles of predictor combination and selection. Different forms of the latter stage predictor are explored, which are shown to improve overall predictor performance. A selection based approach is also demonstrated to be more robust than combination based schemes.
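The selection principle, stripped to its core: run several fixed sub-predictors, score each on the already-decoded causal neighbours, and predict with the current winner; since the decoder can repeat the ranking, no side information is needed. A minimal sketch with an illustrative sub-predictor set and causal window, not the paper's (Python/NumPy):

    import numpy as np

    def adaptive_predict(img):
        """Per-pixel selection among fixed sub-predictors by absolute
        error over the causal neighbours W, N, NW."""
        h, w = img.shape
        pred = np.zeros((h, w))
        subs = [lambda i, j: float(img[i, j-1]),                  # W
                lambda i, j: float(img[i-1, j]),                  # N
                lambda i, j: (float(img[i, j-1]) + float(img[i-1, j])) / 2.0,
                lambda i, j: (float(img[i, j-1]) + float(img[i-1, j])
                              - float(img[i-1, j-1]))]            # plane fit
        for i in range(1, h):
            for j in range(1, w):
                ctx = [(i, j-1), (i-1, j), (i-1, j-1)]
                scores = [sum(abs(float(img[p]) - s(*p))
                              for p in ctx if p[0] > 0 and p[1] > 0)
                          for s in subs]
                pred[i, j] = subs[int(np.argmin(scores))](i, j)
        return pred   # img - pred is what the arithmetic coder sees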
Image registration is a very common and important problem in several fields, such as medical imaging, computer vision, and simulation. The aim of this contribution is to present a new partial differential equation (PDE) model for the registration of two-dimensional (2D) and three-dimensional (3D), possibly noisy, images. Estimating the registration between two image data sets is formulated here as a motion estimation and evolution problem. We also briefly review the PDE approaches from which the proposed model originated; the model is based on ideas introduced for the processing of space-time image sequences. The proposed algorithm can deal with small and large deformations, works in the presence of noise, and is very fast. Computational results on a variety of images, including synthetic and medical images, are presented.
This paper describes a real-time vision system that automatically detects the presence of faces, localizes them, and tracks them in video sequences; it also verifies face identities. These processes are based on a combination of image processing techniques and neural network methods. Tracking is realized with a prediction-verification strategy using the dynamic information from the detection stage. The system has been evaluated quantitatively on 8 video sequences, and the robustness of the method has been tested on images under various lighting conditions. We also present a complexity analysis of the algorithm with a view to a real-time implementation on an FPGA-based architecture.
A minimum-variance space-time (MVST) receiver with an antenna array has recently been proposed as a relatively simple method for designing blind multiuser receivers for code division multiple access (CDMA) systems. For time-varying CDMA systems, the response vector used to construct the MV receiver must be updated for each symbol interval. In this paper, we propose a new MVST receiver whose response vector is updated using the rank-revealing URV decomposition rather than the eigendecomposition or singular value decomposition (SVD) used in previous MVST receivers. We demonstrate its performance through simulation while varying the number of multipaths and the Rayleigh fading parameters. The URV-decomposition-based MVST receiver is found to be effective for time-varying CDMA signals and achieves considerable computational savings.
We present a new robust beamforming algorithm that can be used in an on-air, on-frequency repeater. The major problem of a repeater is self-interference from the repeated signal. With an antenna array, the LCMV (linearly constrained minimum variance) beamformer would be ideal if the array response vector were perfectly known. However, the LCMV is very sensitive to array imperfections, and with an uncalibrated array it may suppress the desired signal as well. To alleviate this problem, many robust beamforming algorithms have been proposed. In this paper we propose another robust beamforming algorithm, based on a cost-function minimization technique. Our algorithm shows good performance and requires no information except the direction of arrival of the incoming signal to be repeated.
Coherent adaptation algorithms are proposed for a beamformer in the WCDMA reverse link. The coherent adaptation algorithm improves convergence speed by constraining the desired-signal component of the filtered output to remain coherent in phase with the reference signal at each iteration. With this coherence constraint, performance is significantly improved. We present simulation results that show the desirable characteristics of the coherent methods, such as fast convergence and insensitivity to fading parameters, compared with conventional methods.
We propose a new robust adaptive beamforming algorithm with the structure of the GSC (generalized sidelobe canceller). As with the LCMV (linearly constrained minimum variance) beamformer, the performance of the GSC degrades greatly in the presence of array errors: when the array steering vector is not known exactly, the blocking matrix of the GSC cannot remove the desired signal completely. To overcome this difficulty, we propose to project the adaptive weight vector onto the interference subspace, so that the desired-signal residual present at the output of the blocking matrix is likewise confined to the interference subspace. As a result, the desired signal is not cancelled by the GSC.
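For reference, the standard GSC decomposition that the proposal modifies: a quiescent weight satisfying the constraints, a blocking matrix spanning their null space, and an unconstrained adaptive branch. A minimal sketch (Python/NumPy); the interference-subspace projection of the proposal is indicated only by a comment:

    import numpy as np

    def gsc_weights(C, g, R):
        """Generalized sidelobe canceller: w = w_q - B w_a, with the
        constraint C^H w = g satisfied by construction."""
        w_q = C @ np.linalg.solve(C.conj().T @ C, g)   # quiescent weight
        _, s, Vh = np.linalg.svd(C.conj().T)           # null space of C^H
        B = Vh[len(s):].conj().T                       # blocking matrix
        w_a = np.linalg.solve(B.conj().T @ R @ B,      # adaptive branch
                              B.conj().T @ R @ w_q)
        # (The robust variant above additionally projects the adaptive
        #  weight onto the estimated interference subspace.)
        return w_q - B @ w_a

    N = 8
    a0 = np.ones(N, dtype=complex)                 # broadside steering
    jam = np.exp(1j * 1.7 * np.arange(N))          # jammer signature
    R = np.eye(N) + 10 * np.outer(jam, jam.conj())
    w = gsc_weights(a0[:, None], np.array([1.0 + 0j]), R)
    print(abs(a0.conj() @ w))                      # = 1, constraint held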
This paper presents a novel concept for a very low bit rate video codec based on a new hierarchical adaptive structured mesh topology. The proposed video codec can be used in wireless video applications. It uses mesh structures to model the dynamics of the video object, and the proposed adaptive structure splitting significantly reduces the number of bits used for mesh description; it also reduces the latency of the motion estimation and compensation operations. A comprehensive performance study compares the proposed mesh-based motion tracking with commonly used techniques and shows the superiority of the proposed concept compared to current MPEG techniques.
Music is a sum of several instrumental sounds whose individual fundamental frequencies follow the musical score. Conversely, a musical sound contains information about the score, such as the instruments played and their fundamental frequencies. Automatic identification of the score from the musical sound is called automatic transcription. There are many items to be estimated: the type of instruments, the fundamental frequencies, and the notes. Among these, the fundamental frequency estimation (FFE) problem has been studied extensively for more than thirty years, and many algorithms exist for both monophonic and polyphonic sound. In this paper we propose a new estimation method for musical sound using a subspace approach; our algorithm can handle polyphonic and poly-instrumental sounds. The approach is based on the autocorrelation of sounds and on an orthogonality property. First, we gather subspaces of various instruments at different fundamental frequencies; we call this collection the sound manifold. Next, we compare the sound manifold with the subspace of the measured musical sound: we use the noise subspace of the measured sound and apply a MUSIC-like algorithm that exploits the orthogonality between the signal and noise subspaces. We test our algorithm with MIDI signals and show good identification capability.
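The orthogonality test at the heart of the method is the MUSIC construction: project candidate atoms onto the noise subspace of the measured autocorrelation matrix and look for minima. A minimal single-pitch sketch with sinusoidal atoms standing in for the instrument manifold (Python/NumPy; all parameters illustrative):

    import numpy as np

    def music_pseudospectrum(x, freqs, fs, m=64, n_sig=4):
        """1 / ||E_n^H a(f)||^2, with E_n the noise subspace of the
        sample autocorrelation matrix of x."""
        snaps = np.stack([x[i:i+m] for i in range(len(x) - m)])
        R = snaps.conj().T @ snaps / len(snaps)
        _, vecs = np.linalg.eigh(R)              # ascending eigenvalues
        En = vecs[:, :m - n_sig]                 # noise subspace
        t = np.arange(m) / fs
        return np.array([1.0 / np.linalg.norm(En.conj().T
                         @ np.exp(2j*np.pi*f*t)) ** 2 for f in freqs])

    fs = 8000
    t = np.arange(2048) / fs
    x = (np.sin(2*np.pi*440*t) + 0.7*np.sin(2*np.pi*880*t)
         + 0.1*np.random.randn(len(t)))
    freqs = np.linspace(100, 1200, 500)
    P = music_pseudospectrum(x, freqs, fs)
    print(freqs[P.argmax()])                     # peaks near 440 and 880 Hz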
Compressed video bitstreams require protection from channel errors in a wireless channel. The three-dimensional (3-D) SPIHT coder has proved its efficiency and real-time capability in video compression. A forward-error-correcting (FEC) channel code (RCPC) combined with a single ARQ (automatic repeat request) proved to be an effective means of protecting the bitstream. In this paper, the need for ARQ is eliminated by making the 3-D SPIHT bitstream more robust and resistant to channel errors. Packetization of the bitstream, and the reorganization of these packets to achieve scalability in bit rate and/or resolution in addition to robustness, is demonstrated and combined with channel coding not only to protect the integrity of the packets, but also to allow detection of packet decoding failures, so that only cleanly recovered packets are reconstructed. In extensive comparative tests, the reconstructed video is shown to be superior to that of MPEG-2, with the margin of superiority growing substantially as the channel becomes noisier.
This paper proposes a new methodology for the joint optimization of source coding, channel coding, and error concealment for wireless video transmission. Specifically, a generalized distortion-rate function (GDRF) is established that relates the source and channel coding rates to the end-to-end distortion, which consists of quantization distortion and channel-loss distortion as compensated by error concealment. Moreover, a classification algorithm with a modified equal mean-normalized standard deviation is proposed to employ the GDRF for optimal source-channel bit allocation. Simulation results show that the proposed methodology outperforms conventional joint source-channel coding (JSCC) schemes, with computational complexity quite acceptable for real-time video applications.
In this paper, an adaptive genetic block-matching algorithm is proposed for video compression. Specifically, several novel adaptive elements, including initialization, parent selection, and a termination rule, are devised. Extensive simulations with different test video sequences confirm that the proposed algorithm improves the precision of the block-matching search while keeping computational complexity low.
The Gaussian-distributed OFDM signal, with its large amplitude fluctuations, suffers nonlinear distortion and out-of-band radiation owing to the strict linearity requirements on the power amplifier. In 1997, Muller and Huber proposed the partial transmit sequences algorithm, based on least PAPR (peak-to-average power ratio), to reduce these fluctuations. However, when the sequences are divided into more parts, the computation increases exponentially; furthermore, the algorithm focuses on compressing the highest peak, which does not necessarily improve overall system performance. This paper proposes a novel partial transmit sequences algorithm based on least clipping noise: in selecting the optimal parameters of the partial transmit sequences that modify the phases of the input symbols, the total clipping noise (nonlinear distortion) is considered instead of the highest peak. The algorithm is studied extensively with a well-known power amplifier model. It is shown that system performance with 2 partitions based on least clipping noise is close to that with 3 partitions based on least PAPR, and that performance with 3 partitions based on least clipping noise is even better than that with 4 partitions based on least PAPR. The required computation is thereby effectively reduced.
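For concreteness, the conventional PTS machinery that both criteria share: the subcarriers are split into disjoint blocks, each block gets its own IFFT, and per-block phase factors are searched to optimize the combined time-domain signal. A minimal sketch supporting either criterion (Python/NumPy; the block count, interleaved partitioning, and phase alphabet are illustrative):

    import numpy as np
    from itertools import product

    def pts(symbols, n_blocks=4, phases=(1, -1, 1j, -1j), clip_level=None):
        """Partial transmit sequences with exhaustive phase search.
        Criterion: peak power (least PAPR) by default, or total clipped
        energy (least clipping noise) when clip_level is given."""
        N = len(symbols)
        parts = np.zeros((n_blocks, N), dtype=complex)
        for m in range(n_blocks):               # interleaved partition
            parts[m, m::n_blocks] = symbols[m::n_blocks]
        parts_t = np.fft.ifft(parts, axis=1)    # one IFFT per block
        best, best_cost = None, np.inf
        for b in product(phases, repeat=n_blocks - 1):   # first phase = 1
            s = parts_t[0] + sum(p * q for p, q in zip(b, parts_t[1:]))
            if clip_level is None:
                cost = np.max(np.abs(s) ** 2)            # least PAPR
            else:
                over = np.maximum(np.abs(s) - clip_level, 0.0)
                cost = np.sum(over ** 2)                 # least clipping noise
            if cost < best_cost:
                best, best_cost = s, cost
        return best, best_cost

    rng = np.random.default_rng(1)
    sym = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=256) / np.sqrt(2)
    s_papr, _ = pts(sym)                        # conventional criterion
    s_clip, _ = pts(sym, clip_level=0.1)        # criterion proposed above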
OFDM multicarrier systems support high-data-rate wireless transmission over orthogonal frequency channels, require no extensive equalization, and offer excellent immunity against fading and intersymbol interference. The major drawback of these systems is the large peak-to-average power ratio (PAR) of the transmit signal, which renders a straightforward implementation very costly and inefficient. Existing approaches that attack this PAR problem are abundant, but to date no systematic framework for, or comparison among, them exists; they sometimes even differ in the problem definition itself and consequently in the basic approach taken. In this work, we provide a systematic approach that resolves this ambiguity and spans the existing PAR solutions. The basis of our framework is the observation that efficient system implementations require a reduced signal dynamic range. This range reduction can be modeled as hard limiting, also referred to as clipping, where the extra distortion has to be considered as part of the total noise tradeoff. We illustrate that the different PAR solutions manipulate this tradeoff in alternative ways to improve performance. Furthermore, we discuss and compare a broad range of such techniques and organize them into three classes: block coding, clip-effect transformation, and probabilistic.
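The following Python sketch illustrates the clipping view of the framework: it measures the PAR of a random OFDM symbol, applies hard limiting at a chosen clip ratio, and reports the resulting signal-to-distortion ratio. The subcarrier count and clip ratio are illustrative choices.

```python
import numpy as np

def papr_db(x):
    # Peak-to-average power ratio of a complex baseband signal, in dB.
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def hard_clip(x, clip_ratio_db):
    # Hard-limit the envelope at clip_ratio_db above the RMS level,
    # preserving the phase of each sample.
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    a = rms * 10 ** (clip_ratio_db / 20)
    mag = np.abs(x)
    return np.where(mag > a, a * x / mag, x)

rng = np.random.default_rng(0)
n = 256
qpsk = (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(n)            # unit-power OFDM symbol

y = hard_clip(x, clip_ratio_db=4.0)
d = x - y                                     # clipping distortion
sdr = 10 * np.log10(np.mean(np.abs(x) ** 2)
                    / (np.mean(np.abs(d) ** 2) + 1e-12))
print(f"PAR before clipping: {papr_db(x):.1f} dB")
print(f"signal-to-distortion after clipping: {sdr:.1f} dB")
```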
Space-time codes from orthogonal designs have two advantages, namely, fast maximum-likelihood (ML) decoding and full diversity. Rate-1 real (PAM) space-time codes (real orthogonal designs) for any number of transmit antennas have been constructed from the real Hurwitz-Radon families, which also yield rate-1/2 complex (QAM) space-time codes (complex orthogonal designs) for any number of transmit antennas. Rate-3/4 complex orthogonal designs (space-time codes) for 3 and 4 transmit antennas exist in the literature, but no complex orthogonal designs of rate higher than 1/2 were known for other numbers of transmit antennas. In this correspondence, we present rate-7/11 and rate-3/5 generalized complex orthogonal designs for 5 and 6 transmit antennas, respectively.
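For context (not a result of this correspondence), the simplest complex orthogonal design is the rate-1 Alamouti code for two transmit antennas, whose column orthogonality is what enables symbol-by-symbol ML decoding with full diversity:

\[
G_2 = \begin{pmatrix} x_1 & x_2 \\ -x_2^{*} & x_1^{*} \end{pmatrix},
\qquad
G_2^{H} G_2 = \bigl(|x_1|^2 + |x_2|^2\bigr) I_2 .
\]

Rows index time slots and columns index antennas; the designs for 5 and 6 antennas preserve an analogous orthogonality at lower rates.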
This paper reports on field measurements of a highly versatile frequency-hopped (FH) frequency-shift-keyed (FSK) prototype operating in the 900 MHz ISM band and providing 160 kbps indoor wireless data communication. The testbed is used to study the impact of frequency hopping rate, channel spacing, diversity combining, and packet communication on frequency-hopped wireless transmission. Results are presented for 5,850 field trials of the testbed, programmed with different hopping configurations in three typical indoor environments. The measurements underscore the dramatic potential of a system with fast hopping rates that employs equal-gain hop combining with proper interleaving and frequency separation between hopping channels. However, a fast hopping rate employed without diversity combining degrades the symbol error rate (SER) almost in proportion to the hopping rate.
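As a toy illustration (not the testbed's signal chain), the Python sketch below models noncoherent binary FSK on several independently Rayleigh-faded hops and makes each decision from the equal-gain sum of the per-hop tone envelopes; the hop count, SNR, and simplified real-valued envelope model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sym, n_hops, snr_db = 10_000, 4, 6.0
noise_std = 10 ** (-snr_db / 20)

bits = rng.integers(0, 2, n_sym)
# Independent Rayleigh fading on every hop supplies frequency diversity.
gain = rng.rayleigh(scale=1 / np.sqrt(2), size=(n_hops, n_sym))

# Simplified real-valued envelope at tone 0 and tone 1 on every hop.
e0 = np.where(bits == 0, gain, 0.0) \
     + noise_std * rng.standard_normal((n_hops, n_sym))
e1 = np.where(bits == 1, gain, 0.0) \
     + noise_std * rng.standard_normal((n_hops, n_sym))

# Equal-gain combining: sum envelopes across hops, then decide.
decisions = (np.abs(e1).sum(axis=0) > np.abs(e0).sum(axis=0)).astype(int)
print("SER with equal-gain combining over", n_hops, "hops:",
      np.mean(decisions != bits))
```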
Parallel interference cancellation (PIC) and successive interference cancellation (SIC) are highly effective multiuser detection (MUD) methods with relatively low computational complexity. However, SIC is often impractical for real-time applications, and PIC requires stringent power control. This paper proposes an adaptive linear hybrid interference cancellation (AHIC) scheme for CDMA systems. The algorithm combines the advantages of PIC and SIC through an adaptive configuration that combats the fading effects of mobile channels. Simulations compare AHIC with existing SIC and PIC schemes in terms of computational complexity, time delay, and average bit-error-rate (BER) performance in Rayleigh-fading channels. The results suggest that the proposed AHIC achieves low BER, short delay, and acceptable computational complexity.
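A minimal Python sketch of the hybrid idea under a simple synchronous signal model (the fixed grouping rule and the model are illustrative, not AHIC's adaptive configuration): users are sorted by received power and split into groups; groups are detected one after another (SIC between groups), while all users inside a group are detected in parallel from the same residual (PIC within a group).

```python
import numpy as np

def hybrid_ic(r, S, powers, group_size=2):
    # r: received chip vector; S: (chips x users) matrix of unit-norm
    # spreading codes; powers: estimated received power per user.
    order = np.argsort(powers)[::-1]           # strongest users first
    bits = np.zeros(S.shape[1])
    residual = r.copy()
    for g in range(0, len(order), group_size):
        group = order[g:g + group_size]
        # PIC inside the group: detect all members from the same residual.
        stats = S[:, group].T @ residual
        bits[group] = np.sign(stats)
        # SIC between groups: cancel the whole group before moving on.
        residual = residual - S[:, group] @ (bits[group]
                                             * np.sqrt(powers[group]))
    return bits
```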
In this paper we present a new adaptive blind equalization algorithm for multicarrier CDMA systems with single or multiple receive antennas. We analyze the cost function of the well-known subspace method and interpret it in terms of the noise projection matrix. This projection matrix is shown to be a special weighted spectral decomposition of the data autocorrelation matrix, which can be effectively approximated by inverting the data autocorrelation matrix. By adding a user-specific correction term to the common cost function, we show that all users' channel impulse responses can be estimated in parallel. In this way we first develop a low-complexity block algorithm and then derive a recursive algorithm using RLS-type matrix updating. Simulations show that the recursive algorithm converges quickly and is near-far resistant; the bit-error-rate performance also improves as receiver diversity increases.
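The following Python sketch shows the two building blocks in isolation: approximating the noise projection matrix from the inverse of the data autocorrelation matrix, and an RLS-type (Sherman-Morrison) rank-one update of that inverse. The dimensions, diagonal loading, and forgetting factor are illustrative assumptions.

```python
import numpy as np

def noise_projection_approx(X, sigma2):
    # X: (dim x snapshots). If R = Us Ls Us^H + sigma2 Un Un^H and the
    # signal eigenvalues dominate sigma2, then sigma2 * R^{-1} is
    # approximately the noise projection matrix Un Un^H.
    R = X @ X.conj().T / X.shape[1]
    R += 1e-6 * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
    return sigma2 * np.linalg.inv(R)

def rls_inverse_update(P, x, lam=0.99):
    # Sherman-Morrison update of P ~ R^{-1} when R <- lam*R + x x^H.
    Px = P @ x
    k = Px / (lam + x.conj() @ Px)
    return (P - np.outer(k, x.conj() @ P)) / lam
```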
Multicarrier modulation (MCM) with diversity is a promising technique for multimedia communications over fading multipath wireless channels. In this work, we first investigate the channel estimation problem for an MCM system with multiple transmit antennas. A model-based channel estimation approach is proposed to identify the multiple channels simultaneously. We then apply the model-based channel estimation in a joint source and channel matching (JSCM) scheme for robust progressive image transmission. It is shown that channel estimation affects not only image decoding at the receiver but also rate allocation at the transmitter; it is therefore important to consider channel estimation and the JSCM scheme jointly to improve the quality of image transmission.
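A minimal Python sketch of the model-based idea, assuming (illustratively) a short FIR channel: the frequency response is parameterized as the DFT of a few time-domain taps fitted to pilot subcarriers by least squares, so only n_taps parameters are estimated rather than one complex gain per subcarrier.

```python
import numpy as np

def dft_matrix(rows, n_taps, n_sub):
    # Partial DFT matrix mapping n_taps channel taps to the frequency
    # response on the given subcarrier indices.
    k = np.asarray(rows)[:, None]
    l = np.arange(n_taps)[None, :]
    return np.exp(-2j * np.pi * k * l / n_sub)

def estimate_channel(pilot_idx, rx_pilots, tx_pilots, n_sub, n_taps):
    F = dft_matrix(pilot_idx, n_taps, n_sub)
    y = rx_pilots / tx_pilots                  # raw per-pilot gains
    h, *_ = np.linalg.lstsq(F, y, rcond=None)  # least-squares tap fit
    # Interpolate the fitted model to every subcarrier.
    return dft_matrix(np.arange(n_sub), n_taps, n_sub) @ h
```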
This paper presents a new approach to blind reverberation cancellation by adaptively estimating the channels. The key idea is to exploit the connection between the noise projection matrix and the cost matrix in the well-known subspace approach. A special weighted spectral decomposition is suggested to approximate the noise projection matrix directly from the inverse of the data autocorrelation matrix. We first develop an off-line batch algorithm that requires no eigendecomposition; combined with RLS-type matrix updating, an on-line adaptive algorithm is then derived to track time-varying channels. Simulations show that our methods are robust for speech distorted by FIR reverberation.
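For a flavor of batch blind multichannel identification, the Python sketch below uses the closely related cross-relation formulation for two microphones (it is not the paper's weighted-spectral-decomposition method, and the channel order L is an assumed parameter). Since x1 = h1*s and x2 = h2*s, it holds that h1*x2 - h2*x1 = 0, so the stacked channel vector is the null vector of a data matrix.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

def convmat(x, L):
    # (len(x)-L+1) x L convolution matrix of the signal x.
    return toeplitz(x[L - 1:], x[L - 1::-1])

def blind_two_channel(x1, x2, L):
    X1, X2 = convmat(x1, L), convmat(x2, L)
    D = np.hstack([X2, -X1])            # D @ [h1; h2] = 0 (noise-free)
    _, V = eigh(D.T @ D)                # eigenvalues in ascending order
    h = V[:, 0]                         # null-space (least) eigenvector
    return h[:L], h[L:]                 # h1, h2 up to a common scale
```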
In this paper, we derive the maximum-likelihood (ML) location estimator for wideband sources in the near field of a passive array. The parameters of interest are expanded to include the source range in addition to the angles of the far-field case. The ML estimator is optimized in a single step, in contrast to many approaches that optimize relative time-delay estimation and source-location estimation separately. The ML method can estimate multiple source locations, a case that is rather difficult for time-delay methods. To avoid a multidimensional search over the ML metric, we propose an efficient alternating projection procedure based on a sequential iterative search over single-source parameters. In the single-source case, the ML estimator is shown to be equivalent to maximizing the sum of weighted cross-correlations between time-shifted sensor data. Furthermore, the ML formulation can be extended to include the distance from a source to a sensor with unknown location; this provides inputs to our online estimator of the unknown sensor location, which is based on a least-squares fit to observations from multiple sources. The proposed algorithm is shown to yield superior performance over other, suboptimal techniques and to be efficient with respect to the derived Cramér-Rao bound. The Cramér-Rao bound analyses also show that better source-location estimates are obtained for high-frequency signals than for low-frequency signals, and that, although a large range-estimation error results when the source signal is unknown, this unknown parameter has little impact on angle estimation.
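The single-source, known-geometry case admits a simple grid-search sketch of the ML metric: shift each sensor's data by the candidate location's propagation delays and score the energy of the aligned sum, which equals, up to per-sensor energy terms, the sum of pairwise cross-correlations. The sensor layout, search grid, and propagation speed below are illustrative assumptions.

```python
import numpy as np

def delays(loc, sensors, c=343.0):
    # Propagation delay from a candidate location to every sensor;
    # using true distances makes this valid in the near field.
    return np.linalg.norm(sensors - loc, axis=1) / c

def ml_metric(loc, sensors, data, fs, c=343.0):
    tau = delays(loc, sensors, c)
    shifts = np.round((tau - tau.min()) * fs).astype(int)
    n = data.shape[1] - shifts.max()
    aligned = np.stack([data[i, s:s + n] for i, s in enumerate(shifts)])
    # Energy of the aligned-and-summed signal as the correlation score.
    return np.sum(aligned.sum(axis=0) ** 2)

def grid_search(sensors, data, fs, grid_x, grid_y):
    best, best_val = None, -np.inf
    for x in grid_x:
        for y in grid_y:
            v = ml_metric(np.array([x, y]), sensors, data, fs)
            if v > best_val:
                best, best_val = np.array([x, y]), v
    return best
```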
We investigate the motion of a single particle in transition from one equilibrium state to another via time-frequency analysis. Between quasi-stationary regimes a sudden change of state occurs, and we show that the Cohen-Lee local variance tracks this highly nonstationary, sudden transient motion well. In the quasi-stationary regime, instantaneous equilibria yield simple harmonic motion when the amplitude of oscillation is sufficiently small; nonlinear effects induce harmonic generation for larger-amplitude oscillations.
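One common form of the Cohen-Lee local (conditional) variance takes the frequency spread at time t of an analytic signal A(t)e^{j phi(t)} to be (A'(t)/A(t))^2. The Python sketch below applies this diagnostic to an illustrative stand-in trajectory with a sudden transient; the test signal is an assumption, not the paper's data.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Quasi-stationary oscillation with a sudden transient near t = 1 s.
x = np.sin(2 * np.pi * 20 * t) + np.exp(-((t - 1.0) / 0.01) ** 2)

z = hilbert(x)                     # analytic signal A(t)exp(j*phi(t))
A = np.abs(z)                      # amplitude envelope
dA = np.gradient(A, 1 / fs)        # numerical derivative A'(t)
local_var = (dA / np.maximum(A, 1e-9)) ** 2

print("local variance peaks near t =", t[np.argmax(local_var)], "s")
```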