Many good concepts have been developed that collectively enable the high levels of performance achieved by current computers. In this paper, seven key concepts are identified.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents the methods we used to achieve an exhaustive comparison of specific arithmetic operators and the results of this comparative study. The operators of interest are modular adders for use in Residue Number System (RNS) processors. RNS arithmetic provides an alternative way to perform highly efficient multiplication and addition, and is therefore of great interest for signal processing. Because modular adders are at the root of any RNS processor, attention must be paid to their design. We present three existing designs for such adders and, through the construction and use of generators that produce 0.35 μm standard-cell architectures, we synthesized all three designs for all odd moduli from 4 to 15 bits and measured their performance. Performance was measured after placement and routing of the operators, providing precise results. The exhaustive data obtained let us compare the three designs by size, speed, or any combination of these two factors. Ultimately, this study offers guidance on choosing a specific modular adder for a given modulus and on choosing the best candidates for a well-balanced residue base (i.e., a good set of moduli). Furthermore, it shows that the described parallel modular adder is generally the best choice.
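As a sketch of how such an operator behaves, the following Python model (an assumption about the general structure, not the paper's circuits) computes a + b mod m the way a parallel modular adder typically does: a + b and a + b − m are formed concurrently, and the result in [0, m) is selected.

```python
def mod_add(a, b, m):
    """Modular addition in the style of a parallel modular adder:
    compute a + b and a + b - m concurrently, then select the one in [0, m)."""
    s0 = a + b          # first adder
    s1 = s0 - m         # second adder; its sign tells which result to keep
    return s1 if s1 >= 0 else s0

# exhaustive check for one small odd modulus
m = 11
assert all(mod_add(a, b, m) == (a + b) % m
           for a in range(m) for b in range(m))
print(mod_add(7, 9, 11))  # → 5
```

In hardware, the selection is just a multiplexer driven by the second adder's carry/borrow, which is why this structure trades area (two adders) for speed.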
The current trends of exponentially increasing clock frequency and increasing transistor count per die drive up power consumption, the die area dedicated to the clock distribution network, and the clock overhead incurred relative to the clock cycle time. Self-timed circuits may provide an alternative to synchronous circuit design that reduces the negative characteristics of the high-speed clocks needed by synchronous circuits. This work presents a gate-level performance model and transistor-level performance, power, and area approximations for both self-timed and static CMOS ripple-carry adders. The results show that, for uniformly random input operands, the average delay of a self-timed ripple-carry adder grows logarithmically with operand width, improving performance by 37% at the cost of a 30% increase in total transistor width compared to a static CMOS ripple-carry adder.
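The logarithmic average-case behavior can be illustrated with a quick simulation. For random operands, the ripple length is governed by the longest run of carry-propagate positions (bit positions where a_i XOR b_i = 1), whose expected length grows roughly as log2 n. The script below is a hedged illustration of that statistic, not the paper's model:

```python
import random

def longest_propagate_run(a, b, n):
    """Longest run of bit positions where a carry would ripple (a_i XOR b_i = 1)."""
    best = run = 0
    for i in range(n):
        if ((a >> i) & 1) ^ ((b >> i) & 1):
            run += 1
            best = max(best, run)
        else:
            run = 0
    return best

random.seed(1)
n, trials = 64, 2000
avg = sum(longest_propagate_run(random.getrandbits(n), random.getrandbits(n), n)
          for _ in range(trials)) / trials
print(avg)  # close to log2(64) = 6 rather than the worst case of 64
```

A self-timed adder that signals completion when the carry chain settles thus finishes, on average, after a logarithmic rather than linear delay.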
Although Dadda multipliers offer the greatest speed potential, with a delay proportional to log(n), they are rarely used in everyday designs because of their irregular structure and the ensuing difficulty of implementation. This paper presents a program that automatically generates HDL code describing a Dadda multiplier of a specified size. The resulting HDL code is then synthesized to a generic library in the TSMC13G process (0.13 μm). It is observed that delay increases only marginally when the multiplier size grows from 16 to 64 bits, while total area increases drastically.
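The log(n) delay comes from Dadda's column-height sequence 2, 3, 4, 6, 9, 13, ..., where each reduction stage brings the partial-product matrix height down to the next value in the sequence. A small sketch (our illustration, not the paper's generator) counts the stages:

```python
def dadda_stages(n):
    """Number of reduction stages for an n x n Dadda multiplier.
    Heights follow d_1 = 2, d_{j+1} = floor(1.5 * d_j); there is one
    stage per sequence value strictly below n."""
    d, stages = 2, 0
    while d < n:
        stages += 1
        d = int(d * 1.5)
    return stages

for bits in (16, 32, 64):
    print(bits, dadda_stages(bits))  # 16→6, 32→8, 64→10
```

Quadrupling the width from 16 to 64 bits adds only four reduction stages, which is consistent with the observation that delay grows marginally while area (the number of counters per stage) grows much faster.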
This paper describes truncated squarers, which are specialized squarers with a portion of the squaring matrix eliminated. Rounding error and errors due to matrix reduction are quantified and analyzed. Constant and variable correction techniques are presented that minimize either the mean error or the maximum absolute error as required by the application. Area and delay estimates are presented for a number of designs, as well as error statistics obtained both analytically and numerically by exhaustive simulation. As an example, one design of a 16-bit truncated squarer using constant correction is 10.1% faster and requires 27.9% less area than a comparable standard squarer with true rounding. The range of error for this truncated squarer is -0.892 to +0.625 ulps, compared to +/-0.5 ulps for the standard squarer.
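The reduction error of an uncorrected truncated squarer can be explored exhaustively for small word sizes. The sketch below is an illustration under our own conventions, not one of the paper's designs: it drops the partial-product columns below a cut-off r and measures the resulting error for every 8-bit input.

```python
def truncated_square(a, nbits=8, r=0):
    """Sum of partial products a_i * a_j * 2^(i+j), keeping only columns i+j >= r.
    With r = 0 this is exactly a squared."""
    s = 0
    for i in range(nbits):
        for j in range(nbits):
            if i + j >= r and (a >> i) & 1 and (a >> j) & 1:
                s += 1 << (i + j)
    return s

r = 6
errs = [a * a - truncated_square(a, 8, r) for a in range(256)]
# worst case: every dropped cell is a 1 (column c holds c + 1 cells for c < nbits)
bound = sum((c + 1) << c for c in range(r))
print(max(errs), sum(errs) / len(errs))
```

The measured mean error is what a constant-correction scheme would add back; a variable correction instead estimates the dropped columns from the retained bits.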
We describe a hardware-oriented design of a complex division algorithm. The algorithm is similar to a radix-r digit-recurrence division algorithm with real operands and prescaling. Prescaling of the complex operands allows efficient selection of complex quotient digits in a higher radix. The digit-recurrence method allows a hardware implementation similar to that of conventional dividers, and it makes correct rounding of the complex quotient possible. On the other hand, the proposed scheme requires prescaling tables that are more demanding than the tables in similar dividers with real operands. In this paper we present the main design ideas and implementation details, and give a rough estimate of the expected latency. We also compare it with the estimated latency of Smith's algorithm, which is used in software routines for complex division.
Residue Number Systems (RNS) distribute large dynamic range computations over small modular rings, which speeds up computation. This feature is well known, and is already used in both DSP and cryptography. Most implementations use RNS bases of three elements to reduce the complexity of conversions, but if we can increase the number of RNS modular computational channels, then we can compute over smaller rings and thus further increase the speed of computation. In this paper, we deal with conversion from RNS to RNS or from RNS to standard representations of numbers. The literature offers two classes of conversion algorithms: those based directly on the Chinese Remainder Theorem (CRT) and those which use an intermediate Mixed Radix System (MRS) representation. We analyze these two methods, show where the choice of the base is important, and discuss base selection criteria. We conclude that MRS conversions offer more possibilities than CRT conversions. We identify features of RNS bases that yield low complexity for both RNS computation and conversion, and we give examples of bases well suited to cryptographic applications.
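To make the two conversion routes concrete, here is a minimal Python sketch of both: direct CRT reconstruction and the mixed-radix route via successive digit extraction. The base (3, 5, 7) is an arbitrary toy example, not one of the bases proposed in the paper.

```python
def crt(res, mods):
    """Direct CRT: x = sum of r_i * M_i * (M_i^-1 mod m_i), reduced mod M."""
    M = 1
    for m in mods:
        M *= m
    x = 0
    for r, m in zip(res, mods):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # three-argument pow gives the modular inverse
    return x % M

def mixed_radix(res, mods):
    """MRS route: extract digits d_i, then x = d0 + d1*m0 + d2*m0*m1 + ..."""
    rs, digits = list(res), []
    for i, mi in enumerate(mods):
        d = rs[i] % mi
        digits.append(d)
        for j in range(i + 1, len(mods)):
            rs[j] = (rs[j] - d) * pow(mi, -1, mods[j]) % mods[j]
    x, w = 0, 1
    for d, m in zip(digits, mods):
        x += d * w
        w *= m
    return x

mods = (3, 5, 7)
res = [23 % m for m in mods]                   # residues (2, 3, 2)
print(crt(res, mods), mixed_radix(res, mods))  # → 23 23
```

Note that the MRS route needs only small modular operations per channel, whereas direct CRT works with values as large as the full dynamic range M.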
This paper is an attempt to put some theory behind previously unproved experimental statements about the double-base number system (DBNS). We use results from Diophantine approximation to address the problem of converting integers into DBNS. Although the material presented in this article is mainly theoretical, the proposed algorithm could lead to very efficient implementations.
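For context, the standard greedy conversion into DBNS, the empirical baseline that such theoretical analysis builds on, repeatedly subtracts the largest number of the form 2^a * 3^b not exceeding the remainder. A minimal sketch (ours, not the paper's algorithm):

```python
def dbns_greedy(n):
    """Greedy DBNS conversion: peel off the largest 2^a * 3^b <= n each step."""
    terms = []
    while n > 0:
        best, p3 = 1, 1
        while p3 <= n:
            q = n // p3
            best = max(best, (1 << (q.bit_length() - 1)) * p3)  # top power of 2
            p3 *= 3
        terms.append(best)
        n -= best
    return terms

print(dbns_greedy(127))  # → [108, 18, 1], i.e. 2^2*3^3 + 2*3^2 + 2^0*3^0
```

Greedy expansions are short in practice (sublinear in the bit length), and bounding their length rigorously is exactly where Diophantine approximation enters.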
Quantum-dot Cellular Automata (QCA) is a nanotechnology which has
potential applications in future computers. In this paper, a
method for reducing the number of majority gates (a QCA logic
primitive) is developed to facilitate the conversion of SOP
expressions of three-variable Boolean functions into QCA majority
logic. Thirteen standard functions are proposed to represent all
three-variable Boolean functions and the simplified majority
expressions corresponding to these standard functions are
presented. By applying this method, a one-bit QCA adder, with only
three majority gates and two inverters, is constructed. We will
show that the proposed method is very efficient and fast in
deriving the simplified majority expressions in QCA design.
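A three-majority-gate, two-inverter full adder of the kind described can be checked by brute force. The construction below is one known form (it may differ in detail from the paper's): with M the majority function, carry = M(a, b, c) and sum = M(carry', c, M(a, b, c')).

```python
def M(x, y, z):
    """Majority gate, the QCA logic primitive."""
    return (x & y) | (y & z) | (x & z)

def qca_full_adder(a, b, c):
    cout = M(a, b, c)                      # majority gate 1
    s = M(1 - cout, c, M(a, b, 1 - c))     # gates 2 and 3, plus two inverters
    return s, cout

# exhaustive check against ordinary binary addition
for v in range(8):
    a, b, c = (v >> 2) & 1, (v >> 1) & 1, v & 1
    s, cout = qca_full_adder(a, b, c)
    assert (cout << 1) | s == a + b + c
print("ok")
```

Minimizing majority-gate count matters in QCA because every AND/OR must be built from majority gates with a fixed input, so a direct SOP mapping is far more expensive than a simplified majority expression.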
This paper presents a C library for software support of single-precision floating-point (FP) arithmetic on processors without FP hardware units, such as VLIW or DSP processor cores for embedded applications. The library provides several levels of compliance with the IEEE 754 FP standard: either the complete specification can be used, or relaxed characteristics such as restricted rounding modes or computation without denormal numbers. The library is evaluated on the ST200 VLIW processors from STMicroelectronics.
Previous research shows the Signed Logarithmic Number System (SLNS) offers lower power consumption than the fixed-point number system for MPEG decoding. SLNS represents a value with the logarithm of its absolute value and a sign bit. Subtraction is harder in SLNS than other operations. This paper examines a variant, Dual-Redundant LNS (DRLNS), where addition and subtraction are equally easy, but DRLNS-by-DRLNS multiplication is not. DRLNS represents a value as the difference of two terms, both of which are represented logarithmically. DRLNS is appropriate for the Inverse Discrete Cosine Transform (IDCT) used in MPEG decoding because a novel accumulator register can contain the sum in DRLNS, but the products are fed to this accumulator in non-redundant SLNS format. Since DRLNS doubles the word size, the accumulator needs to be converted back into SLNS. This paper considers two such methods. One computes the difference of the two parts using LNS. The other converts the two parts separately to fixed point and then computes the logarithm of their difference. A novel factoring of a common term out of the two parts reduces the bus widths. Mitchell's low-cost logarithm/antilogarithm approximation is shown to produce acceptable visual results in this conversion.
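The accumulator idea can be sketched numerically. In the toy model below (our own simplification using natural logarithms, not the paper's hardware), a DRLNS value is a pair (P, N) with value e^P − e^N; an incoming SLNS product (sign, log|v|) is folded into P or N using the easy LNS addition, and only the final result is converted back out.

```python
import math

def log_add(x, y):
    """log(e^x + e^y): the cheap, subtraction-free LNS operation."""
    if x == -math.inf:
        return y
    if y == -math.inf:
        return x
    hi, lo = max(x, y), min(x, y)
    return hi + math.log1p(math.exp(lo - hi))

def drlns_accumulate(slns_products):
    """Accumulator value is e^P - e^N; positive products grow P, negative grow N."""
    P = N = -math.inf
    for sign, logmag in slns_products:
        if sign > 0:
            P = log_add(P, logmag)
        else:
            N = log_add(N, logmag)
    return P, N

vals = [0.5, -0.25, 1.5, -2.0]
prods = [(1 if v > 0 else -1, math.log(abs(v))) for v in vals]
P, N = drlns_accumulate(prods)
result = math.exp(P) - math.exp(N)   # conversion back out of DRLNS
print(result)  # → -0.25
```

Because addition never mixes the two parts, no LNS subtraction occurs inside the accumulation loop; the hard subtraction is deferred to the single final conversion, which is the trade-off the paper's two conversion methods address.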
The Logarithmic Number System (LNS) has lower power and larger dynamic range than fixed point, which makes LNS suitable for designing low-power, portable devices. Motion estimation is a key part of the MPEG encoding system. This paper introduces LNS into motion estimation for the MPEG encoding system. The block-matching technique is the most commonly used motion-estimation method in MPEG encoding. The Mean Absolute Difference (MAD) is an inexpensive fixed-point cost function, which uses the sum of the absolute differences of the pixel values in the reference and encoded frames. Since LNS addition and subtraction are expensive, we propose using the quotient of two pixel values instead of their difference. LNS division needs only a fixed-point subtractor. Similar to the absolute difference, we take the quotient of the larger value over the smaller. We call this new cost function the Mean Larger Ratio (MLR). The product of such ratios is calculated for each of the macroblocks in MPEG frames. Using MLR, the LNS implementation requires approximately the same hardware as a fixed-point MAD implementation. Example videos show MLR provides a practical cost function to perform motion estimation with LNS.
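The cost-function substitution is easy to state in code. In the sketch below (an illustration with made-up pixel blocks; pixel values must be positive), MAD sums absolute differences, while MLR in the log domain sums |log(ref) − log(cur)|, which is exactly the log of the product of larger-over-smaller ratios and needs only fixed-point subtraction in LNS.

```python
import numpy as np

def mad(ref, cur):
    """Mean Absolute Difference: the usual fixed-point block-matching cost."""
    return np.mean(np.abs(ref - cur))

def mlr_log(ref, cur):
    """Log of the product of larger-over-smaller pixel ratios.
    In LNS the pixel values are already logs, so this is pure subtraction."""
    return np.sum(np.abs(np.log(ref) - np.log(cur)))

ref = np.array([[10.0, 20.0], [30.0, 40.0]])
print(mad(ref, ref), mlr_log(ref, ref))   # both 0 for a perfect match
print(mlr_log(ref, ref * 1.1))            # 4 * log(1.1) for a uniform mismatch
```

Both costs are zero for a perfect match and grow with mismatch, which is all block matching requires of a cost function.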
A new class of nonlinear filters for color image processing was proposed by Lucchese and Mitra. This type of color filter processes the chromatic component of images encoded in the International Commission on Illumination (CIE) u'v' color space. Images processed by this filter do not show color shifts near edges between regions with different intensities. The filter uses linear convolution operations internally and is effective and efficient for denoising and regularizing color images. Image processing systems are computationally intensive and usually require a large amount of area in order to reach desirable levels of performance. The use of on-line arithmetic can decrease the area of the hardware implementation and still maintain a reasonable throughput. This work presents the design of the color filter as a network of on-line arithmetic modules. The network topology and some detail of each arithmetic module are provided. The final implementation targets FPGAs and it is compared in terms of area against an estimate of a conventional design. The throughput of this solution is capable of supporting real-time processing of common image formats.
Interceptor missiles process IR images to locate an intended target
and guide the interceptor towards it. Signal processing requirements
have increased as the sensor bandwidth increases and interceptors
operate against more sophisticated targets. A typical interceptor
signal processing chain is comprised of two parts. Front-end video
processing operates on all pixels of the image and performs such
operations as non-uniformity correction (NUC), image stabilization,
frame integration and detection. Back-end target processing, which
tracks and classifies targets detected in the image, performs such
algorithms as Kalman tracking, spectral feature extraction and target
discrimination.
In the past, video processing was implemented using ASIC components or
FPGAs because computation requirements exceeded the throughput of
general-purpose processors. Target processing was performed using
hybrid architectures that included ASICs, DSPs and general-purpose
processors. The resulting systems tended to be function-specific, and
required custom software development. They were developed using
non-integrated toolsets and test equipment was developed along with
the processor platform. The lifespan of a system utilizing the signal
processing platform often spans decades, while the specialized nature
of processor hardware and software makes it difficult and costly to
upgrade. As a result, the signal processing systems often run on
outdated technology, algorithms are difficult to
update, and system effectiveness is impaired by the inability to
rapidly respond to new threats.
A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput, a newly introduced vector computing capability in general-purpose processors, and a modern set of open interface software standards. Today's
multiprocessor commercial-off-the-shelf (COTS) platforms have
sufficient throughput to support interceptor signal processing
requirements. This application may be programmed under existing
real-time operating systems using parallel processing software
libraries, resulting in highly portable code that can be rapidly
migrated to new platforms as processor technology evolves. This approach enables the use of standardized development tools, third-party software upgrades, and rapid replacement of processing components as improved algorithms are developed. The resulting weapon system will have superior processing capability compared with a custom approach at the time of deployment, as a result of a shorter development cycle and the use of newer technology. The signal processing computer may be upgraded over the lifecycle of the weapon system, and its ease of modification allows it to migrate between weapon system variants.
This paper presents a reference design using the new approach that
utilizes an Altivec PowerPC parallel COTS platform. It uses a
VxWorks-based real-time operating system (RTOS), and application code
developed using an efficient parallel vector library (PVL). A
quantification of computing requirements and demonstration of
interceptor algorithm operating on this real-time platform are
provided.
We describe a concept in which an array of coupled nonlinear
oscillators is used for beamforming in phased array receivers. The
signal that each sensing element receives, beam steered by time
delays, is input to a nonlinear oscillator. The nonlinear
oscillators for each element are in turn coupled to each other.
For incident signals sufficiently close to the steering angle, the
oscillator array will synchronize to the forcing signal whereas
more obliquely incident signals will not induce synchronization.
The beam pattern that results can show a narrower mainlobe and
lower sidelobes than the equivalent conventional linear
beamformer. We present a theoretical analysis to explain the beam
pattern of the nonlinear oscillator array.
The aim of this paper is to present a high data rate transmission system for the ionospheric channel in the HF band (3-30 MHz). The intended applications are image transmission and real-time videoconferencing. Very high rates are required compared to standard modems (4.8 kbit/s). Therefore, array processing is performed in the multichannel receiving system. Its main originality lies in a compact device of collocated antennas whose spatial responses differ from one another. Synchronization (zero-crossing detector) and spatio-temporal equalization (LMS algorithm) rely on classical, well-tested techniques involving training sequences. An experimental radio link with a range of 800 km has been under test. The results underline the improvement in bit rate, which reaches 20 kbit/s (16-QAM in a 6 kHz bandwidth) or 30 kbit/s (16-QAM in a 9 kHz bandwidth). Several transmitted images are presented and appear quite consistent with the originals.
In this paper, we report on efforts to develop signal processing methods appropriate for the detection of man-made electromagnetic signals in the nonlinear and nonstationary underwater electromagnetic
noise environment of the littoral. Using recent advances in time series analysis methods [Huang et al., 1998], we present new techniques for detection and compare their effectiveness with conventional signal processing methods, using experimental data from recent field experiments. These techniques are based on an empirical mode decomposition which is used to isolate signals to be detected from noise without a priori assumptions. The decomposition generates a physically motivated basis for the data.
Most previous amplitude- and frequency-modulation (AM and FM) decomposition methods assume that the AM component is non-negative. However, this assumption is not always valid. Over-modulation, where the AM component takes both positive and negative values, may be present not only in synthetic signals but also in natural signals such as speech and music. Assuming a non-negative AM component in an over-modulated signal introduces significant phase discontinuities in the FM estimate. Because of this, previous methods yield significant errors in instantaneous frequency (IF) estimation at AM zero-crossings. We propose a two-step algorithm that uses coherent demodulation to estimate AM and FM correctly for over-modulated signals. For synthetic signals, the algorithm produces very accurate AM and FM estimates; for band-passed speech signals, it corrects the discontinuities in the FM estimate and produces more physically reasonable results. An evaluation of the algorithm's sensitivity shows that estimation errors generally increase with AM and FM frequencies but are insensitive to carrier frequency. Robustness in noise is relatively low in the over-modulation case because of the very low local SNR at AM zero-crossings. Limitations of the algorithm and future work are also discussed.
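The failure mode is easy to reproduce. In the numpy sketch below (our illustration, not the paper's algorithm), a carrier is over-modulated by an AM term that goes negative; forcing the envelope non-negative, as the magnitude of the analytic signal does, pushes π phase jumps into the FM, so the instantaneous-frequency estimate spikes at the AM zero-crossings.

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
am = np.cos(2 * np.pi * 5.0 * t)          # goes negative: over-modulation
x = am * np.cos(2 * np.pi * 100.0 * t)    # carrier at 100 Hz

# analytic signal via an FFT-based Hilbert transform
X = np.fft.fft(x)
h = np.zeros(len(x))
h[0] = h[len(x) // 2] = 1.0
h[1:len(x) // 2] = 2.0
z = np.fft.ifft(X * h)

# the envelope |z| equals |am| (non-negative), so the AM sign flips
# are absorbed into the phase as pi-sized discontinuities
inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
err = np.abs(inst_f - 100.0)
print(np.median(err), err.max())   # tiny almost everywhere, huge at zero-crossings
```

A coherent-demodulation approach of the kind the paper proposes recovers a signed AM instead, leaving the FM estimate smooth through the crossings.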
When a continuous-time signal is sampled at a rate less than the Nyquist criterion, the signal is aliased. This distortion is usually irrecoverable. However, we show that for certain AM-FM signals, the distortion due to aliasing can be mitigated and an unaliased version of the signal can be recovered from its aliased samples. We present a method for determining whether or not a signal has potentially been distorted by aliasing, and an algorithm for recovering an unaliased version of the signal. The method is based on the manifestation of aliasing in the time-frequency plane, and estimating the instantaneous phase/frequency of the aliased signal.
We describe an algorithm to accurately estimate the carrier frequency
of a single sideband HF speech signal. The algorithm is based on an
outer product applied to a complex-valued time-frequency representation in which instantaneous frequency is encoded as the complex argument and the spectrogram is encoded as magnitude. Simple matrix operations are applied to isolate and estimate the carrier. The algorithm is fast, efficient, easily coded and converges rapidly to a very accurate carrier estimate.
We discuss pulse propagation in a dispersive medium with damping. We derive an explicit expression for the center of mass motion and show that when there is damping the center of mass does not travel with constant velocity, as is the case when there is no damping. We also derive an explicit relation connecting pulse propagation in the damped case with that of the undamped case. This allows the transformation from one case to the other. A number of exactly solvable examples are given to illustrate the equations derived.
We consider dispersive propagation with damping in phase-space, and derive the Wigner distribution in terms of the initial wave and the dispersion relation. The case for no damping, that is, lossless propagation, has been previously considered by Cohen, and is a special case of the more general result presented here. Simple and physically revealing approximations of the Wigner distribution in terms of the initial Wigner distribution are presented. Also, exact low-order conditional moments are given and their interpretation is discussed.
Information processing theory aims to quantify how well signals
encode information and how well systems process information.
Time-frequency distributions have been used to represent the
energy distribution of time-varying signals for the past twenty
years. Much research has addressed the properties of
these representations; however, there is a general lack of
quantitative analysis describing the amount of information
encoded in a time-frequency distribution. This paper aims to
quantify how well time-frequency distributions represent
information by using information-theoretic distance measures.
Different distance measures, such as the Kullback-Leibler and
Rényi distances, will be adapted to the time-frequency plane.
Their performance in quantifying the information in a given signal
will be compared. A sensitivity analysis for different distance
measures will be carried out to assess their robustness under
perturbation. Different example signals will be considered for
illustrating the information processing in time-frequency
distributions.
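As a concrete instance of such a measure, the Rényi entropy of a time-frequency distribution normalized to a unit-sum distribution rewards concentration: a single tone scores much lower than white noise. The sketch below (our illustration; the window parameters and order α = 3 are arbitrary choices, not the paper's) computes it from a spectrogram in numpy.

```python
import numpy as np

def spectrogram(x, win=64, hop=16):
    """Magnitude-squared STFT with a Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def renyi_entropy(tfd, alpha=3):
    """Renyi entropy (in bits) of a TFD treated as a 2-D probability distribution."""
    P = tfd / tfd.sum()
    return np.log2((P ** alpha).sum()) / (1 - alpha)

rng = np.random.default_rng(0)
n = np.arange(2048)
h_tone = renyi_entropy(spectrogram(np.sin(2 * np.pi * 0.1 * n)))
h_noise = renyi_entropy(spectrogram(rng.standard_normal(2048)))
print(h_tone, h_noise)   # the concentrated tone has the lower entropy
```

The same normalization step is where the spectrogram's non-negativity matters; distributions that can go negative, such as the Wigner distribution, need additional care before entropy-type measures apply.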
We obtain the exact Wigner spectrum for non-white Gaussian noise. As a special case the Wigner spectrum for the Wiener process is obtained.
We describe a new linear time-frequency paradigm in which the
instantaneous value of each signal component is mapped to the
curve functionally representing its instantaneous frequency.
The transform by which this surface is generated is linear,
uniquely defined by the signal decomposition and satisfies linear
marginal-like distribution properties. We further demonstrate
that such a surface may be estimated from the short time Fourier
transform by a concentration process based on the phase of the STFT
differentiated with respect to time. Interference may be identified
on the concentrated STFT surface, and the signal with the interference
removed may be estimated by applying the linear time marginal to the
concentrated STFT surface from which the interference components have
been removed.
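The concentration step described, reassigning each STFT coefficient to the frequency given by the time-derivative of its phase, can be sketched as follows. This is our simplified single-tone illustration (one-sample phase differences, magnitude reassignment only), not the authors' full transform.

```python
import numpy as np

fs, n, win = 1000, 1200, 128
t = np.arange(n) / fs
x = np.cos(2 * np.pi * 150.0 * t)
w = np.hanning(win)

starts = range(0, n - win - 1, 4)
S = np.array([np.fft.rfft(x[i:i + win] * w) for i in starts])
S1 = np.array([np.fft.rfft(x[i + 1:i + 1 + win] * w) for i in starts])

# instantaneous frequency from the STFT phase, differentiated over one sample
inst_f = np.angle(S1 * np.conj(S)) * fs / (2 * np.pi)

mag2 = np.abs(S) ** 2
conc = np.zeros_like(mag2)
for ti, row in enumerate(mag2):
    for k, e in enumerate(row):
        if e > 0.01 * mag2.max():                      # skip negligible bins
            kk = int(round(inst_f[ti, k] * win / fs))  # reassigned frequency bin
            if 0 <= kk < mag2.shape[1]:
                conc[ti, kk] += e

peak = int(round(150.0 * win / fs))
frac = conc[:, peak - 1:peak + 2].sum() / conc.sum()
print(frac)   # nearly all energy lands on the 150 Hz bin after concentration
```

For a pure tone, every main-lobe bin reports the same phase derivative, so the smeared window response collapses onto the true instantaneous-frequency curve; interference removal then amounts to masking unwanted curves on this concentrated surface.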
In electric power systems, the flow of electric power is an
important issue for the control and management of the system.
However, under transient-states caused by electrical disturbances,
it is not a simple task to determine the flow of transient
disturbance energy in an analytic way with high accuracy. The
proposed algorithm for the determination of transient disturbance
energy flow is based on cross time-frequency analysis that
provides time- and frequency-localized phase difference
information. Hence, based on the cross time-frequency distribution
of the transient voltage and current, the classical parameters in
power systems are modified for transient analysis. The transient
power factor angle will determine the direction of transient
disturbance energy (real and reactive) flows in power distribution
system networks. For the verification of the proposed algorithm, a
practical model of a power system is simulated by EMTP
(Electromagnetic Transient Program). In addition, knowledge of
this nature should greatly facilitate automatic identification of
transient events and determination of the physical location of the
source of various transient disturbances.
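A stripped-down stand-in for the cross time-frequency idea: the phase of the cross-spectrum of a voltage and a current at the dominant frequency bin estimates the (power factor) angle between them. All signal parameters below are illustrative, and a single windowed FFT frame replaces the full cross time-frequency distribution:

```python
import numpy as np

fs, f0, phi = 1000.0, 60.0, np.pi / 6      # 60 Hz tone, 30-degree lag (assumed)
t = np.arange(1024) / fs
v = np.cos(2 * np.pi * f0 * t)             # voltage
i = np.cos(2 * np.pi * f0 * t - phi)       # current, lagging by phi

win = np.hanning(256)
V = np.fft.rfft(win * v[:256])             # one short-time spectral frame
I = np.fft.rfft(win * i[:256])

k = np.argmax(np.abs(V))                   # bin of the 60 Hz component
# Cross-spectral phase at that bin = localized phase difference.
angle = np.angle(V[k] * np.conj(I[k]))
```

The recovered `angle` matches the imposed lag `phi`; in the paper this localized phase difference, tracked over time and frequency, determines the direction of transient energy flow.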
A method aimed at approximating the solution to differential equations with driving terms, whose solutions are non-stationary signals, is described. The author has previously examined, using the approach of Galleani and Cohen, the validity of the approximation method in phase space using the Wigner distribution. He applied the method to second-order differential equations and used as driving terms a variety of forcing functions that have smooth and monotonically increasing phase functions. By examining the results, insight is gained into the nature of the solution and the associated dynamics of the system. This paper examines the approximation method when the spectrogram, the most widely used time-frequency distribution, is employed. The results show that the approximation scheme works very well for the spectrogram and in many cases works better than for the Wigner distribution.
In this paper, we propose two-dimensional (2-D) frequency
modulated (FM) signals for digital watermarking. The hidden
information is embedded into an image using the binary phase
information of a 2-D FM prototype. The original image is properly
partitioned into several blocks. In each block, a 2-D FM watermark
waveform is used and the watermark information is embedded using
the binary phase. The parameters of the FM watermark are selected
in order to achieve low bit error rate (BER) detection of the
watermark. Detailed study of performance analysis and parameter
optimizations is performed for 2-D chirp signals as an example of
2-D FM waveforms. Experimental results compare the proposed
methods and support their effectiveness.
For functions with uniform samples, and for functions with non-uniform samples, methods exist to recover the function that generated the samples when certain well-known conditions apply. Both of these problems presuppose that the sample locations are known. Fewer papers address samples with unknown locations. This paper outlines an algorithm for finding the sample locations when they are unknown, based on the signal being bandlimited. In comparison to previous work on this subject, this paper assumes the samples to be in continuous time, and so can easily handle a large number of samples.
We consider communications and network systems whose properties are characterized by the gaps of the leading eigenvalues of A^H A for some matrix A, where A^H denotes the conjugate transpose of A. We show that a necessary and sufficient condition for a large eigen-gap is that A is a "hub" matrix in the sense that it has dominant columns. We describe an application of this hub theory in multiple-input and multiple-output (MIMO) wireless systems.
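A small numerical sketch of the hub property (toy matrix only; the paper's formal hub definition and eigen-gap bounds are not reproduced here): scaling one column of a random matrix to dominate the others opens a large gap between the two leading eigenvalues of A^H A.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 4))
A[:, 0] *= 10.0                 # make column 0 dominant -> A is a "hub" matrix

# Eigenvalues of A^H A (A is real here, so A^H = A^T), sorted descending.
eigvals = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]
gap = eigvals[0] / eigvals[1]   # leading eigen-gap ratio
```

With the dominant column, `gap` is large; removing the scaling collapses it to order one, which is the correspondence the paper makes precise.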
A new constant modulus algorithm (CMA) type of error function for fast phase recovery of QAM signals was recently proposed, in which the cost function recovers the phase simultaneously with the equalization. In this paper we propose a variable step (VS) size in the update equations of the new error function to improve the convergence speed and the residual inter-symbol interference (ISI). The effectiveness of the error function is presented for a well-behaved channel, an under-modeled channel, and channel disparity. Simulation results are presented to compare the performance of VS, constant step (CS), and the modified CMA (MCMA) using a 16-QAM signal constellation.
Over the horizon radar (OTHR) is a well developed sensor technology in established use for long-range air and surface surveillance. More detailed information about the targets can be achieved by the simultaneous operation of multiple OTHRs. However, a key limitation of HF radar is the conflict between selection of an appropriate operating frequency and the demand for radar waveform bandwidth commensurate with the range resolution requirement of the radar. In this paper, we consider the simultaneous operation of two over-the-horizon radar systems that use the same frequency band with different chirp waveforms to meet advanced wide-area surveillance needs without reducing the pulse repetition frequency. A cross-radar interference cancellation technique is proposed and shown to be effective.
In this paper, we propose a method for tracking the instantaneous equivalent bandwidth (IEBW) of non-stationary random signals. The IEBW is defined on the positive time-frequency distribution of a non-stationary random signal using Renyi entropy; it is a natural extension of the equivalent bandwidth of stationary random signals. In order to obtain a positive time-frequency distribution satisfying the marginals of a random signal, we slightly modified a copula-based time-frequency technique. We then show the results of two simple computer simulations, which demonstrate that the method presented here can track the IEBW of random signals properly. We also applied the method to track changes in the IEBW of the heart sound. The results suggest that tracking the IEBW could be a useful index for automatic diagnosis of heart disease.
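A minimal sketch of a Renyi-entropy equivalent bandwidth (the functional form below, N_eff = 2^H_alpha spectral bins, is an assumed illustration; the paper's exact IEBW definition on the positive time-frequency distribution may differ):

```python
import numpy as np

def renyi_bandwidth(p_spectrum, df, alpha=3):
    """Equivalent bandwidth from the Renyi entropy of a spectrum.

    Assumed form: H_alpha = log2(sum p**alpha) / (1 - alpha) for the
    normalized spectrum p, and bandwidth = 2**H_alpha * df.
    """
    p = p_spectrum / p_spectrum.sum()
    h = np.log2(np.sum(p ** alpha)) / (1 - alpha)
    return (2.0 ** h) * df

df = 1.0                            # bin width (arbitrary units)
flat = np.ones(8)                   # power spread evenly over 8 bins
peaky = np.zeros(8); peaky[0] = 1.0 # power concentrated in one bin
```

The measure behaves as an equivalent bandwidth should: the flat spectrum yields 8 bins' worth of bandwidth, the single-bin spectrum yields 1. Applied per time slice of a positive time-frequency distribution, it tracks bandwidth instantaneously.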
The insertion of a suitably designed phase plate in the pupil of an imaging system makes it possible to encode the depth dimension of an extended three-dimensional scene by means of an approximately shift-invariant PSF. The so-encoded image can then be deblurred digitally by standard image recovery algorithms to recoup the depth-dependent detail of the original scene. A similar strategy can be adopted to compensate for certain monochromatic aberrations of the system. Here we consider two somewhat complementary approaches to optimizing the design of the phase plate: one based on Fisher information, which attempts to reduce the sensitivity of the phase-encoded image to misfocus, and the other based on a minimax formulation of the sum of singular values of the system blurring matrix, which attempts to maximize the resolution in the final image. Comparisons of these two optimization approaches are discussed. Our preliminary demonstration of the use of such pupil-phase engineering to successfully control system aberrations, particularly spherical aberration, is also presented.
Computational imaging systems are modern systems that consist of generalized aspheric optics and image processing capability. These systems can be optimized to greatly increase the performance above systems consisting solely of traditional optics. Computational imaging technology can be used to advantage in iris recognition applications. A major difficulty in current iris recognition systems is a very shallow depth-of-field that limits system usability and increases system complexity. We first review some current iris recognition algorithms, and then describe computational imaging approaches to iris recognition using cubic phase wavefront encoding. These new approaches can greatly increase the depth-of-field over that possible with traditional optics, while keeping sufficient recognition accuracy. In these approaches the combination of optics, detectors, and image processing all contribute to the iris recognition accuracy and efficiency. We describe different optimization methods for designing the optics and the image processing algorithms, and provide laboratory and simulation results from applying these systems, including results on restoring the intermediate phase-encoded images using both direct Wiener filtering and iterative conjugate gradient methods.
To achieve scale- and rotation-invariant pattern recognition, we implemented a synthetic discriminant function (SDF) based reference image in a nonzero-order Fringe-adjusted Joint Transform Correlator (FJTC) using a binary random phase mask. The binary random phase mask encodes the SDF-based reference image before it is introduced into the joint input image. The joint power spectrum is then multiplied by the phase mask to remove the zero-order term and the false alarms that may be generated in the correlation plane due to the presence of multiple identical target or non-target objects in the input scene. A detailed analysis of the proposed SDF-based nonzero-order FJTC using a binary random phase mask is presented. Simulation results verify the effectiveness of the proposed technique.
Optical light microscopy is a predominant modality for imaging living cells, with the maximum resolution typically diffraction limited to approximately 200 nm. The objective of this project is to enhance the resolution capabilities of optical light microscopes using image-processing algorithms, to produce super-resolved imagery at a sub-pixel level. The sub-pixel algorithm is based on maximum-likelihood iterative deconvolution of photon-limited data, and reconstructs the image at a finer scale than the pixel limitation of the camera. The software enhances the versatility of light microscopes, and enables the observation of sub-cellular components at a resolution two to three times finer than previously possible. Adaptive blind deconvolution is used to automatically determine the point spread function from the observed data. The technology also allows camera-binned or sub-sampled (aliased) data to be correctly processed. Initial investigations used computer simulations and 3D imagery from widefield epi-fluorescence light microscopy.
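Maximum-likelihood iterative deconvolution of photon-limited (Poisson) data is commonly realized as the Richardson-Lucy update. A minimal 1-D sketch with a known PSF follows; the project's version is blind (the PSF is estimated from the data), operates on 3-D stacks, and reconstructs at sub-pixel scale, none of which is shown here:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """Richardson-Lucy ML deconvolution for Poisson-noise imaging data.

    Minimal 1-D, known-PSF sketch: repeatedly blur the estimate, compare
    with the observation, and correct by the back-projected ratio.
    """
    psf_flip = psf[::-1]                              # correlation kernel
    estimate = np.full_like(observed, observed.mean())
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Two point sources blurred by a Gaussian PSF.
x = np.zeros(64); x[20] = 1.0; x[40] = 0.5
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2); psf /= psf.sum()
blurred = np.convolve(x, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

After a few dozen iterations the two blurred sources sharpen back toward points, illustrating how iterative ML deconvolution recovers detail below the blur scale.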
We develop a parametric, shape-based image reconstruction algorithm
for the joint reconstruction of the optical absorption and diffusion coefficients in the brain using diffuse optical tomographic data. Specifically, we study the recovery of the geometry of an unknown number of 2D closed contours located on a 2D manifold (the cortex) in 3-space. We describe an approach for a brain model in which we assume the existence of a one-to-one map from the surface of the cortex to a subset of the plane. We use a new, parametric level set approach to map shapes on the plane to structures on the cortex. Our optimization-based reconstruction algorithm evolves shapes on the plane while finding absorption and reduced scattering values inside each shape. Preliminary numerical simulation results show the promise of our approach.
Ultrasound tomography is a bioimaging method that combines the geometry of X-ray computed tomography with the non-ionizing energy of ultrasound. This modality has potential clinical utility in breast cancer screening and diagnosis. In conventional ultrasound tomography, data sets from different interrogation angles are used to reconstruct an estimate of a biomechanical property of the tissue, such as sound velocity, in the form of an image. Here we describe an alternative method of reconstruction using novel algorithms which weight the data based on a "quality" score. The quality score is derived from beamforming characteristics, for example, the weighting of angle-dependent data by its distance from the transmit focal zones. The new approach is that for each data set (taken at a different view angle), the reliability of the data (in the range dimension) is assumed to vary. By fusing (combining) the data based on the quality score, a complete image is formed. In this paper, we describe the construction of a rotational translation stage and tissue-mimicking phantoms that are used in conjunction with a commercial medical ultrasound machine to test our reconstruction algorithms. The new algorithms were found to increase the contrast-to-speckle ratio of simulated cysts by 114% from raw data over a 77% improvement by spatial compounding (averaging), and to decrease wire target width by 54% over a 39% reduction by spatial compounding alone. The new method shows promise as a computationally efficient method of improving contrast and resolution in ultrasound images.
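A toy sketch of quality-weighted fusion (assumed form: a per-pixel weighted average of view-angle images, with weights derived from a quality score; here the score is simply inverse noise variance, not the beamforming-based score used in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)
truth = np.zeros(shape); truth[24:40, 24:40] = 1.0   # toy "cyst" target

views, weights = [], []
for k in range(8):                       # 8 interrogation angles (toy model)
    noise_level = 0.1 + 0.1 * k          # later views assumed less reliable
    views.append(truth + rng.normal(0, noise_level, shape))
    weights.append(1.0 / noise_level**2) # quality score: inverse variance

views = np.stack(views)
w = np.asarray(weights)[:, None, None]
fused = (w * views).sum(axis=0) / w.sum()  # quality-weighted fusion
plain = views.mean(axis=0)                 # plain spatial compounding

err_fused = np.mean((fused - truth) ** 2)
err_plain = np.mean((plain - truth) ** 2)
```

Weighting by data reliability beats the unweighted average, which mirrors the paper's finding that quality-score fusion outperforms plain spatial compounding.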
Early detection of tissue changes in a disease process is of utmost interest and a challenge for non-invasive imaging techniques. Texture is an important property of image regions, and many texture descriptors have been proposed in the literature. In this paper we introduce a new approach to texture descriptors and texture grouping. Some applications, e.g. shape from texture, require the denser sampling provided by the pseudo-Wigner distribution (PWD). Therefore, the first scheme is modular pattern detection in textured images based on the PWD followed by a PCA stage. The second scheme is a direct local frequency analysis obtained by splitting the PWD spectra following a "cortex-like" structure. As an alternative technique, a Gabor multiresolution approach was considered. Gabor functions constitute a family of band-pass filters that gather the most salient properties of spatial-frequency and orientation selectivity. This paper presents a comparison of time-frequency methods based on the PWD with sparse filtering approaches using a Gabor-based multiresolution representation. The performance of these methods is evaluated for the segmentation of synthetic texture mosaics and of osteoporosis images.
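To make the Gabor alternative concrete, here is a minimal oriented Gabor kernel and its response to a striped texture (parameter names and values are illustrative, not the paper's filter bank):

```python
import numpy as np

def gabor_kernel(size, freq, theta, sigma):
    """Real part of a 2-D Gabor band-pass kernel: a Gaussian envelope
    modulating a sinusoid at spatial frequency `freq`, orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

# A vertical-stripe texture responds strongly to the matched orientation
# and weakly to the orthogonal one -- the basis of Gabor texture grouping.
y, x = np.mgrid[0:32, 0:32]
texture = np.cos(2 * np.pi * 0.2 * x)
k_match = gabor_kernel(15, 0.2, 0.0, 3.0)
k_ortho = gabor_kernel(15, 0.2, np.pi / 2, 3.0)

patch = texture[8:23, 8:23]
resp_match = np.abs(np.sum(patch * k_match))
resp_ortho = np.abs(np.sum(patch * k_ortho))
```

The orientation selectivity shown here is what lets a bank of such filters separate differently oriented texture regions in a mosaic.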
Many iterative methods that are used to solve Ax=b can be derived as quasi-Newton methods for minimizing the quadratic function (1/2) x^T A^T A x - x^T A^T b. In this paper, several such methods are considered, including conjugate gradient least squares (CGLS), Barzilai-Borwein (BB), residual norm steepest descent (RNSD) and Landweber (LW). Regularization properties of these methods are studied by analyzing the so-called "filter factors". The algorithm proposed by Barzilai and Borwein is shown to have very favorable regularization and convergence properties. Second, we find that preconditioning can result in much better convergence properties for these iterative methods.
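For the Landweber case the filter factors have a closed form: after k steps of x_{k+1} = x_k + w A^T (b - A x_k) from x_0 = 0, the SVD components of the solution are damped by f_i = 1 - (1 - w s_i^2)^k. A small numerical check (toy problem; step size and iteration count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))
x_true = rng.standard_normal(10)
b = A @ x_true

U, s, Vt = np.linalg.svd(A, full_matrices=False)
w = 1.0 / s[0] ** 2          # step size, ensures 0 <= 1 - w*s_i^2 < 1
k = 50                       # number of Landweber iterations

x = np.zeros(10)
for _ in range(k):           # Landweber: gradient step on (1/2)||Ax - b||^2
    x = x + w * A.T @ (b - A @ x)

# Equivalent filtered SVD solution: components with large singular values
# are reconstructed first, small ones stay damped -- the regularizing
# behaviour that the filter-factor analysis makes explicit.
filters = 1.0 - (1.0 - w * s ** 2) ** k
x_filtered = Vt.T @ (filters * (U.T @ b) / s)
```

The iterate and the filtered SVD solution coincide, so the number of iterations k plays the role of a regularization parameter.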
We present three orthogonalization schemes for stabilizing Lanczos tridiagonalization of a complex symmetric matrix.
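For context, a minimal complex symmetric Lanczos sketch with full reorthogonalization in the unconjugated bilinear form x^T y (one basic stabilization option; the paper's three schemes are more refined and are not reproduced here):

```python
import numpy as np

def cs_lanczos(A, q0, m):
    """Lanczos basis for a complex *symmetric* A (A == A.T, not Hermitian),
    using the unconjugated bilinear form x.T @ y, with full
    reorthogonalization against all previous basis vectors."""
    n = A.shape[0]
    Q = np.zeros((n, m), dtype=complex)
    q = q0 / np.sqrt(q0 @ q0)              # bilinear normalization q.T q = 1
    for j in range(m):
        Q[:, j] = q
        w = A @ q
        for i in range(j + 1):             # full reorthogonalization
            w -= (Q[:, i] @ w) * Q[:, i]   # note: no conjugation
        q = w / np.sqrt(w @ w)             # can break down if w.T w ~ 0
    return Q

rng = np.random.default_rng(4)
B = rng.standard_normal((12, 12)) + 1j * rng.standard_normal((12, 12))
A = B + B.T                                # complex symmetric, not Hermitian
Q = cs_lanczos(A, rng.standard_normal(12) + 0j, 6)
T = Q.T @ A @ Q                            # should be (nearly) tridiagonal
```

Because Q is orthonormal in the bilinear sense and A is symmetric, Q.T A Q is complex symmetric tridiagonal; the stabilization question the paper addresses is how to keep this property (and avoid breakdown) in finite precision without the full reorthogonalization cost.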
Many experiments that require a highly accurate continuous time history of photon emission incorporate streak cameras into their setup. Nonlinear recordings both in time and spatial displacement are inherent to streak camera measurements. These nonlinearities can be attributed to sweep rate electronics, curvature of the electron optics, the magnification, and the resolution of the electron optics. These nonlinearities are systematic; it has been shown that a short pulse laser source, an air-spaced etalon of known separation, and a defined spatial resolution mask can provide the proper image information to correct for the resulting distortion. A set of Interactive Data Language (IDL) software routines was developed to take a series of calibration images showing temporally and spatially displaced points and map these points from a nonlinear to a linear space-time resultant function. This correction function, in combination with standardized image correction techniques, can be applied to experimental data to minimize systematic errors and improve temporal and spatial resolution measurements.
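A one-dimensional toy of the linearization step (the distortion model and fit degree below are illustrative; the IDL routines calibrate both the temporal and spatial axes from real etalon and mask images):

```python
import numpy as np

# Calibration points: etalon pulses at known, equally spaced true times,
# observed at positions distorted by a nonlinear sweep (toy model).
true_t = np.arange(10) * 1.0
observed = 5.0 * true_t + 0.08 * true_t**2   # assumed sweep nonlinearity

# Fit a polynomial correction function: observed position -> linear time.
coeffs = np.polyfit(observed, true_t, deg=3)
corrected = np.polyval(coeffs, observed)
```

Applying the fitted correction to the calibration points recovers the uniform time axis to well within a pulse spacing; the same function then linearizes experimental images.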
A streak camera is a recording instrument in which a spatial image is swept in time, creating a spatial-temporal image on a charge-coupled device (CCD). Traditional analysis of captured image data has used a uniform grid of sampling points, in which a block of CCD pixel readouts is summed to give one reading. Equivalently, simple area moving averages are applied concurrently with sampling, and high-frequency content is reduced. To solve this problem, we use a peak-value sampling procedure motivated by photoelectron statistics. After background correction, maximum values in the spatial dimensions are selected to obtain a time series. A DSP filter is then applied and optimized for this time series. A Welch-algorithm fast Fourier transform is applied to obtain power spectra. Segmented cumulative spectra are then calculated for global statistics and related to time-domain fluctuations. Self-similarity at different sweep time-scales is used to recognize CCD pattern noise. Sinusoidal pattern noise is automatically corrected by peak-value sampling. Computational results show that time-frequency analysis using the peak-value sampling algorithm and similar variants is far more effective at discovering high-frequency oscillatory noise than traditional uniform binned sampling. We have applied this algorithm to analyze data produced by a 4096x4096 CCD streak camera illuminated with a macro pulse laser.
High-frequency oscillations in the 6-10 GHz region were found in the laser spectra. Spatial-temporal oscillations in this range are difficult to diagnose with conventional optoelectronic detectors on a per-shot basis. This work has led to improvements in laser design.
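The Welch power-spectrum step can be sketched in a few lines (numpy-only version with Hann windows and 50% overlap; the tone frequency and series below are a synthetic stand-in for the peak-value-sampled streak data):

```python
import numpy as np

def welch_psd(x, seg_len=256):
    """Welch power spectrum: split the series into 50%-overlapping,
    Hann-windowed segments and average their periodograms."""
    win = np.hanning(seg_len)
    step = seg_len // 2
    segs = [x[i:i + seg_len] for i in range(0, len(x) - seg_len + 1, step)]
    psds = [np.abs(np.fft.rfft(win * (s - s.mean()))) ** 2 for s in segs]
    return np.mean(psds, axis=0)

# Toy peak-value-sampled series: broadband noise plus a high-frequency
# oscillation (stand-in for the observed tone; real rates come from the
# instrument's sweep calibration).
rng = np.random.default_rng(2)
n, f0 = 4096, 0.3                       # normalized tone frequency
series = rng.standard_normal(n) + 2.0 * np.sin(2 * np.pi * f0 * np.arange(n))

seg_len = 256
psd = welch_psd(series, seg_len)
freqs = np.fft.rfftfreq(seg_len)
peak_freq = freqs[np.argmax(psd)]
```

Averaging the segment periodograms suppresses the noise floor enough for the oscillation to stand out as a clear spectral peak, which is how the high-frequency noise was localized in the laser spectra.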