Many applications generate digital image sequences using a lens system, a light-transducing pixel array, and
sample-and-hold A/D electronics. Non-uniformity Correction (NUC) is an image processing operation that is
often required in such systems. The NUC operation subtracts the background from a temporally evolving image
on a pixel-by-pixel basis. Background estimation must be robust against image features, electronic glitches, and
sudden changes in background. The NUC is often applied in conjunction with a mechanical image dithering that
enables separation of the image from the static additive background. In real-time applications with large pixel-count
focal planes, the NUC must process high data bandwidths, so the NUC algorithm must be computationally
efficient to keep up in real time. This paper examines a number of NUC algorithms. It defines a set of
performance metrics and evaluates algorithm performance in terms of trades between these metrics and processing
cost. It provides a guide for selecting an appropriate NUC algorithm based on operating conditions and available
compute resources.
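As a rough illustration of the operation described above, the sketch below implements one simple recursive background estimator with a clamped per-pixel update. The algorithm, parameter values, and scene are illustrative assumptions, not the specific NUC algorithms evaluated in the paper.

```python
import numpy as np

def nuc_update(frame, background, alpha=0.05, clip=10.0):
    # Clamp the innovation so bright image features and electronic
    # glitches cannot drag the background estimate (a generic
    # robustness device, chosen here for illustration only).
    innovation = np.clip(frame - background, -clip, clip)
    background = background + alpha * innovation
    return frame - background, background

# Toy run: recover a static fixed-pattern background of ~100 counts.
rng = np.random.default_rng(0)
bg_true = rng.normal(100.0, 5.0, size=(8, 8))
background = np.zeros((8, 8))
for _ in range(500):
    frame = bg_true + rng.normal(0.0, 0.1, size=(8, 8))
    corrected, background = nuc_update(frame, background)
```

Clamping the innovation keeps features and glitches from corrupting the estimate, at the cost of slower response to sudden background changes -- one of the trades a set of performance metrics must capture.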
The high-resolution imaging capability of Synthetic Aperture Radar (SAR) is largely unaffected by atmospheric conditions and has proven to be an indispensable asset in a variety of military and civilian applications. Applying SAR methodology to real-time imaging, however, carries the large computational complexity and storage requirements of the image-forming algorithms. Recently, the rapidly diminishing cost of computing hardware and the related ascent of cluster-based computing have made parallelization of these algorithms an appealing area of investigation. This paper describes a parallel SAR processor developed at MIT Lincoln Laboratory. Several novel technologies were employed in its implementation, including pMatlab, a parallel extension of standard Matlab that is also being developed at MIT Lincoln Laboratory. These technologies are described later in the document. We begin with a brief description of the basic SAR algorithm.
KEYWORDS: Signal processing, Digital signal processing, Target detection, Image processing, Detection and tracking algorithms, Standards development, MATLAB, Commercial off the shelf technology, Signal to noise ratio, Software development
Interceptor missiles process IR images to locate an intended target
and guide the interceptor towards it. Signal processing requirements
have increased as sensor bandwidths increase and interceptors
operate against more sophisticated targets. A typical interceptor
signal processing chain consists of two parts. Front-end video
processing operates on all pixels of the image and performs such
operations as non-uniformity correction (NUC), image stabilization,
frame integration and detection. Back-end target processing, which
tracks and classifies targets detected in the image, applies
algorithms such as Kalman tracking, spectral feature extraction, and
target discrimination.
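Two of the front-end steps named above can be sketched in miniature: frame integration followed by threshold detection. The scene, noise level, and 5-sigma threshold are illustrative assumptions, not values from any fielded system.

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, shape = 16, (32, 32)
scene = np.zeros(shape)
scene[12, 20] = 3.0                       # dim point target, 3-sigma per frame

# Simulated video: target plus unit-variance sensor noise each frame.
frames = scene + rng.normal(0.0, 1.0, size=(n_frames, *shape))

integrated = frames.mean(axis=0)          # noise std falls by sqrt(16) = 4x
threshold = 5.0 / np.sqrt(n_frames)       # 5-sigma of the integrated noise
detections = np.argwhere(integrated > threshold)
```

Integration raises the target's SNR from 3 sigma per frame (undetectable at a 5-sigma threshold) to 12 sigma, which is why it precedes detection in the chain.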
In the past, video processing was implemented using ASIC components or
FPGAs because computation requirements exceeded the throughput of
general-purpose processors. Target processing was performed using
hybrid architectures that included ASICs, DSPs and general-purpose
processors. The resulting systems tended to be function-specific, and
required custom software development. They were developed using
non-integrated toolsets and test equipment was developed along with
the processor platform. The lifespan of a system utilizing the signal
processing platform often spans decades, while the specialized nature
of processor hardware and software makes it difficult and costly to
upgrade. As a result, the signal processing systems often run on
outdated technology, algorithms are difficult to
update, and system effectiveness is impaired by the inability to
rapidly respond to new threats.
A new design approach is made possible by three developments: Moore's
Law-driven improvement in computational throughput; newly introduced
vector computing capability in general-purpose processors; and a
modern set of open-interface software standards. Today's
multiprocessor commercial-off-the-shelf (COTS) platforms have
sufficient throughput to support interceptor signal processing
requirements. This application may be programmed under existing
real-time operating systems using parallel processing software
libraries, resulting in highly portable code that can be rapidly
migrated to new platforms as processor technology evolves. This
approach also enables standardized development tools, third-party
software upgrades, and rapid replacement of processing components as
improved algorithms are developed. Thanks to shorter development
cycles and newer technology, the resulting weapon system will have
superior processing capability at deployment compared with a custom
approach. The signal processing computer may be
upgraded over the lifecycle of the weapon system and, because
modifications are simple, can migrate between weapon system variants.
This paper presents a reference design using the new approach that
utilizes an Altivec PowerPC parallel COTS platform. It uses a
VxWorks-based real-time operating system (RTOS), and application code
developed using an efficient parallel vector library (PVL). A
quantification of computing requirements and a demonstration of
interceptor algorithms operating on this real-time platform are
provided.
KEYWORDS: Very large scale integration, Computing systems, Chemical elements, Statistical analysis, Sensors, Matrices, Computer architecture, Radar, Signal processing, Electroluminescence
Adaptive array systems require the periodic solution of the well-known w = R^-1 v equation in order to compute optimum adaptive array weights. The covariance matrix R is estimated by forming a product of noise sample matrices X: R = X^H X. The operations-count cost of performing the required matrix inversion in real time can be prohibitively high for a high-bandwidth system with a large number of sensors. Specialized hardware may be required to execute the requisite computations in real time. The choice of algorithm to perform these computations must be considered in conjunction with the hardware technology used to implement the computation engine. A systolic-architecture implementation of the Givens rotation method for matrix inversion was selected to perform adaptive weight computation. The bit-level systolic approach enables a simple ASIC design and a very low power implementation. The bit-level systolic architecture must be implemented with fixed-point arithmetic to simplify the propagation of data through the computation cells. The Givens rotation approach has a highly parallel implementation and is ideally suited for a systolic realization. Additionally, the adaptive weights are computed directly from the sample matrix X in the voltage domain, thus reducing the dynamic range needed in carrying out the computations. An analysis was performed to determine the fixed-point precision required to compute the weights for an adaptive array system operating in the presence of interference. Based on the analysis results, it was determined that a floating-point computation can be well approximated with a 13-bit to 19-bit word-length fixed-point computation for typical system jammer-to-noise levels. This property produces an order-of-magnitude reduction in required hardware complexity. A synthesis-based ASIC design process was used to generate preliminary layouts.
These layouts were used to estimate the area and throughput of the VLSI QR decomposition architecture. The results show that this QR decomposition process, when implemented into a full-custom design, provides a computation time that is two orders of magnitude faster than a state-of-the-art microprocessor.
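A minimal NumPy sketch of the underlying computation -- triangularizing the voltage-domain sample matrix X with complex Givens rotations and solving for w -- is given below. It uses floating point; the contribution described above is the fixed-point, bit-level systolic realization of these same rotations.

```python
import numpy as np

def adaptive_weights_qr(X, v):
    # Zero one subdiagonal entry per Givens rotation (the operation a
    # systolic cell performs), yielding X = Q R; then, since
    # X^H X = R^H R, solve R^H R w = v by two triangular solves.
    R = X.astype(complex).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):
            a, b = R[i - 1, j], R[i, j]
            r = np.sqrt(abs(a) ** 2 + abs(b) ** 2)
            if r == 0.0:
                continue
            G = np.array([[np.conj(a), np.conj(b)], [-b, a]]) / r
            R[i - 1:i + 1, j:] = G @ R[i - 1:i + 1, j:]
    Rn = R[:n, :n]
    y = np.linalg.solve(Rn.conj().T, v)   # forward substitution
    return np.linalg.solve(Rn, y)         # back substitution

rng = np.random.default_rng(3)
X = rng.normal(size=(16, 4)) + 1j * rng.normal(size=(16, 4))
v = rng.normal(size=4) + 1j * rng.normal(size=4)
w = adaptive_weights_qr(X, v)
```

Working on X directly, rather than forming X^H X, is what roughly halves the dynamic range the computation cells must carry.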
An architecture is presented for front-end processing in a wideband array system which samples real signals. Such a system may be encountered in cellular telephony, radar, or low SNR digital communications receivers. The subbanding of data enables system data rate reduction, and creates a narrowband condition for adaptive processing within the subbands. The front-end performs passband filtering, equalization, subband decomposition and adaptive beamforming. The subbanding operation is efficiently implemented using a prototype lowpass finite impulse response (FIR) filter, decomposed into polyphase form, combined with a Fast Fourier Transform (FFT) block and a bank of modulating postmultipliers. If the system acquires real inputs, a single FFT may be used to operate on two channels, but a channel separation network is then required for recovery of individual channel data. A sequence of steps is described based on data transformation techniques that enables a maximally efficient implementation of the processing stages and eliminates the need for channel separation. Operation count is reduced, and several layers of processing are eliminated.
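The polyphase/FFT structure described above can be sketched as follows: a critically sampled analysis bank in NumPy. The prototype filter design and parameter values are illustrative, and the modulating postmultipliers and real-input channel-separation steps are omitted.

```python
import numpy as np

def polyphase_channelize(x, h, M):
    # Split the prototype lowpass h into M polyphase branches, run
    # each branch on its decimated input stream, then FFT across the
    # branches to form the M channel outputs.
    P = len(h) // M                      # taps per branch
    L = len(x) // M                      # output samples per channel
    xb = x[:L * M].reshape(L, M)         # successive blocks of M inputs
    E = h.reshape(P, M)                  # polyphase components of h
    y = np.zeros((L, M), dtype=complex)
    for n in range(L):
        for p in range(min(P, n + 1)):
            y[n] += E[p] * xb[n - p]
    return np.fft.fft(y, axis=1).T       # rows = channels

# A tone centred on channel 3 should land in output row 3.
M, P = 8, 4
N = M * P
k = np.arange(N)
h = np.sinc((k - (N - 1) / 2) / M) * np.hamming(N)   # windowed-sinc prototype
h /= h.sum()
t = np.arange(1024)
x = np.exp(2j * np.pi * 3 * t / M)
Y = polyphase_channelize(x, h, M)
```

Each output channel runs at 1/M of the input rate, so the aggregate data rate is preserved while each subband sees only a narrowband signal.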
KEYWORDS: Signal processing, Electronic filtering, Radar, Distortion, Digital filtering, Signal attenuation, Optical filters, Radar signal processing, Phased arrays, Modulation
Subband-domain algorithms provide an attractive technique for wideband radar array processing. The subband-domain approach decomposes a received wideband signal into a set of narrowband signals. While the number of processing threads in the system increases, the narrowband signals within each subband can be sampled at a correspondingly slower rate. Therefore, the data rate at the input is similar to that at the output of the subband processor. There are several advantages to the subbanding method. It can simplify typical radar algorithms such as adaptive beamforming and equalization by virtue of reducing subband signal bandwidth, thereby potentially reducing the computational complexity over an equivalent tapped-delay-line approach. It also allows for greater parallelization of the processing task, hence enabling the use of slower, lower-power hardware. In order to evaluate the validity of the subbanding approach, it is compared with conventional processing methods. This paper focuses on adaptive beamforming and pulse compression performance for a wideband radar system. The performance of an adaptive beamformer is given for a polyphase-filter-based subband approach and is measured against narrowband processing. SINR loss curves and beampatterns for a subband system are presented. Design criteria for subband polyphase filter processing that minimize signal distortion are provided, and the distortion is characterized. Finally, subband-domain pulse compression is demonstrated and compared with the conventional approach.
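For reference, the conventional baseline against which subband pulse compression is compared is matched filtering with the transmitted replica. A minimal sketch with an illustrative linear-FM pulse:

```python
import numpy as np

fs = 1e6                                  # 1 MHz complex sampling (illustrative)
n_samp = 100                              # 100 us pulse
T, B = n_samp / fs, 200e3                 # pulse length, 200 kHz sweep (BT = 20)
t = np.arange(n_samp) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # LFM transmit pulse

# Received signal: the pulse delayed by 50 samples, no noise.
rx = np.concatenate([np.zeros(50), chirp, np.zeros(50)])

mf = np.conj(chirp[::-1])                 # matched filter: conjugated, time-reversed replica
y = np.convolve(rx, mf)
peak = int(np.argmax(np.abs(y)))          # compressed peak marks the target range
```

Compression narrows the 100-sample pulse to a mainlobe of roughly fs/B = 5 samples while concentrating all of its energy at the peak.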
KEYWORDS: Signal to noise ratio, Sensors, Error analysis, Interference (communication), Acoustics, Signal attenuation, Statistical analysis, Monte Carlo methods, Environmental sensing, Signal processing
Microphone arrays can be used for high-quality sound pickup in reverberant and noisy environments. The beamforming capabilities of microphone array systems allow highly directional sound capture, providing superior signal-to-noise ratio (SNR) compared to a single microphone. There are two aspects to microphone array system performance: the ability of the system to locate and track sound sources, and its ability to selectively capture sound from those sources. Both are strongly affected by the spatial placement of the microphone sensors. A method is needed to optimize sensor placement based on the geometry of the environment and assumed sound source behavior. The objective of the optimization is to obtain the greatest average system SNR using a specified number of sensors. A method is derived to evaluate array performance, as defined by the above-mentioned metrics, for a given array configuration. An overall performance function is described based on these metrics. A framework for optimum placement of sensors under the practical constraints of feasible sensor positions and potential sound source locations is also characterized.
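A toy version of such a placement search might look as follows. The 1/distance delay-and-sum SNR model, room geometry, and exhaustive search are illustrative assumptions, not the performance function derived in the paper.

```python
import numpy as np
from itertools import combinations

def avg_snr_db(mics, sources):
    # Mean beamformed SNR over assumed source positions, with
    # spherical-spreading amplitudes and independent unit-variance
    # sensor noise (an illustrative stand-in metric).
    out = []
    for s in sources:
        d = np.linalg.norm(mics - s, axis=1)
        sig = np.sum(1.0 / d) ** 2        # coherent gain after ideal steering
        out.append(10.0 * np.log10(sig / len(mics)))
    return float(np.mean(out))

# Exhaustively pick 3 of 6 candidate wall positions that maximise the
# average SNR over a grid of assumed talker locations (hypothetical room).
candidates = np.array([[x, 0.0] for x in np.linspace(0.0, 10.0, 6)])
sources = np.array([[x, y] for x in (2.0, 5.0, 8.0) for y in (3.0, 6.0)])
best = max(combinations(range(len(candidates)), 3),
           key=lambda idx: avg_snr_db(candidates[list(idx)], sources))
```

Exhaustive search is only feasible for small candidate sets; a practical framework must handle the combinatorics and constraints more carefully.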
KEYWORDS: Digital signal processing, Error analysis, Cameras, Imaging systems, Sensors, Interference (communication), Signal to noise ratio, Signal detection, Statistical analysis, Video
The design, implementation, and performance of a low-cost, real-time DSP system for source location is discussed. The system consists of an 8-element electret microphone array connected to a Signalogic DSP daughterboard hosted by a PC. The system determines the location of a speaker in the audience in an irregularly shaped auditorium. The auditorium presents a non-ideal acoustical environment; some of the walls are acoustically treated, but there is still significant reverberation and a large amount of low-frequency noise from fans in the ceiling. The source location algorithm is implemented as a two-step process. The first step determines the time delay of arrival (TDOA) for select microphone pairs. A modified version of the Cross-Power Spectrum Phase method is used to compute TDOAs and is implemented on the DSP daughterboard. The second step uses the computed TDOAs in a least-mean-squares gradient descent search algorithm implemented on the PC to compute a location estimate.
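The first step can be sketched as follows: a generic Cross-Power Spectrum Phase (GCC-PHAT) TDOA estimator, not the modified version used in the system; signal lengths and the delay are illustrative.

```python
import numpy as np

def gcc_phat_tdoa(x1, x2, fs):
    # Whiten the cross spectrum so only the phase slope -- i.e. the
    # relative delay -- survives; this is what makes the method
    # robust to reverberation.
    n = len(x1) + len(x2)
    cross = np.fft.rfft(x2, n) * np.conj(np.fft.rfft(x1, n))
    cross /= np.abs(cross) + 1e-12        # PHAT weighting
    cc = np.fft.irfft(cross, n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (int(np.argmax(np.abs(cc))) - max_shift) / fs  # lag of x2 after x1

fs = 16000
rng = np.random.default_rng(2)
s = rng.normal(size=4096)                 # broadband source signal
delay = 25                                # samples
x1 = np.concatenate((s, np.zeros(delay)))           # near microphone
x2 = np.concatenate((np.zeros(delay), s))           # far microphone, 25 samples late
tdoa = gcc_phat_tdoa(x1, x2, fs)
```

The second step, gradient-descent location from several such pairwise TDOAs, runs on the host PC and needs no per-sample processing.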