Radar target detection is determined by comparing the energy received from the target against the energy of the background noise. The radar range equation accounts for the signal-to-noise ratio (SNR) contributions of the transmitter, return path, receiver, integration, losses, and the radar cross sections of targets. Frequency Modulated Continuous Wave (FMCW) radars are effective in distinguishing between moving targets and clutter. However, a weak target in the presence of strong clutter can easily be overwhelmed, especially when the target is slow moving. In addition, a slow-moving target can go undetected in the presence of wind-blown foliage, whose moving branches and leaves contribute Doppler shifts that complicate target detection.
In this paper, we discuss the mitigation of clutter caused by wind-blown foliage and of clutter near slow-moving targets. Traditional approaches such as the pulse canceler, which is essentially a low-pass filter designed to remove slow-moving clutter, are not effective in mitigating foliage clutter during windy conditions. We introduce a method to pre-process radar returns with a wavelet transform. The wavelet transform produces progressively smaller subband channels, which can reduce the order of operations. The subband channels are further processed with coherent integration and coherent subtraction to mitigate strong clutter introduced by stationary objects close to a target. We also investigate the mitigation of clutter due to wind-blown foliage by using the subband channels to estimate covariance matrices and extract singular value decompositions. Detections of wind-blown clutter are tracked in a temporal bookkeeping structure called a clutter map. An entry in the clutter map is deleted when the clutter is no longer present or when the entry expires.
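As a sketch of the subband preprocessing described above, the following single-level 2-D Haar decomposition splits a return map into four half-size subband channels. The Haar wavelet is an illustrative choice; the paper's actual filter bank is not specified here.

```python
import numpy as np

def haar_dwt2(x):
    """One level of a 2-D Haar wavelet decomposition.

    Returns the four subbands (LL, LH, HL, HH), each half the size of
    the input in every dimension, so later processing stages operate
    on progressively smaller channels. Assumes even dimensions.
    """
    x = np.asarray(x, dtype=float)
    # Columns: split into low-pass (sum) and high-pass (difference) halves.
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Rows: repeat the split on each half.
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, lh, hl, hh
```

Because the transform is orthonormal, the total energy of the four subbands equals the energy of the input, which is what makes per-subband covariance estimation meaningful.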
KEYWORDS: Radar, Signal processing, Data processing, Computer simulations, Embedded systems, Computing systems, Antennas, Radar signal processing, Monte Carlo methods, Optical simulations
A radar system created using an embedded computer system needs testing. The way to test an embedded computer
system is different from the debugging approaches used on desktop computers. One way to test a radar system is to feed
it artificial inputs and analyze the outputs of the radar. Often, not all of the building blocks of the radar system are
available to test. This requires the engineer to test parts of the radar system using a "black box" approach. A
common way to test software code in a desktop simulation is to use breakpoints so that it pauses after each cycle
through its calculations. The outputs are compared against the values that are expected. This requires the engineer to
use valid test scenarios. We will present a hardware-in-the-loop simulator that allows the embedded system to think it is
operating with real-world inputs and outputs. From the embedded system's point of view, it is operating in real-time.
The hardware in the loop simulation is based on our Desktop PC Simulation (PCS) testbed. In the past, PCS was used
for ground-based radars. This embedded simulation, called Embedded PCS, allows a rapid simulated evaluation of
ground-based radar performance in a laboratory environment.
KEYWORDS: Radar, Monte Carlo methods, Signal processing, Antennas, Computer simulations, Weapons, Radar signal processing, Target detection, Optical simulations, Signal detection
Thales-Raytheon Systems' Firefinder PC Simulation (PCS) tool allows a rapid simulated evaluation of Firefinder radar
performance from a personal desktop computer. Firefinder radars are designed to track hostile rocket, artillery and
mortar projectiles in order to accurately estimate weapon ground location. The Firefinder tactical code is used within
PCS. This design provides a low risk path to rapid prototyping and evaluation of candidate software changes. PCS is
used to evaluate candidate software changes to the Firefinder. Candidate design changes which perform well in PCS
testing require minimum system level checkout before being checked into the tactical software baseline. The PCS tool
contains a simulation engine which reads program control information from input data files. The PCS tool also generates
and maintains simulated targets and clutter, simulates the radar signal processing function, performs Monte-Carlo
"batch" processing, produces complex target trajectories internally or from an input text file and creates simulation data
recording files identical in format to those created by the actual radar. This paper summarizes the capabilities of the
Firefinder PCS and the addition of false-location-reduction features to the simulation.
Comparing two similar images is often needed to evaluate the effectiveness of an image processing algorithm. But,
there is no one widely used objective measure. In many papers, the mean squared error (MSE) or peak signal to noise
ratio (PSNR) are used. These measures rely entirely on pixel intensities. Though these measures are well understood
and easy to implement, they do not correlate well with perceived image quality. This paper will present an image quality
metric that analyzes image structure rather than entirely on pixels. It extracts image structure with the use of a recursive
quadtree decomposition. A similarity comparison function based on contrast, luminance, and structure will be presented.
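Since the abstract contrasts the proposed metric with MSE and PSNR, a minimal reference implementation of those two baseline measures may be useful:

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two equal-sized images."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images.

    `peak` is the maximum possible pixel value (255 for 8-bit images).
    """
    e = mse(a, b)
    return float('inf') if e == 0 else 10.0 * np.log10(peak ** 2 / e)
```

Both depend only on pixel intensities, which is precisely why they fail to track perceived quality: a small spatial shift of an edge changes many pixel values while leaving the image structure intact.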
Resizing an image is an important technique in image processing. When increasing the size of an image, some details are smeared or blurred by common interpolation techniques, such as bilinear interpolation. Edges do not appear as sharp as in the original image. In addition, when interpolating at high magnification, blocking effects start to appear. In this paper, we present an approach that performs interpolation in the direction of the edges, rather than only in the horizontal and vertical directions. Wavelet preprocessing is used to extract edge-direction information
before performing interpolation in multiple directions.
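For context, the bilinear baseline the abstract contrasts with can be sketched as follows. This is a minimal grayscale implementation; production resizers add boundary handling and anti-aliasing.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a grayscale image with bilinear interpolation.

    Each output pixel is a weighted average of its four nearest input
    pixels. Because the averaging crosses edges regardless of their
    orientation, edges blur, which is the shortcoming edge-directed
    interpolation addresses.
    """
    img = np.asarray(img, dtype=float)
    in_h, in_w = img.shape
    # Map output pixel coordinates back into input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
    # Blend horizontally on the two bracketing rows, then vertically.
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy
```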
Some image processing applications require an image to meet a quality metric before processing it. If an image is so degraded that it is difficult or impossible to reconstruct, the input image may be discarded. In this paper, we present a metric that measures the relative sharpness with respect to a reference image frame. The reference frame may be a previous input image or an output frame from the system. The sharpness metric is based on analyzing edges. The underlying assumption is that input images are similar to each other in terms of observation angle and time.
Subpixel scene registration is useful for certain image processing applications. The scene shifts do not necessarily have to be integer shifts. In this paper, we present an image registration approach that is based on the wavelet decomposition and the Fitts correlation algorithm. The original Fitts algorithm is ideal for small-scale translations. A successful image-based tracker using Fitts correlation for position measurement will require additional modifications to the original algorithm to enable it to perform the small-scale translations. We used a wavelet transform to preprocess the images
before performing scene registration.
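The Fitts correlator's restriction to small-scale translations comes from a first-order Taylor linearization of the shifted image. A minimal 1-D sketch of that gradient-based shift estimate follows; it is illustrative only, and does not reproduce the paper's windowing or wavelet preprocessing.

```python
import numpy as np

def small_shift_estimate(ref, shifted):
    """Estimate a small sub-sample translation between two 1-D signals.

    Uses the linearization shifted(x) = ref(x - d) ~ ref(x) - d*ref'(x)
    and solves for d by least squares. Valid only for shifts well below
    one sample, which is why trackers built on this idea need extra
    machinery for larger motions.
    """
    ref = np.asarray(ref, dtype=float)
    shifted = np.asarray(shifted, dtype=float)
    g = np.gradient(ref)          # central-difference derivative of the reference
    diff = shifted - ref          # residual attributed to the shift
    return -np.sum(diff * g) / np.sum(g * g)
```

A sign convention is baked in: a positive return value means the scene content moved toward larger x.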
KEYWORDS: Radar, Monte Carlo methods, Signal processing, Computer simulations, Antennas, Radar signal processing, Target detection, Signal detection, Computing systems, Weapons
Thales-Raytheon Systems' Firefinder PC Simulation (PCS) tool allows a rapid simulated evaluation of Firefinder radar
performance from a personal desktop computer. Firefinder radars are designed to track hostile rocket, artillery and
mortar projectiles in order to accurately estimate weapon ground location. The Firefinder tactical code is used within
PCS. This design provides a low risk path to rapid prototyping and evaluation of candidate software changes. PCS is
used to evaluate candidate software changes to the Firefinder. Candidate design changes which perform well in PCS
testing require minimum system level checkout before being checked into the tactical software baseline. The PCS tool
contains a simulation engine which reads program control information from input data files. The PCS tool also generates
and maintains simulated targets and clutter, simulates the radar signal processing function, performs Monte-Carlo
"batch" processing, produces complex target trajectories internally or from an input text file and creates simulation data
recording files identical in format to those created by the actual radar.
Subpixel scene registration is useful for certain image processing applications. In one image processing application,
image frame integration can use frame registration. The scene shifts do not necessarily have to be integer shifts. In
this paper, we present an image registration approach that is based on the wavelet decomposition and the Fitts correlation
algorithm. The original Fitts algorithm is ideal for small-scale translations. A successful image-based tracker using Fitts
correlation for position measurement will require additional modifications to the original algorithm to enable it to
perform small-scale translations.
Comparing two similar images is often needed to evaluate the effectiveness of an image processing algorithm. But,
there is no one widely used objective measure. In many papers, the mean squared error (MSE) or peak signal to noise
ratio (PSNR) are used. These measures rely entirely on pixel intensities. Though these measures are well understood
and easy to implement, they do not correlate well with perceived image quality. This paper will present an image
quality metric that analyzes image structure rather than entirely on pixels. It extracts image structure with the use of a
recursive quadtree decomposition. A similarity comparison function based on contrast, luminance, and structure will be
presented.
There are many uses of an image quality measure. It is often used to evaluate the effectiveness of an image processing
algorithm, yet there is no one widely used objective measure. It can be used to compare similarity between two-dimensional
data. In many papers, the mean squared error (MSE) or peak signal to noise ratio (PSNR) are used. These
measures rely on pixel intensities instead of image structure. Though these measures are well understood and easy to
implement, they do not correlate well with perceived image quality. This paper will present an image quality metric that
analyzes image structure rather than entirely on pixels. It extracts image structure with the use of quadtree
decomposition. A similarity comparison function based on contrast, luminance, and structure will be presented.
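A minimal sketch of a recursive quadtree decomposition follows, assuming a variance-based split criterion; the paper's actual structure-extraction criterion may differ.

```python
import numpy as np

def quadtree(img, y=0, x=0, size=None, var_thresh=10.0, min_size=2):
    """Recursive quadtree decomposition of a square grayscale image.

    A block is split into four quadrants while its intensity variance
    exceeds var_thresh. Leaves are recorded as (y, x, size) tuples, so
    smooth regions end up as large leaves and structured regions as
    small ones; the leaf layout is the extracted image structure.
    """
    img = np.asarray(img, dtype=float)
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_thresh:
        return [(y, x, size)]
    h = size // 2
    leaves = []
    for dy, dx in ((0, 0), (0, h), (h, 0), (h, h)):
        leaves += quadtree(img, y + dy, x + dx, h, var_thresh, min_size)
    return leaves
```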
Some image processing applications require an image to meet a quality metric before processing it. If an image
is so degraded that it is difficult or impossible to reconstruct, the input image may be
discarded. When conditions do not exhibit time-invariant image degradations, it is necessary to determine how
sharp an image is. In this paper, we present a metric that measures the relative sharpness with respect to a
reference image frame. The reference image frame may be a previous input image or even an output frame from
the image processor. The sharpness metric is based on analyzing edges. The assumption of this problem is that
input images are similar to each other in terms of observation angle and time. Although the input images are
similar, it cannot be assumed that all input images are the same, because they are collected at different time
samples.
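One simple way to realize an edge-based relative sharpness measure is to compare gradient-magnitude energy between the input and the reference frame. This is an illustrative sketch, not the paper's exact metric.

```python
import numpy as np

def relative_sharpness(img, ref):
    """Relative sharpness of img with respect to a reference frame.

    Compares mean gradient-magnitude energy: values above 1.0 mean the
    input has stronger edges than the reference, values below 1.0
    indicate relative blur. Assumes the two frames view roughly the
    same scene, as the abstract requires.
    """
    def edge_energy(a):
        gy, gx = np.gradient(np.asarray(a, dtype=float))
        return np.mean(gx ** 2 + gy ** 2)
    return edge_energy(img) / edge_energy(ref)
```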
There are many uses of an image quality measure. It is often used to evaluate the effectiveness of an image processing
algorithm, yet there is no one widely used objective measure. In many papers, the mean squared error (MSE) or peak
signal to noise ratio (PSNR) are used. Though these measures are well understood and easy to implement, they do not
correlate well with perceived image quality. This paper will present an image quality metric that analyzes image
structure rather than entirely on pixels. It extracts image structure with the use of quadtree decomposition. A similarity
comparison function based on contrast, luminance, and structure will be presented.
In this paper, we discuss a technique for texture image classification using a wavelet decomposition with selective
wavelet packet node decomposition. This new approach uses a two-channel wavelet decomposition which is extended to
two dimensions. A subband strength metric controls the selective decomposition: it determines whether to
decompose a subband further or to terminate the recursion. The decision to continue decomposing is based
on each subband's strength relative to the strengths of the other subbands at the same wavelet decomposition
level. Once the decompositions stop, the structure of the packet is stored in a data structure. Using the information
from the data structure, dominating channels are extracted. These are defined as the paths from the root of the packet to the
leaves with the highest strengths. The list of dominating channels is used to train a learning vector quantization neural
network.
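The selective decomposition described above can be sketched as follows, assuming mean absolute subband value as the strength metric and power-of-two image sizes. Both are assumptions for illustration; the paper's metric and stopping rule may differ in detail.

```python
import numpy as np

def haar_split(x):
    """One 2-D Haar split into four subbands [LL, LH, HL, HH]."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    return [(lo[0::2] + lo[1::2]) / np.sqrt(2), (lo[0::2] - lo[1::2]) / np.sqrt(2),
            (hi[0::2] + hi[1::2]) / np.sqrt(2), (hi[0::2] - hi[1::2]) / np.sqrt(2)]

def selective_packet(x, path=(), max_level=3, min_size=4):
    """Selective wavelet-packet decomposition.

    At each level, only subbands whose strength (mean absolute value
    here) is at least the average strength of their siblings are
    decomposed further; the rest become leaves. Returns a dict of
    {path: strength} for every leaf, from which the dominating
    channels (highest-strength root-to-leaf paths) can be read off.
    """
    x = np.asarray(x, dtype=float)
    if len(path) >= max_level or x.shape[0] < min_size:
        return {path: np.mean(np.abs(x))}
    bands = haar_split(x)
    strengths = [np.mean(np.abs(b)) for b in bands]
    avg = np.mean(strengths)
    leaves = {}
    for i, (b, s) in enumerate(zip(bands, strengths)):
        if s >= avg:      # dominant sibling: keep splitting
            leaves.update(selective_packet(b, path + (i,), max_level, min_size))
        else:             # weak sibling: terminate here
            leaves[path + (i,)] = s
    return leaves
```

The leaf dictionary is the "data structure" the abstract mentions; sorting it by strength yields the dominating-channel list used to train the classifier.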
Classification of image segments on textures can be helpful for target recognition. Sometimes target cueing is
performed before target recognition. Textures are sometimes used to cue an image processor of a potential region of
interest. In certain imaging sensors, such as those used in synthetic aperture radar, textures may be abundant. The
textures may be caused by the object material or speckle noise. Even speckle noise can create the illusion of texture,
which must be compensated in image pre-processing. In this paper, we will discuss how to perform texture
classification while constraining the number of wavelet packet node decompositions. The new approach performs a two-channel
wavelet decomposition. Comparing the strength of each new subband with others at the same level of the
wavelet packet determines when to stop further decomposition. This type of decomposition is performed recursively.
Once the decompositions stop, the structure of the packet is stored in a data structure. Using the information from the
data structure, dominating channels are extracted. These are defined as paths from the root of the packet to the leaf with
the highest strengths. The list of dominating channels is used to train a learning vector quantization neural network.
Video fields are commonly labeled "odd" and "even", depending on field order. Odd fields contain the scan lines that correspond to the odd lines of the video frame, and even fields contain the scan lines that correspond to the even lines. One odd field and one even field are combined to create a video frame. Deinterlacing algorithms convert video from the interlaced scan format to the progressive scan format. This paper presents a deinterlace metric that evaluates the performance of a deinterlacing algorithm, operating on a frame processed with that algorithm. The described approach is a no-reference metric, meaning that the quality measure does not depend on previously deinterlaced frames. In previous literature, the mean squared error (MSE) is frequently used as a performance metric for deinterlacing, but MSE does not necessarily correlate with the effectiveness of the deinterlacing. Rather than using MSE, our metric is based on the high-frequency components of the deinterlaced frame. We found that the metric corresponds well with subjective testing and is therefore suitable for quick qualitative characterization of deinterlaced frames.
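An illustrative stand-in for a high-frequency-based deinterlace metric (not the paper's exact formulation) measures the second vertical difference, which is large exactly where combing artifacts alternate line by line:

```python
import numpy as np

def comb_energy(frame):
    """No-reference deinterlace score based on high vertical frequencies.

    Combing artifacts alternate line by line, so they show up as energy
    in the difference between each row and the average of its two
    neighbors; a well-deinterlaced frame scores lower. Smooth vertical
    gradients score zero, so ordinary scene content is not penalized.
    """
    f = np.asarray(frame, dtype=float)
    # Second vertical difference: large where a row disagrees with both
    # of its neighbors, the signature of interlacing combs.
    d = f[1:-1, :] - 0.5 * (f[:-2, :] + f[2:, :])
    return np.mean(d ** 2)
```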