This PDF contains front matter associated with SPIE Proceedings Volume 8020, including title page, copyright information, table of contents, and the conference committee listing.
A new approach to the design and fabrication of a miniaturized SWIR hyperspectral imager is described. Previously, good results were obtained with a VNIR hyperspectral imager by propagating light within bonded solid blocks of fused silica. These designs use the Offner form, providing excellent, low-distortion imaging. The same idea is applied here to the SWIR band, resulting in a microHSI™ SWIR hyperspectral sensor capable of operating over the 850-1700 nm wavelength range. The microHSI spectrometer weighs 910 g from slit input to camera output and can accommodate custom foreoptics to adapt to a wide range of fields of view (FOV). The current application calls for a 15 degree FOV and uses an InGaAs image sensor with a spatial format of 640 pixels at a 25 μm pitch. This results in a slit length of 16 mm and a foreoptics focal length of 61 mm, operating at F/2.8. The resulting IFOV is 417 μrad, with a spectral dispersion of 4.17 nm/pixel. A prototype SWIR microHSI was fabricated with the blazed diffraction grating embedded within the optical blocks, yielding a diffraction efficiency of 72% at a wavelength of 1020 nm. The spectrometer design can accommodate slit lengths of up to 25.6 mm, which opens up a wide variety of applications. The microHSI concept can be extended to other wavelength regions, and a miniaturized LWIR microHSI sensor is in the conceptual design stage.
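The quoted geometry is internally consistent and easy to verify. A back-of-envelope check in Python (the ~60 mm effective focal length is inferred from the 417 μrad IFOV; all other numbers come from the abstract):

```python
# Back-of-envelope check of the microHSI geometry quoted above.
pixel_pitch_m = 25e-6            # 25 um InGaAs pixel pitch
n_spatial = 640                  # spatial pixels across the slit
focal_length_m = 0.060           # effective focal length implied by the IFOV

slit_length_mm = n_spatial * pixel_pitch_m * 1e3     # -> 16.0 mm
ifov_urad = pixel_pitch_m / focal_length_m * 1e6     # -> ~417 urad

# 850-1700 nm at 4.17 nm/pixel spans ~204 spectral pixels.
n_spectral = (1700 - 850) / 4.17
print(slit_length_mm, ifov_urad, round(n_spectral))
```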
A unique hyperspectral imaging plane "on-a-chip," developed for deployment as a High Performance Payload (HPP) on a micro or small unmanned aerial vehicle, is described. The HPP employs nanophotonic technologies to create a focal plane array with a very high fill factor, fabricated using standard integrated-circuit techniques. The spectral response of each pixel can be independently tuned and controlled over the entire spectral range of the camera. While the current HPP is designed to operate in the visible, the underlying physical principles of the device are applicable, and potentially implementable, from the UV through the long-wave infrared.
It is well known that non-uniform illumination of a spectrometer changes the measured spectra. Laboratory calibration of hyperspectral imaging systems carefully minimizes this effect by providing repeatable, uniform illumination. In hyperspectral measurements of real-world scenes, however, the illumination is non-uniform. We define the resulting variation as real-world noise and compare it to other noise sources. Both in-flight performance and calibration transfer between instruments degrade significantly because of real-world noise.
An instrument for monocular passive ranging based on atmospheric oxygen absorption near 762 nm has been designed, built, and deployed to track emissive targets. An intensified CCD array is coupled to a variable-bandpass liquid crystal filter and optics with a 3.5 - 8.8 degree field of view. The system was first deployed in a ground test viewing a static jet engine in afterburner at ranges of 0.35 - 4.8 km, establishing a range error of 15%. The instrument was also flight tested on a C-12, imaging the exhaust plume of another aircraft's afterburner at ranges up to 11 km.
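The underlying principle lends itself to a one-line inversion. A minimal sketch assuming simple Beer-Lambert attenuation of an in-band/out-of-band intensity ratio (the coefficient and ratio values are illustrative assumptions, not the instrument's calibration):

```python
# Monocular passive ranging from O2 A-band absorption: a minimal sketch.
# Assumes Beer-Lambert attenuation, ratio_measured = ratio_source * exp(-k*R),
# where the target's intrinsic in-band/out-of-band ratio is known a priori.
import math

def estimate_range_km(ratio_measured, ratio_source, k_per_km):
    """Invert ratio_measured = ratio_source * exp(-k * R) for range R."""
    return math.log(ratio_source / ratio_measured) / k_per_km

# Illustrative numbers only: k is a path-averaged 762 nm O2 absorption
# coefficient; the source ratio would come from calibration.
print(estimate_range_km(ratio_measured=0.60, ratio_source=0.95, k_per_km=0.15))
```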
A novel charge-coupled device (CCD) array enables the combination of imaging and semi-active laser (SAL) target designation, enhancing seeker functionality at reduced inventory cost and with lower collateral damage risk. Integrating SAL detection with imaging requires high spatial and temporal resolution of the laser pulse detector, and its correlation with the field of view of the imaging sensor, so that laser spot location and code are presented with the image in real time. This evaluation of a novel SAL CCD detector concept shows that it is possible to achieve a temporal resolution in the region of 5 μs, an order of magnitude better than the basic requirement, and to achieve sensitivity to the laser pulse that allows operation in direct sunlight. The analysis indicates that the SAL CCD meets requirements using standard CCD processes. This paper reviews the detector architecture options and shows how the temporal, spatial, and sensitivity requirements can be met.
A typical airborne ground surveillance radar is a multimode system with a ground moving target indicator (GMTI) mode for surveillance and tracking of moving ground targets and synthetic aperture radar (SAR) modes for imaging of terrain features and stationary ground targets. One of the key features of the GMTI mode is the ability to perform wide area surveillance (WAS) of a substantial ground area and, in addition, to provide persistent surveillance of a pre-specified ground area over a long period of time. Accomplishing this task requires careful optimization of radar parameters and careful planning of the platform orbits so as to minimize the time spent turning the aircraft and repositioning the radar. This paper defines the notion of surveillance orbit efficiency, which, for constant-speed flight, is simply the percentage of time spent on the straight legs of a racetrack orbit. It then examines the orbit efficiency for each of three cases depending on the assumed radar azimuth field of view (FOV). This paper is a modified version of work described in a MITRE Technical Report for the US Army.
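For the constant-speed case the definition reduces to simple geometry. A minimal sketch for a racetrack of two straight legs joined by semicircular, bank-limited turns (the leg length, speed, and bank angle are illustrative values, not figures from the report):

```python
# Surveillance orbit efficiency for a constant-speed racetrack orbit:
# the fraction of orbit time spent on the straight legs.
import math

def racetrack_efficiency(leg_length_m, speed_mps, bank_angle_deg, g=9.81):
    # Turn radius for a coordinated level turn at the given bank angle.
    turn_radius = speed_mps**2 / (g * math.tan(math.radians(bank_angle_deg)))
    straight_time = 2 * leg_length_m / speed_mps
    turn_time = 2 * math.pi * turn_radius / speed_mps  # two semicircular turns
    return straight_time / (straight_time + turn_time)

# Illustrative: 40 km legs at 180 m/s with a 20 degree bank limit -> ~58%.
print(f"{racetrack_efficiency(40e3, 180.0, 20.0):.1%}")
```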
The Navy recently began investing in the design of mission-specific payloads for the Small Tactical Unmanned Aircraft System (STUAS). STUAS is a Tier II size UAS with a roughly 35-pound mission payload and a gimbaled general-purpose electro-optical/infrared (EO/IR) system. The EO/IR system is likely composed of a video camera in the visible, a mid-wave infrared (MWIR) and/or a long-wave infrared (LWIR) imager for night operations, and an infrared marker and laser range finder.

Advanced Coherent Technologies, LLC (ACT), in a series of SBIR efforts, has developed a modular, multi-channel imaging system for deployment on airborne and UAV platforms. ACT's system, called EYE5, demonstrates how an EO/IR system combined with an on-board, real-time processor can be tailored for specific applications to produce real-time, actionable data. The EYE5 sensor head and modular real-time processor are described in this work, and examples of the system's abilities in various Navy-relevant applications are reviewed.
To meet the volume requirements and provide high image quality for a Long Range Oblique Photography (LOROP) system, we adopted a Cassegrain-type telescope with lens compensators for operation in both the 0.6 ~ 0.9 μm (EO channel) and 3.7 ~ 4.8 μm (IR channel) regions. To provide dual-band functionality, a tilted plane-parallel plate acting as a beam splitter is located in the space between the primary and secondary mirrors. The system is nearly telecentric in detector space (EO) and telecentric in intermediate image space (IR); the telecentricity keeps the image height constant while adjusting focus. The optical system includes a Back Scan Mechanism (BSM) to compensate for image blur during the integration time.
The availability of imagery simultaneously collected from sensors of disparate modalities enhances an image analyst's
situational awareness and expands the overall detection capability to a larger array of target classes. Dynamic
cooperation between sensors is increasingly important for the collection of coincident data from multiple sensors either
on the same or on different platforms suitable for UAV deployment. Of particular interest is autonomous collaboration
between wide area survey detection, high-resolution inspection, and RF sensors that span large segments of the
electromagnetic spectrum. The Naval Research Laboratory (NRL) in conjunction with the Space Dynamics Laboratory
(SDL) is building sensors with such networked communications capability and is conducting field tests to demonstrate
the feasibility of collaborative sensor data collection and exploitation. Example survey/detection sensors include: NuSAR (NRL Unmanned SAR), a UAV-compatible synthetic aperture radar system; microHSI, an NRL-developed lightweight hyperspectral imager; RASAR (Real-time Autonomous SAR), a lightweight podded synthetic aperture radar; and N-WAPSS-16 (Nighttime Wide-Area Persistent Surveillance Sensor-16Mpix), a MWIR large-array gimbaled system. From these sensors, detected target cues are automatically sent to the NRL/SDL-developed EyePod, a high-resolution, narrow-FOV EO/IR sensor, for target inspection. In addition to this cooperative data collection, EyePod's real-time, autonomous target tracking capabilities will be demonstrated. Preliminary results and target analysis will be presented.
Optical links based on coherent homodyne detection and BPSK modulation, with bidirectional data transmission of 5.6 Gbps over distances of about 5,000 km at a BER of 10⁻⁸, have been thoroughly verified in space. The verification results show that this technology is suitable not only for space applications but also for applications in the troposphere.

After a brief description of the Laser Communication Terminal (LCT) for space applications, the paper discusses the future utilization of satellite-based optical data links for Beyond Line of Sight (BLOS) operations of High Altitude Long Endurance (HALE) Unmanned Aerial Vehicles (UAVs). It is argued that the use of optical frequencies is the logical consequence of an ever-increasing demand for bandwidth, and that, in terms of Network Centric Warfare, future Unmanned Aircraft Systems (UAS) should incorporate this technology, which allows almost unlimited bandwidth. The advantages of optical communications, especially for Intelligence, Surveillance and Reconnaissance (ISR), are underlined. Moreover, the preliminary design concept of an airborne laser communication terminal is described. Since optical bidirectional links have been tested between an LCT in space and a TESAT Optical Ground Station (OGS), a preliminary analysis of tracking and BER performance and of the impact of atmospheric disturbances on coherent links is presented.
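For context on the quoted BER, ideal coherent BPSK follows the textbook relation BER = ½·erfc(√(Eb/N0)). The sketch below evaluates it; this is a generic illustration, not the LCT's actual link budget:

```python
# Textbook BER for coherent BPSK detection: BER = 0.5 * erfc(sqrt(Eb/N0)).
# Generic illustration of why coherent optical links reach very low BERs.
from math import erfc, sqrt

def bpsk_ber(ebn0_db):
    ebn0 = 10 ** (ebn0_db / 10)
    return 0.5 * erfc(sqrt(ebn0))

for ebn0_db in (6, 9, 12):
    print(f"Eb/N0 = {ebn0_db:2d} dB -> BER = {bpsk_ber(ebn0_db):.2e}")
# At ~12 dB the BER is already near 1e-8, the figure quoted above.
```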
Muzzle blast trajectories from firings of a 152 mm caliber gun howitzer were obtained with high-speed optical imagers and used to assess the fidelity with which low-dimensionality models can be used for data reduction. Characteristic flow regions were defined for the blast waves: the near-field region was estimated to extend to 0.98 - 1.25 meters from the muzzle, and the far-field region was estimated to begin at 2.61 - 3.31 meters. Blast wave geometries and radial trajectories were collected from the near field through the far field with visible imagers operating at 1,600 Hz. Beyond the near field, the blast waves exhibited a near-spherical geometry in which the major axis of the blast lay along the axis of the gun barrel and measured within 95% of the minor axis. Several blast wave propagation models were applied to the mid- and far-field data to determine their ability to reduce the blast wave trajectories to fewer parameters while retaining the ability to distinguish among three munitions configurations. A total of 147 firings were observed and used to assess within-configuration variability relative to the separation between configurations. Results show that all models perform well, and the drag and point blast model parameters additionally provide insight into the phenomenology of the blast.
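As a reference point for the model-fitting idea, the classical point blast (Taylor-Sedov) model reduces a radial trajectory to a single energy parameter via R(t) = (E t²/ρ)^(1/5). A minimal least-squares fit on synthetic data (the numbers are illustrative, not the howitzer measurements):

```python
# Fit the Taylor-Sedov point blast model R(t) = (E * t**2 / rho)**0.2
# to a radial trajectory, reducing it to one parameter E.
import numpy as np

rho = 1.2                               # air density, kg/m^3
t = np.linspace(0.5e-3, 5e-3, 10)       # times, s
R = (2.0e6 * t**2 / rho) ** 0.2         # synthetic radii for E = 2 MJ

# Linearize: log R = 0.2*log(E/rho) + 0.4*log t, then solve for E.
slope_term = np.log(R) - 0.4 * np.log(t)
E_fit = rho * np.exp(np.mean(slope_term) / 0.2)
print(f"fitted blast energy: {E_fit:.3g} J")   # recovers ~2e6 J
```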
The certification and testing of new airborne structures is a costly undertaking. This paper presents measures that can be taken to limit the cost and certification effort required to improve the capabilities of current airborne assets, by applying a building-block approach to the design and certification of airborne pod structures.

A simple way of improving aircraft capabilities is to add external pod structures, as has been done for many applications over many years. This paper, however, describes a truly modular approach in which a typical airborne pod structure may be reconfigured for many different roles with only limited re-certification requirements.

Using existing or general aerodynamic shapes, the basic outer shape of the external store is defined and then combined with a modular substructure that can accommodate a large variety of electronic and/or optical sensors. This also allows the airborne pod structure to perform several intelligence-collecting operations during the same sortie, thereby limiting the time spent near the danger area.

The re-use of existing substructure modules reduces the cost and lead time of the design phase, allowing rapid entry into service. The modular design, relying on proven interface systems between the building blocks, significantly reduces the risk involved in new programs.

The certification process is also discussed, showing how the modularity of the pod structure and similarity to existing designs can be exploited to simplify the certification task.

Finally, the paper covers how modularity is implemented in new composite pod designs with stealth capabilities.
Recent research efforts at Georgia Tech have focused on the development of a multi-resolution ocean clutter model. This research was driven by the need to support both surveillance and search requirements set by several government customers. These requirements called for target detection and tracking in both resolved and unresolved scenarios, for targets located either above or on the ocean surface. Because the sensor resolution changes across these acquisition scenarios, accurate ocean surface models were needed at different geometric resolutions. Georgia Tech met this need by developing a multi-resolution approach to modeling both the ocean surface and, subsequently, the ocean signature across the optical spectrum. This approach combined empirical overhead data with high-resolution ocean surface models to construct a series of ocean clutter models of varying resolution. This paper describes the approach to utilizing and merging the various clutter models, as well as the results of using these models in target detection and tracking analysis. Remaining issues associated with this clutter model development are identified and potential solutions discussed.
In many civilian and military applications, early warning IR detection systems have been developed over the years to detect long-range targets in scenarios characterized by highly structured background clutter. In this framework, a well-established detection scheme is realized with two cascaded stages: (i) background clutter removal, and (ii) detection over the residual clutter. The performance of the whole detection system is largely determined by the choice and setting of the background estimation algorithm (BEA). In this paper, a novel procedure to automatically select the best-performing BEA is proposed, relying on a selection criterion (BEA-SC) in which the performance of the detection system is investigated via simulation for the available BEAs and for different settings of their parameters.
The robustness of the BEA-SC is investigated by examining the performance of the detection system when the characteristics of the targets in the scene differ appreciably from the synthetic ones used in the BEA-SC, i.e., when the BEA is not perfectly tuned to the targets of interest in the scene. We consider target detection schemes that include BEAs based on well-established two-dimensional (2-D) filters. The BEA-SC is applied to sequences of IR images acquired in scenarios typical of surveillance applications, and performance comparison is carried out in terms of experimental receiver operating characteristics (EX-ROC). The results show that the recently introduced BEA-SC is robust in the detection of targets whose characteristics are those expected in typical early warning systems.
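The two-stage scheme in (i)-(ii) is easy to illustrate. A minimal sketch with a 2-D median filter standing in for the BEA and a k-sigma rule for the detection stage (filter, window size, and threshold are illustrative choices, not the tuned settings the paper selects):

```python
# Two-stage point-target detection: (i) background estimation/removal,
# (ii) thresholding of the residual clutter.
import numpy as np
from scipy.ndimage import median_filter

def detect(frame, window=11, k=5.0):
    background = median_filter(frame, size=window)  # stage (i): BEA
    residual = frame - background                   # residual clutter
    return residual > k * residual.std()            # stage (ii): k-sigma test

frame = np.random.randn(128, 128).astype(np.float32)
frame[64, 64] += 20.0                               # synthetic point target
print(np.argwhere(detect(frame)))                   # -> [[64 64]]
```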
Critical factors for MTI performance are determined using a Ground Truth Database containing diverse imagery from a range of UAVs operating in-theatre. The Minimum Target Size is measured and identified as the most critical characteristic of the MTI performance envelope. Other factors affecting MTI performance are discussed. Receiver Operating Characteristic curves are presented to compare MTI performance with respect to false alerts. This methodology provides an objective measurement of the performance envelope of MTI systems.
For detecting vehicles in large-scale aerial images, we first use a non-parametric method recently proposed by Rosin to define the regions of interest, where vehicles appear with dense edges. The saliency map is a sum of distance transforms (DT) of a set of edge maps, which are obtained by a threshold decomposition of the gradient image with a set of thresholds. A binary mask highlighting the regions of interest is then obtained by moment-preserving thresholding of the normalized saliency map. Second, the regions of interest are over-segmented by the SLIC superpixels recently proposed by Achanta et al., which cluster pixels into color-constant sub-regions. In aerial images of 11.2 cm/pixel resolution, vehicles in general do not exceed 20 x 40 pixels, so we introduce a size constraint to guarantee that no superpixel exceeds the size of a vehicle. The superpixels are then classified as vehicle or non-vehicle by a Support Vector Machine (SVM) using Scale Invariant Feature Transform (SIFT) features and Local Binary Pattern (LBP) texture features. Both features are extracted at two scales, with patches of two sizes: the small patches capture local structures, while the larger patches include neighborhood information. Preliminary results show a significant gain in detection, with vehicles detected as dense concentrations of vehicle-class superpixels; even dark-colored cars are successfully detected. A validation process will follow to reduce the presence of isolated false alarms in the background.
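The saliency construction is compact to state in code. A sketch in the spirit of Rosin's method (the gradient operator, threshold count, and normalization are illustrative choices):

```python
# Edge-density saliency: threshold-decompose the gradient image, sum distance
# transforms of the edge maps, and invert so dense-edge regions score high.
import numpy as np
from scipy.ndimage import distance_transform_edt, sobel

def edge_density_saliency(gray, n_thresholds=6):
    g = gray.astype(float)
    grad = np.hypot(sobel(g, axis=0), sobel(g, axis=1))
    acc = np.zeros_like(grad)
    for t in np.linspace(grad.min(), grad.max(), n_thresholds + 2)[1:-1]:
        edges = grad > t                            # one edge map per threshold
        if edges.any():
            acc += distance_transform_edt(~edges)   # distance to nearest edge
    return 1.0 - acc / max(acc.max(), 1e-9)         # high where edges are dense
```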
Several new techniques are introduced for component-based vehicle detection in aerial imagery. A shape-independent tricolour attenuation model, based on the difference in spectral power density between regions lit by direct sunlight and those lit by diffuse skylight, is used to identify cast shadows. Simple linear iterative clustering (SLIC) performs local clustering into superpixels, which are merged by a statistical region merging (SRM) method based on the independent bounded difference inequality theorem. Car body parts are then found with a Support Vector Machine using radiometric and geometric features of the segmented regions. All the algorithms used in this approach require minimal human intervention, providing robust detection.
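For the segmentation stage, off-the-shelf SLIC is readily available. The sketch below shows the over-segmentation step, with scikit-image standing in for the authors' implementation (the parameters and size check are illustrative):

```python
# Superpixel over-segmentation with SLIC; region statistics then feed the
# merging (SRM) and classification (SVM) stages.
import numpy as np
from skimage.segmentation import slic
from skimage.measure import regionprops

image = np.random.rand(200, 200, 3)        # placeholder aerial RGB patch
labels = slic(image, n_segments=400, compactness=10, start_label=1)

for region in regionprops(labels):
    if region.area > 800:                  # example size constraint
        pass                               # flag or split oversized superpixels
```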
Automatic object detection and tracking has been widely applied in video surveillance systems for homeland security and in data fusion for remote sensing and airborne imagery; typical applications include human motion analysis and vehicle detection. Here we implement object detection and tracking using shape graphs of the objects of interest, integrating the objects' local contextual information (corner/point features, etc.). On the top layer, shapes/sketches provide a discriminative measure describing the global status of the objects of interest; this information is very useful for improving tracking performance under occlusion. The shape can be modeled as a graph or hypergraph through its local geometric features. On the bottom layer, local geometric features capture local properties of the objects and support correspondence estimation for the high-level shapes; they provide a way to overcome inaccurate object segmentation and extraction. Experiments were conducted on human face tracking and on vehicle detection and tracking.
In video tracking systems that use image subtraction for motion detection, the global motion is usually estimated to compensate for camera motion. The accuracy and robustness of this global motion compensation critically affect the performance of the target tracking process. The global motion between video frames can be estimated by matching features from the image background; features on moving targets contain both camera and target motion and should not be used. Sparse optical flow is a classical image matching method, but the features it selects may come from moving targets and some of the matches may be inaccurate, leading to poor tracking performance. Least Median of Squares (LMedS) is a popular robust linear regression method and has been applied to real-time video tracking systems implemented in hardware, processing up to 7.5 frames/second. In this paper, we use a robust regression method to select features only from the image background for robust global motion estimation, and we develop a real-time (10 frames/second), software-based video tracking system that runs on an ordinary Windows-based general-purpose computer. The software optimization and parameter tuning for real-time execution are discussed in detail. The tracking performance is evaluated on real-world Unmanned Air Vehicle (UAV) video, and we demonstrate improved global motion estimation in terms of accuracy and robustness.
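The background-only motion estimate can be sketched with standard OpenCV primitives, with LMedS rejecting features that sit on movers (feature and flow parameters are illustrative; the paper's own implementation details may differ):

```python
# Robust global (camera) motion estimation with LMedS: track features between
# frames and fit a similarity transform; the robust estimator rejects matches
# that fall on moving targets.
import cv2

def global_motion(prev_gray, curr_gray):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                  qualityLevel=0.01, minDistance=8)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    # LMedS treats features on moving targets as outliers.
    M, inliers = cv2.estimateAffinePartial2D(pts[good], nxt[good],
                                             method=cv2.LMEDS)
    return M   # 2x3 matrix: rotation/scale/translation of the background
```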
In real-world outdoor video, moving targets such as vehicles and people may be partially or fully occluded by background objects such as buildings and trees, which makes tracking them continuously very challenging. In this work, we present a system that addresses the problem of tracking targets through occlusions in a motion-based target detection and tracking framework. For an existing track that is fully occluded, a Kalman filter is applied to predict the target's current position from its previous locations. However, the prediction may drift from the target's true trajectory due to accumulated prediction errors, especially when the occlusion is of long duration. To address this problem, tracks that have disappeared are checked with an extra data association procedure that evaluates the potential association between the track and new detections, which may correspond to a previously tracked target just coming out of occlusion. Another issue that arises with motion-based tracking is that the algorithm may treat the visible part of a partially occluded target as the entire target region. This is problematic because an inaccurate target motion trajectory model will be built, causing the Kalman filter to generate inaccurate position predictions, which can yield a divergence between the track and the true target trajectory. Accordingly, we present a method that provides reasonable estimates of the centers of partially occluded targets. Experimental results on real-world unmanned air vehicle (UAV) video sequences demonstrate that the proposed system significantly improves track continuity in various occlusion scenarios.
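The coasting step is the standard constant-velocity Kalman prediction. A minimal sketch (the state layout, frame rate, and noise levels are illustrative):

```python
# Constant-velocity Kalman prediction used to coast a track through full
# occlusion (state = [x, y, vx, vy]).
import numpy as np

dt = 1.0 / 30.0                              # frame interval
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)    # state transition
Q = np.eye(4) * 1e-2                         # process noise

def predict(x, P):
    """One prediction step while no detection is associated to the track."""
    return F @ x, F @ P @ F.T + Q

x = np.array([100.0, 50.0, 2.0, -1.0])       # position and velocity (pixels)
P = np.eye(4)
for _ in range(15):                          # coast through 0.5 s of occlusion
    x, P = predict(x, P)
print(x[:2])                                 # coasted position estimate
```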
Presented is a method to blindly estimate the location of a transmitter from the signal observed by a single moving receiver. The process is based on the fact that the observed Doppler characteristics are essentially uniquely determined by the transmission frequency, the location of the transmitter, and the time-varying flight path of the receiver. We accurately estimate the instantaneous frequency of the received signal and blindly calculate the transmitted frequency from the received signal and the instantaneous position and velocity of the receiver. The transmitter location is then estimated by minimizing a cost function representing the difference between the Doppler characteristics calculated from the relative geometry of the transmitter and receiver and the Doppler characteristics estimated from the received signal. The method has the advantages that only one receiving antenna is required and that the emitter may be located with no a priori knowledge of its location or frequency. In addition, the process is essentially independent of the flight path of the receiver.
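The cost-function construction admits a compact numerical illustration. A sketch with an illustrative straight-leg geometry (the carrier frequency is taken as known here for brevity, whereas the paper estimates it blindly; a straight leg also has a left/right ambiguity that the search grid below sidesteps):

```python
# Doppler geolocation sketch: predicted received frequency is
# f(t) = f0 * (1 + v(t).r_hat(t) / c), with r_hat pointing from receiver to
# emitter; minimize the misfit over a grid of candidate emitter positions.
import numpy as np

c, f0 = 3e8, 150e6
emitter = np.array([4000.0, 2000.0])               # true position (m)
t = np.linspace(0.0, 60.0, 121)
rx_pos = np.stack([200.0 * t, np.full_like(t, 1000.0)], axis=1)
rx_vel = np.tile([200.0, 0.0], (len(t), 1))        # straight leg, 200 m/s

def f_pred(p):
    r = p - rx_pos
    r_hat = r / np.linalg.norm(r, axis=1, keepdims=True)
    return f0 * (1.0 + np.sum(rx_vel * r_hat, axis=1) / c)

measured = f_pred(emitter) + 0.05 * np.random.randn(len(t))

# Search one side of the flight path (a straight leg alone cannot break the
# left/right Doppler symmetry; a turning flight path would).
xs, ys = np.meshgrid(np.linspace(0, 8000, 81), np.linspace(1200, 8000, 69))
cands = np.stack([xs.ravel(), ys.ravel()], axis=1)
cost = [np.sum((f_pred(p) - measured) ** 2) for p in cands]
print(cands[int(np.argmin(cost))])                 # ~ [4000, 2000]
```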
Object tracking is a direct or indirect key issue in many military applications, such as visual surveillance, automatic closed-loop control of UAVs (unmanned aerial vehicles) and PTZ cameras, or crowd analysis for detecting an emerging riot. Robustness is the most important property of the underlying tracker, but it becomes significantly harder to achieve as the required computation time decreases. In the UAV application introduced in this paper, the tracker has to be extraordinarily fast.

To jointly optimize computation time and robustness as far as possible, a highly efficient tracking procedure is presented for the application fields mentioned above. It relies on well-known color histograms but uses them in a novel manner: the procedure is based on the calculation of a color weighting vector representing the significance of each object color, a kind of color fingerprint of the object. Several examples from the military applications mentioned above demonstrate the practical relevance and performance of the presented tracking approach.
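One way to realize such a color "fingerprint" is a ratio-histogram weighting; the sketch below is an illustrative construction, not necessarily the authors' exact weighting:

```python
# Color-weighting sketch: weight each histogram bin by how specific it is to
# the object relative to the surrounding background.
import numpy as np

def color_weights(obj_pixels, bg_pixels, bins=16):
    """obj_pixels/bg_pixels: (N, 3) uint8 RGB samples."""
    def hist(p):
        idx = (p // (256 // bins)).astype(int)
        h = np.zeros((bins,) * 3)
        np.add.at(h, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)
        return h / max(len(p), 1)
    h_obj, h_bg = hist(obj_pixels), hist(bg_pixels)
    return h_obj / (h_obj + h_bg + 1e-9)   # ~1 for object-specific colors

# Pixels falling in high-weight bins can then vote for the target position.
```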
The effect of low sample frame rate on interpretability is often confused with its impact on encoding processes. In this study, the confusion was avoided by ensuring that none of the low-frame-rate clips had coding artifacts. Under these conditions, the lowered frame rate was not associated with a statistically significant change in interpretability. Airborne, high-definition 720p, 60 FPS video clips were used as source material to produce test clips with varying sample frame rates, playback rates, and degrees of target motion. Frame rates ranged from 7.5 FPS to 60 FPS, playback rates ranged up to 8X normal speed, and target motion ranged from near zero up to 300 MPH.
Modern remote video cameras can produce quality video streams at extremely high resolutions. Unfortunately, the benefit of such technology cannot be realized when the channel between the sensor and the operator restricts the bit rate of the incoming data. To fit more information into the available bandwidth, video technologies typically employ compression schemes (e.g., the H.264/MPEG-4 standard) that exploit spatial and temporal redundancies. We present an alternative method utilizing region-of-interest (ROI) based compression. Each region in the incoming scene is assigned a score measuring its importance to the operator. Scores may be determined from the manual selection of one or more objects, which are then automatically tracked by the system; alternatively, listeners may be pre-assigned to various areas and trigger high scores upon the occurrence of customizable events. A multi-resolution wavelet expansion is then used to transmit important regions at higher resolutions and frame rates than less interesting peripheral background, subject to bandwidth constraints. We show that our methodology makes it possible to obtain high compression ratios while ensuring no loss in overall situational awareness. If combined with modules from traditional video codecs, compression ratios of 100:1 to 1000:1, depending on ROI size, can easily be achieved.
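The ROI weighting maps naturally onto the wavelet coefficient tree. A minimal sketch using PyWavelets, with hard zeroing of out-of-ROI detail coefficients standing in for the graduated rate allocation described above (the wavelet, level count, and mask rule are illustrative):

```python
# ROI-weighted wavelet compression sketch: keep fine detail coefficients only
# inside the region of interest, then reconstruct.
import numpy as np
import pywt

def roi_compress(image, roi_mask, wavelet="db2", levels=3):
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                       # keep the coarse approximation
    for i, (cH, cV, cD) in enumerate(coeffs[1:]):
        scale = 2 ** (levels - i)           # subsampling factor of this level
        m = roi_mask[::scale, ::scale]
        m = m[: cH.shape[0], : cH.shape[1]].astype(float)
        out.append((cH * m, cV * m, cD * m))
    return pywt.waverec2(out, wavelet)

img = np.random.rand(256, 256)
roi = np.zeros((256, 256), bool)
roi[96:160, 96:160] = True                  # target region stays sharp
rec = roi_compress(img, roi)
```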
This paper presents a new algorithm, called group coding, for efficient compression of sequences of medical images, based on the Inverse Pyramid Decomposition; the same approach is adapted for the efficient archiving of multispectral images as well. The algorithm is based on joint processing of all images in a group that represent the same object and are obtained using sensors at different wavelengths (multispectral images) or at successive time intervals. The foundation of the new approach is the Inverse Pyramid Decomposition, which represents an image in levels, with the quality of the approximations increasing in consecutive decomposition levels. The coarsest approximation of one image in the group, selected as the reference, is used to calculate the next (better) approximations of the remaining images in the group. The result is efficient compression of the processed groups of images, which is of high importance for their efficient archiving and storage in image databases. Numerous experiments were performed with satellite and medical images, which demonstrated the method's efficiency.
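The shared-reference idea can be sketched compactly. Below, a plain Gaussian pyramid (OpenCV pyrDown/pyrUp) stands in for the Inverse Pyramid Decomposition; only the group-prediction structure is shown:

```python
# Group coding sketch: encode one reference image coarsely, predict every
# image in the group from the expanded reference, and store only residuals.
# Assumes image dimensions divisible by 2**levels.
import numpy as np
import cv2

def encode_group(images, levels=3):
    coarse = images[0].astype(np.float32)
    for _ in range(levels):
        coarse = cv2.pyrDown(coarse)        # coarsest approximation of ref
    pred = coarse
    for _ in range(levels):
        pred = cv2.pyrUp(pred)              # shared prediction for the group
    residuals = [im.astype(np.float32) - pred for im in images]
    return coarse, residuals                # entropy-code these in practice

def decode_group(coarse, residuals, levels=3):
    pred = coarse
    for _ in range(levels):
        pred = cv2.pyrUp(pred)
    return [pred + r for r in residuals]
```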
Most non-conventional approaches to image restoration of scenes observed over long atmospheric slant paths require multiple frames of short-exposure images taken with low-noise focal plane arrays. The individual pixels in these arrays often exhibit spatial non-uniformity in their response. In addition, base-motion jitter in the observing platform introduces a frame-to-frame linear shift that must be compensated for the multi-frame restoration to succeed. In this paper we describe a maximum a posteriori parameter estimation approach to the simultaneous estimation of the frame-to-frame shifts and the array non-uniformity. This approach can be incorporated into an iterative algorithm and implemented in real time as the image data are collected. We present a brief derivation of the algorithm as well as its application to actual image data collected from an airborne platform.
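A simplified alternating skeleton of the joint estimate is sketched below: phase correlation stands in for the shift estimator and a ratio average for the gain map, whereas the paper couples both through a MAP posterior. This is an illustrative stand-in, not the paper's algorithm:

```python
# Alternate between (a) shift estimation on gain-corrected frames and
# (b) gain-map re-estimation from the registered frame stack.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def estimate_shifts_and_gain(frames, n_iter=3):
    gain = np.ones_like(frames[0])
    for _ in range(n_iter):
        corrected = [f / gain for f in frames]
        shifts = [phase_cross_correlation(corrected[0], f)[0]
                  for f in corrected]                 # (a) frame shifts
        registered = [nd_shift(f, s) for f, s in zip(corrected, shifts)]
        scene = np.mean(registered, axis=0)           # current scene estimate
        preds = [nd_shift(scene, -s) for s in shifts]
        gain = np.mean([f / np.maximum(p, 1e-6)       # (b) per-pixel gain
                        for f, p in zip(frames, preds)], axis=0)
    return shifts, gain
```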
Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe distance. This keeps soldiers out of harm's way while they perform intelligence, surveillance, and reconnaissance (ISR) missions and support close air support troops-in-contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force multiplication. One method of achieving effective force multiplication involves fielding numerous UAVs with cameras and having multiple videos processed simultaneously by a single operator. However, monitoring multiple video streams is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These results indicate the potential to assist the operator in identifying and tracking relevant targets with aided target recognition, even on difficult video, increasing the force-multiplier effect of UAVs. This work also forms the basis for human factors research into the effects of enhancement algorithms on ISR missions.
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for troop protection, situational awareness, mission planning, damage assessment, and other tasks. Unmanned Aerial Vehicles (UAVs) gather huge amounts of video data, but it is extremely labour-intensive for operators to analyze hours and hours of received data.

At MDA, we have developed a suite of tools that can process UAV video data automatically, including mosaicking, change detection, and 3D reconstruction, which have been integrated within a standard GIS framework. In addition, the mosaicking and 3D reconstruction tools have been integrated into a Service Oriented Architecture (SOA) framework. The Visualization and Exploitation Workstation (VIEW) integrates the 2D and 3D visualization, processing, and analysis capabilities developed for UAV video exploitation. Visualization is supported through a thick-client Graphical User Interface (GUI), which allows viewing of 2D imagery, video, and 3D models. The GUI interacts with the VIEW server, which provides video mosaicking and 3D reconstruction exploitation services through the SOA framework.

The SOA framework allows multiple users to perform video exploitation by running a GUI client on the operator's computer and invoking the video exploitation functionality residing on the server. This allows the exploitation services to be upgraded easily and allows the intensive video processing to run on powerful workstations.

MDA provides UAV services to the Canadian and Australian forces in Afghanistan with the Heron, a Medium Altitude Long Endurance (MALE) UAV system. The on-going flight operations service provides important intelligence, surveillance, and reconnaissance information to commanders and front-line soldiers.
Predicting ground vehicle performance requires in-depth knowledge, captured as numeric parameters, of the terrain on which the vehicles will be operating. For off-road performance, predictions are based on rough-terrain ride comfort, which is described using a parameter termed root-mean-square (RMS) surface roughness. Likewise, on-road vehicle performance depends heavily on the slopes of the individual road segments. Traditional methods of computing RMS and road slope values call for high-resolution (inch-scale) surface elevation data, which at this scale is both difficult and time-consuming to collect. Nevertheless, a current need exists to attribute large geographic areas with RMS and road slope values in order to better support vehicle mobility predictions, and high-resolution surface data is neither available nor collectible for many of these regions. On the other hand, meter-scale data can be quickly and easily collected for these areas using unmanned aerial vehicle (UAV) based IFSAR and LIDAR sensors. A statistical technique is presented for inferring RMS values over large areas using a combination of fractal dimension and spectral analysis of five-meter elevation data. Validation of the RMS prediction technique was based on 43 vehicle ride courses with 30-centimeter surface elevation data. Also presented is a model for classifying road slopes for long road sections using five-meter elevation data; the road slope model was validated against one-meter LIDAR surface elevation profiles. These inference algorithms have been successfully implemented for regions of northern Afghanistan, and some initial results are presented.
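The underlying quantities are straightforward to compute once elevation profiles are in hand. A minimal sketch of both measures, assuming linear detrending for the roughness residual (the detrending choice and the synthetic profile are illustrative, not the paper's fractal/spectral inference):

```python
# RMS surface roughness and mean road slope from an elevation profile.
import numpy as np

def rms_roughness_cm(elevation_m, spacing_m):
    x = np.arange(len(elevation_m)) * spacing_m
    trend = np.polyval(np.polyfit(x, elevation_m, 1), x)   # remove the grade
    return 100.0 * np.sqrt(np.mean((elevation_m - trend) ** 2))

def mean_slope_percent(elevation_m, spacing_m):
    run = (len(elevation_m) - 1) * spacing_m
    return 100.0 * abs(elevation_m[-1] - elevation_m[0]) / run

profile = np.cumsum(0.02 * np.random.randn(200))   # synthetic 30 cm posts
print(rms_roughness_cm(profile, 0.3), mean_slope_percent(profile, 0.3))
```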
Separate tracking of objects such as people and the luggage they carry is important for video surveillance applications, as it allows higher-level inferences and timely detection of potential threats. However, this is a challenging problem, and in the literature people and the objects they carry are typically tracked as a single object. In this study, we propose using thermal imagery in addition to visible-band imagery for tracking in indoor applications (such as airports or metro and railway stations). We use adaptive background modeling in association with mean-shift tracking for fully automatic tracking. Trackers are refreshed using the background model to handle occlusions and splits and to detect newly emerging objects as well as objects that leave the scene. Visible- and thermal-domain tracking information is fused to allow tracking of people and the objects they carry separately, using their heat signatures. From the trajectories of these objects, interactions between them can be deduced, and potential threats, such as a person abandoning an object, can be detected in real time. Better tracking performance is also achieved than with a single modality, since thermal reflections and the halo effect, which adversely affect tracking, are eliminated by the complementary visible-band data. The proposed method has been tested on videos containing various scenarios. The experimental results show that the presented method is effective for separate tracking of objects such as people and their belongings, and for detecting their interactions in the presence of occlusions.
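A compact single-modality illustration of the detection-plus-mean-shift machinery can be written with OpenCV primitives (the video file name and initial ROI are placeholders; the thermal channel and the fusion step are omitted):

```python
# Visible-band sketch: adaptive background model to isolate movers, then
# mean-shift on a color histogram back-projection restricted to those movers.
import cv2

cap = cv2.VideoCapture("indoor.avi")          # hypothetical input video
ok, frame = cap.read()
assert ok, "no video"
bg = cv2.createBackgroundSubtractorMOG2()     # adaptive background model

x, y, w, h = 200, 150, 40, 80                 # placeholder initial person ROI
track_win = (x, y, w, h)
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [32], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)                      # foreground (moving) mask
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    backproj &= fg                            # restrict votes to movers
    _, track_win = cv2.meanShift(backproj, track_win, term)
```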
There is increasing interest in imaging spectrometers working in the SWIR and LWIR wavelength bands. Commercially available detectors are not only expensive but also have a limited number of pixels compared with visible-band detectors. A typical push-broom hyperspectral imaging system consists of a fore-optic imager, a slit, a line spectrometer, and a two-dimensional focal plane with a spatial and a spectral direction. To improve the spatial field coverage at a particular resolution, multiple systems are incorporated, with the "linear fields of view" of the systems aligned end to end. This solution is prohibitive for many applications due to the cost of the multiple detectors, coolers, and spectrometers, or due to space, weight, or power constraints. Corning presents a cost-effective solution utilizing existing detectors combined with innovative design and manufacturing techniques.
Recent advances in digital photography have enabled the development and demonstration of plenoptic cameras with impressive capabilities. They function by recording sub-aperture images that can be combined to refocus images or to generate stereoscopic pairs.

Plenoptic methods are being explored for fusing images from distributed arrays of cameras, with a view toward applications in which hardware resources are limited (e.g., by size, weight, and power constraints). Through computer simulation and experimental studies, the influence of non-idealities such as camera position uncertainty is being considered, and component-image rescaling and balancing methods are being explored to compensate. Of interest is the impact on precision passive ranging and super-resolution. In a preliminary experiment, a set of images from a camera array was recorded and merged to form a 3D representation of a scene. Conventional plenoptic refocusing was demonstrated, and techniques for balancing the images were explored. Nonlinear methods for combining the images were explored to limit the ghosting caused by sub-sampling.

Plenoptic processing was also explored as a means for determining 3D information from airborne video. Successive frames were processed as camera array elements to extract the heights of structures. Practical means were considered for rendering the 3D information in color.
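The core refocusing operation is a shift-and-add over the sub-aperture images. A minimal sketch, with the camera layout and disparity scaling as illustrative assumptions:

```python
# Plenoptic shift-and-add refocus: shift each sub-aperture (camera) image in
# proportion to its baseline offset and a chosen depth, then average.
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, offsets_px, alpha):
    """images: list of (H, W) arrays; offsets_px: (N, 2) camera offsets in
    pixels of baseline; alpha: disparity per unit baseline at the target depth."""
    acc = np.zeros_like(images[0], dtype=float)
    for img, (dy, dx) in zip(images, offsets_px):
        acc += nd_shift(img.astype(float), (alpha * dy, alpha * dx))
    return acc / len(images)

# Objects at the depth matching alpha add coherently and appear sharp;
# objects at other depths blur out, which is the refocusing effect.
```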