Daniel J. Henry (Rockwell Collins, Inc., United States); Davis A. Lange (UTC Aerospace Systems, United States); Dale Linne von Berg (U.S. Naval Research Lab., United States); S. Danny Rajan (Exelis Visual Information Solutions, United States); Thomas J. Walls (U.S. Naval Research Lab., United States); Darrell L. Young (Raytheon Intelligence & Information Systems, United States)
This PDF file contains the front matter associated with SPIE Proceedings Volume 8713, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Fighter aircraft are generally designed for attack, defense, or reconnaissance (recce) missions. Since unmanned systems such as UAVs and satellites are now used for reconnaissance, manned tactical recce assets are no longer in production, and manned recce missions will increasingly have to be flown by strike aircraft. Yet UAVs designed for recce missions can only fly at low speed, and satellites cannot image a desired point on demand, so neither meets the needs of post-attack reconnaissance in particular. This makes plain the importance of manned tactical recce, with its high-speed, all-weather, low-altitude capability. A recce mission flown by a strike fighter, however, exposes a conceptual dilemma between ISR and Non-Traditional ISR (NTISR): if the aircraft gathers reconnaissance information with its radar, SAR, or targeting pods, the concept is called NTISR; if it flies the mission with recce pods, it is called ISR. The question "What kind of reconnaissance architecture can resolve this problem?" is therefore the main objective of this study.

In this study, the Turkish Air Force (TurAF) reconnaissance architecture is analyzed and a new model, Gadget, is built. It allows modern strike aircraft to be included in the architecture as primary recce systems, divided into "soft recce" and "smart recce" components to execute "recce-based attack." Furthermore, NTISR is defined as "auxiliary systems for recce," and the ISR-NTISR dilemma is resolved, drawing on the author's seven years of experience as a reconnaissance pilot.
Vehicle intrusion is considered a significant threat to critical zones, especially militarized zones, so vehicle monitoring is of great importance. In this paper, a small wireless sensor network for vehicle intrusion monitoring, consisting of five inexpensive sensor nodes distributed over a small area and connected to a gateway in a star topology, has been designed and implemented. The system is able to detect the passage of an intruding vehicle, classify it as wheeled or tracked, and track the direction of its movement. The approach is based on the vehicle's ground vibrations for detection, the vehicle's acoustic signature for classification, and energy-based target localization for tracking. Detection and classification are implemented using several algorithms and techniques, including analog-to-digital conversion, the Fast Fourier Transform (FFT), and a neural network, all executed locally on the sensor node by a Microchip dsPIC digital signal controller. Results are sent from the sensor node to the gateway over ZigBee and from the gateway to a web server over GPRS.
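As a rough, hypothetical sketch of the node-side processing chain described above (frame the acoustic signal, take FFT-based spectral features, and classify wheeled vs. tracked with a small neural network), the fragment below illustrates the idea; the sampling rate, band count, and network size are assumptions, not values from the paper:

```python
import numpy as np

FS = 4096          # assumed sampling rate (Hz)
N_FFT = 1024       # frame length
N_BANDS = 16       # averaged spectral bands used as features

def spectral_features(frame: np.ndarray) -> np.ndarray:
    """Magnitude spectrum of one acoustic frame, averaged into coarse bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), N_FFT))
    bands = np.array_split(spectrum, N_BANDS)
    feats = np.array([b.mean() for b in bands])
    return feats / (feats.sum() + 1e-12)   # normalize out absolute level

def mlp_classify(feats, w1, b1, w2, b2) -> int:
    """One-hidden-layer MLP; the weights would be trained offline and
    stored on the dsPIC. Returns 0 = wheeled, 1 = tracked."""
    h = np.tanh(feats @ w1 + b1)
    return int((h @ w2 + b2).argmax())
```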
Using its patented VQ™ finishing process, Raytheon EO Innovations has been producing low-scatter, low-figure-error, affordable aluminum 6061-based mirrors for long-standoff intelligence, surveillance, and reconnaissance (ISR) systems since 2005. These common-aperture multispectral systems require λ/30 root-mean-square (RMS) surface figure and sub-20 Å RMS finishes for optimal visible imaging performance. This paper discusses the process results, scatter performance, and fabrication capabilities of Multispectral Reflective Lightweight Optics Technology (MeRLOT™), a new lightweight substrate material. This technology enables lightweight, common-aperture, broadband performance that can be put in the hands of the warfighter for precision targeting and surveillance operations.
A lightweight single-aperture, multi-spectral sensor operating from the visible to the LWIR has been designed, manufactured, and tested, exploiting a Three Mirror Anastigmat (TMA) telescope featuring thin free-form mirrors electroformed from negative masters. Manufacturing complexity resides only in the realization of the masters, whose contribution to the sensor cost decreases with the number of replicas. The TMA, suitable for airborne surveillance applications, has an f-number of 1.4, a focal length of 136 mm, and a field of view of 4.3° × 3.1°, and provides two channels, one in the MWIR-LWIR and one in the visible waveband. The nominal contrast is better than 75% in the visible at 25 cycles/mm. Electroformed 1 mm thick mirrors keep the sensor mass below 3 kg. Stray-light and thermo-structural design have been carried out to comply with airborne conditions.
The proliferation of small Unmanned Air Vehicles (UAVs) in the past decade has been driven, in part, by the diverse applications that various industries have found for these platforms. Originally, these applications were predominantly military in nature, but they now include law enforcement/security, environmental monitoring/remote sensing, agricultural surveying, filmmaking, and others. Many of these require sensors/payloads such as cameras, laser pointers/illuminators/rangefinders, and other systems that must be pointed and/or stabilized and therefore require a precision miniature gimbal or other means of controlling their line of sight (LOS). Until now, these markets have been served by traditional, larger gimbals; however, the latest class of small UAVs demands much smaller gimbals that still deliver high performance. The limited size and weight of these gimbaled devices result in design challenges unique to the small-gimbal field. In the past five years, Ascendant Engineering Solutions has designed, analyzed, and built several small-gimbal systems to meet these challenges and has undertaken a number of trade studies to investigate techniques for achieving optimal performance within the inherent limitations mentioned above. These studies have covered gimbal configurations; feedback sensors such as gyros, IMUs, and encoders; drive-train configurations; control system techniques; packaging and interconnect; and technologies such as fast-steering mirrors and image-stabilization algorithms. This paper summarizes the results of these trade studies, attempts to identify inherent trends and limitations in the various design approaches and techniques, and discusses practical issues such as test and verification.
Designed to execute mapping and surveillance missions for crisis monitoring from a solar-powered High Altitude Long Endurance UAV flying at 18 km in the stratosphere, the MEDUSA high-resolution camera acquires frame images with a ground sampling distance of 30 cm and a swath of 3 km. Since mass is a dominant driver of UAV performance, the MEDUSA payload was severely mass-optimized to fit within the physical bounds of 2.6 kg, 12 cm diameter, and 1 m length, including an inertial navigation system and data transmission equipment. Thanks to an innovative dual-sensor-on-single-chip concept, the MEDUSA payload hosts two independent frame cameras of 10000×1200 pixels each (one panchromatic and one color). The MEDUSA stratospheric camera completed its system-level test campaign in autumn 2012 and is ready for its maiden flight.

Using the main building blocks of this stratospheric camera, a modified version is being developed that is adapted to more conventional UAVs flying at lower altitude. The current design targets a ground resolution of 10 cm and a swath of 1 km in each single image. First test flights have been conducted with an engineering-model version of the camera, generating representative image data. The functionality is also being expanded by adding hyperspectral sensitivity to the high-spatial-resolution image acquisition within the same compact camera system.
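A quick consistency check of the quoted figures (assuming square pixels and nadir viewing): the swath is simply the ground sampling distance multiplied by the across-track pixel count.

```python
gsd_m = 0.30            # 30 cm ground sampling distance
pixels_across = 10000   # across-track pixels of one frame sensor
print(gsd_m * pixels_across / 1000.0)   # -> 3.0 km swath, matching the text
```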
Free Space Optical Communications (FSOC) is progressing continuously. With the successful in-orbit verification of a Laser Communication Terminal (LCT), the coherent homodyne BPSK scheme has advanced to a standard for FSOC and is becoming increasingly prevalent. The LCT is located not only on satellites in Low Earth Orbit (LEO); with spacecraft such as ALPHASAT-TDP and the European Data Relay Satellite (EDRS), the LCT will also operate in Geosynchronous Orbit (GEO) in the near future. In other words, the LCT has reached practical application.

With such space assets in place, the time has come for uses beyond optical Inter-Satellite Links (ISL). Aeronautical applications, for instance High Altitude Long Endurance (HALE) or Medium Altitude Long Endurance (MALE) Unmanned Aerial Systems (UAS), must be addressed. Driving factors and advantages of FSOC in HALE/MALE UAS missions are highlighted. Numerous practice-related issues are described concerning the space segment, the aeronautical segment, and the ground segment. The advantages for UAS missions that result from using FSOC exclusively for wideband transmission of sensor data, while vehicle command and control is maintained via RF communication as before, are described. Moreover, the paper discusses FSOC as an enabler for the integration of air- and space-based wideband Intelligence, Surveillance and Reconnaissance (ISR) systems into existing military command and control systems.
This paper aims at developing a new technology for autonomous, silent surveillance that monitors sound sources, stationary or moving in 3D space, and performs blind separation of target acoustic signals. The underlying principle is a hybrid approach that uses: 1) a passive sonic detection and ranging method consisting of iterative triangulation and redundant checking to locate the Cartesian coordinates of arbitrary sound sources in 3D space; 2) advanced signal processing to sanitize the measured data and enhance the signal-to-noise ratio; and 3) short-time source localization and separation to extract the target acoustic signals from the directly measured mixed ones. A prototype based on this technology has been developed. Its hardware includes six Brüel & Kjær 1/4-in condenser microphones (Type 4935), two 4-channel data acquisition units (Type NI-9234) with a maximum sampling rate of 51.2 kS/s per channel, one NI cDAQ-9174 chassis, a thermometer to measure the air temperature, a camera to view the relative positions of located sources, and a laptop to control data acquisition and post-processing. Test results for locating arbitrary sound sources emitting continuous, random, impulsive, and transient signals, and for blind separation of signals in various non-ideal environments, are presented. Because this system uses only the acoustic signal emitted by a target source, it is invisible to anti-surveillance devices. It can be mounted on a robot or an unmanned vehicle to perform various covert operations, including intelligence gathering in open or confined areas, or to carry out rescue missions searching for people trapped inside ruins or buried under wreckage.
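For orientation, here is a minimal sketch of the passive localization idea: estimate time differences of arrival (TDOA) between microphone pairs by cross-correlation, then solve for the source position. The nonlinear least-squares solver is an illustrative stand-in, not the authors' iterative triangulation with redundant checking:

```python
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # nominal speed of sound (m/s); the prototype's thermometer would refine this

def tdoa(sig_i, sig_j, fs):
    """Arrival delay of sig_i relative to sig_j (s), from the cross-correlation peak."""
    corr = np.correlate(sig_i, sig_j, mode="full")
    return (corr.argmax() - (len(sig_j) - 1)) / fs

def locate(mic_xyz, delays_to_ref):
    """mic_xyz: (M, 3) microphone positions;
    delays_to_ref[m-1] = tdoa(signal of mic m, signal of mic 0)."""
    def residual(p):
        d = np.linalg.norm(mic_xyz - p, axis=1)          # mic-to-source distances
        return (d[1:] - d[0]) - C * np.asarray(delays_to_ref)
    return least_squares(residual, x0=mic_xyz.mean(axis=0)).x
```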
The University of Hawaii has developed a concept to ruggedize an existing thermal infrared hyperspectral system for use on the NASA SIERRA UAV. The Hawaii Institute of Geophysics and Planetology (HIGP) has developed a suite of instruments that acquire high-spectral-resolution thermal infrared image data with low mass and power consumption by combining microbolometers with stationary interferometers, achieving hyperspectral resolution (20 wavenumbers between 8 and 14 micrometers) with signal-to-noise ratios as high as 1500:1. Several similar instruments have been developed and flown by our research group. One recent iteration, developed under NASA EPSCoR funding and designed for inclusion on a microsatellite (the Thermal Hyperspectral Imager, THI), has a mass of 11 kg. Making THI ready for deployment on the SIERRA will involve incorporating improvements made while building nine thermal interferometric hyperspectral systems for commercial and government sponsors as part of HIGP's wider program. This includes: a) hardening the system for operation in the SIERRA environment, b) a compact design for the calibration system, c) reconfiguring software for autonomous operation, d) incorporating HIGP-developed detectors with increased responsivity at the 8-micron end of the TIR range, and e) an improved interferometer to increase SNR for imaging at the SIERRA's air speed. UAVs provide a unique platform for science investigations that the proposed instrument, UAVTHI, will be well placed to facilitate (e.g., very high temporal resolution measurements of dynamic phenomena such as wildfires and volcanic ash clouds). Its spectral range is suited to measuring gas plumes, including sulfur dioxide and carbon dioxide, which exhibit absorption features in the 8 to 14 micron range.
High-resolution broadband imagery in the visible and infrared bands provides valuable detection capabilities based on target shapes and temperatures. The spectral resolution provided by a hyperspectral imager, however, adds a spectral dimension to the measurements, offering an additional means of detecting and identifying targets based on their spectral signature.

The Telops Hyper-Cam sensor is an interferometer-based imaging system that enables the spatial and spectral analysis of targets using a single sensor. It is based on Fourier-transform technology, which yields high spectral resolution and enables highly accurate radiometric calibration. It provides datacubes of up to 320×256 pixels at spectral resolutions as fine as 0.25 cm⁻¹; the LWIR version covers the 8.0 to 11.8 μm spectral range. The Hyper-Cam has recently been integrated and flown on a novel airborne gyro-stabilized platform inside a fixed-wing aircraft.

The new platform, more compact and more advanced than its predecessor, is described in this paper, and first results of target detection and identification are presented.
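A back-of-envelope relation (an assumption-level rule of thumb, not from the paper) helps put the 0.25 cm⁻¹ figure in context: the unapodized resolution of a Fourier-transform spectrometer is roughly the reciprocal of its maximum optical path difference (OPD), so this setting implies an OPD of about 4 cm.

```python
resolution_cm1 = 0.25                 # quoted spectral resolution
opd_max_cm = 1.0 / resolution_cm1     # implied maximum optical path difference
print(opd_max_cm)                     # -> 4.0 cm
```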
Traditionally, daylight and night vision imaging systems have required image intensifiers plus daytime cameras. SRI's new NV-CMOS™ image sensor technology, by contrast, is designed to capture images over the full range of illumination from bright sunlight to starlight. SRI's NV-CMOS image sensors provide low-light sensitivity approaching that of an analog image intensifier tube with the cost, power, ruggedness, flexibility, and convenience of a digital CMOS imager chip. NV-CMOS provides multiple megapixels at video frame rates with low noise (<2 e-), high sensitivity across the visible and near-infrared (NIR) bands (peak QE >85%), high resolution (MTF at Nyquist >50% @ 650 nm), and extended dynamic range (>75 dB). The latest test data from the NV-CMOS imager technology will be presented.

Unlike conventional image intensifiers, the NV-CMOS image sensor outputs a digital signal, ideal for recording or sharing video as well as for fusion with thermal imagery. The result is a substantial reduction in size and weight, ideal for SWaP-constrained missions such as UAVs and mobile operations. SRI's motion-adaptive noise reduction processing further increases sensitivity and reduces image smear. Enhancing moving targets in imagery captured under extreme low-light conditions poses difficult challenges; SRI has demonstrated that image registration provides a robust solution for enhancing global scene contrast under very low SNR conditions.
The paper presents a study of the capability of time- and frequency-domain algorithms for bistatic SAR processing. Two representative algorithms, Bistatic Fast Backprojection (BiFBP) and Bistatic Range Doppler (BiRDA), both applicable to general bistatic geometry, are selected as examples of time- and frequency-domain algorithms. Their capability is evaluated against criteria such as the processing time required to reconstruct SAR images from bistatic SAR data and quality assessments of the resulting images.
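For readers unfamiliar with the time-domain family, the kernel that BiFBP accelerates is plain bistatic global backprojection; a heavily simplified sketch follows (illustrative geometry, with carrier-phase compensation omitted for brevity):

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def bistatic_backprojection(data, t_axis, tx_pos, rx_pos, grid):
    """data: (n_pulses, n_samples) range-compressed complex pulses;
    t_axis: (n_samples,) fast-time axis (s);
    tx_pos, rx_pos: (n_pulses, 3) transmitter/receiver positions;
    grid: (n_pix, 3) image pixel positions. Returns a complex image."""
    image = np.zeros(len(grid), dtype=complex)
    for p in range(data.shape[0]):
        # bistatic delay = (transmitter->pixel + pixel->receiver) / c
        delay = (np.linalg.norm(grid - tx_pos[p], axis=1) +
                 np.linalg.norm(grid - rx_pos[p], axis=1)) / C
        image += (np.interp(delay, t_axis, data[p].real) +
                  1j * np.interp(delay, t_axis, data[p].imag))
    return image
```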
The paper presents an alternative approach to focusing moving targets, based on normalized relative speed (NRS). Like the focusing approach currently in use, the approach proposed in this paper is aimed at ultrawideband-ultrawidebeam synthetic aperture radar (UWB SAR) systems such as CARABAS-II. The proposal is shown to overcome the shortcomings of the original focusing approach and can be extended to more complicated cases, for example bistatic SAR.
The validation of safety-critical applications, such as autonomous UAV operations in an environment that may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
Phase retrieval and phase diversity are wavefront sensing techniques fed by focal-plane data. In phase retrieval, the incoming wavefront is estimated from a single (near-)focal image of an unresolved source. In phase diversity, from at least two images of the same (complex) object recorded in the presence of a known optical aberration (e.g., defocus), both the unknown incoming wavefront and the observed object can be derived. These two techniques have many advantages: the hardware is limited to (or can be merged into) the usual imaging sensor, the number of estimated modes can be continuously tuned, and both are among the very few methods enabling the measurement of differential pistons/tips/tilts on segmented or divided apertures. The counterpart is that the complexity is shifted to digital processing, which is either iterative and slow, or fast but limited to a first-order phase expansion. Based on an innovative physical approach and mathematical inversion, new simple, analytical, and exact algorithms have recently been derived for phase retrieval and phase diversity. Combined with recent detector and processor advances, these algorithms can be implemented in adaptive/active optics loops, or can even provide a purely digital, on-the-fly alternative. In this paper, for the first time, we present experimental validation of these algorithms through the cophasing of a segmented mirror.
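The analytical algorithms themselves are not reproduced here; for contrast, the classical iterative focal-plane baseline they aim to replace is a Gerchberg-Saxton-style loop, sketched below under idealized monochromatic, noiseless assumptions:

```python
import numpy as np

def phase_retrieval(measured_intensity, pupil_mask, n_iter=200):
    """Estimate the pupil-plane phase of an unresolved source from one
    focal-plane intensity image (idealized iterative baseline)."""
    amp = np.sqrt(measured_intensity)
    field = pupil_mask.astype(complex)               # start from zero phase
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = amp * np.exp(1j * np.angle(focal))   # impose measured modulus
        field = np.fft.ifft2(focal)
        field = pupil_mask * np.exp(1j * np.angle(field))  # impose aperture support
    return np.angle(field) * pupil_mask
```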
This paper presents experimental results obtained with Ziva Corp.'s image processing approach, Computational Imaging for Aberrated Optics (CIAO), a multi-image deconvolution algorithm. CIAO enhances the performance of imaging systems by accommodating wavefront error, allowing the designer to improve system performance or reduce system cost. CIAO has been successfully tested in a wide-field-of-view imaging system with significant aberrations. The experimental results show CIAO restoring high-quality images from highly blurred ones. Specifically, CIAO allows the pupil to open more than 50% beyond the diffraction-limited aperture, which allows more light capture and a higher cutoff resolution.
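CIAO's algorithm is proprietary; purely to illustrate what a multi-image deconvolution does, here is a generic multi-frame Wiener combination of several blurred frames with known (or estimated) PSFs:

```python
import numpy as np

def multiframe_wiener(frames, psfs, nsr=1e-2):
    """frames, psfs: lists of equal-size 2D arrays (PSFs centered);
    nsr: assumed noise-to-signal ratio acting as the regularizer."""
    num, den = 0.0, nsr
    for g, h in zip(frames, psfs):
        H = np.fft.fft2(np.fft.ifftshift(h))
        num = num + np.conj(H) * np.fft.fft2(g)   # matched-filtered data
        den = den + np.abs(H) ** 2                # accumulated PSF power
    return np.real(np.fft.ifft2(num / den))
```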
Long-range video surveillance performance is often severely diminished by atmospheric turbulence. The larger apertures typically used for video-rate operation at long range are particularly susceptible to scintillation and blurring effects that limit the overall diffraction efficiency and resolution. In this paper, we present research progress toward a digital signal processing technique that aims to mitigate the effects of turbulence in real time. Our previous work in this area focused on an embedded implementation for portable applications; our more recent research has focused on functional enhancements to the same algorithm using general-purpose hardware. We present techniques that were successfully employed to accelerate processing of high-definition color video streams, and we study performance under non-ideal conditions involving moving objects and panning cameras. Finally, we compare the real-time performance of two implementations, one using a CPU and one using a GPU.
The potential benefits of real-time, or near-real-time, turbulent image processing hardware for long-range surveillance and weapons targeting are sufficient to motivate a significant commitment of both time and money to its development. Thoughtful comparisons between candidates are necessary to decide confidently on a preferred processing algorithm. In this paper, we compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. The comparison is performed over three sets of 1000 simulated images each, for low, moderate, and severe turbulence-induced image degradation. It shows that speckle imaging techniques reduce the MSE by 46, 42, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. The MFBD method provides 40, 29, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39, 29, and 27 percent are obtained using speckle imaging methods with 25 input frames, and of 38, 34, and 33 percent, respectively, for the MFBD method with 150 input frames.
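The percentage figures are straightforward to reproduce once per-frame MSEs are in hand; our reading of the metric (not the authors' exact script) is:

```python
import numpy as np

def mse(a, b):
    return np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2)

def percent_reduction(truth, degraded_frames, reconstructed):
    """Average MSE of the raw input frames vs. MSE of the reconstruction."""
    before = np.mean([mse(f, truth) for f in degraded_frames])
    after = mse(reconstructed, truth)
    return 100.0 * (before - after) / before
```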
Modern thermal cameras acquire IR images with a high dynamic range because, in specific surveillance applications, they must sense the large temperature variations of monitored scenes with high thermal resolution. Initially developed for visible-light images and recently extended to the display of IR images, high dynamic range compression (HDRC) techniques aim to furnish plain images to human operators for a first intuitive comprehension of the sensed scenario, without altering the features of the IR imagery. In this context, the maritime scenario is a challenging case for testing and developing HDRC strategies, since images collected for surveillance at sea are typically characterized by high thermal gradients between the background scene and classes of objects at different temperatures. In the development of a new IRST system, Selex ES assembled a demonstrator equipped with modern thermal cameras and planned a measurement campaign over a maritime scenario to collect IR sequences in different operating conditions. This has produced a case record of situations suitable for testing HDRC techniques. In this work, a survey of HDRC approaches is presented, pointing out advantages and drawbacks, with a focus on strategies specifically designed to display IR images. A detailed performance analysis is discussed in order to address the visualization task with reference to typical issues of maritime IR images, such as robustness to the horizon effect and the display of very warm objects and flat areas.
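One widely used HDRC strategy of the kind surveyed here is plateau- (clip-) limited histogram equalization, which compresses the broad background range without letting a few very warm pixels claim the entire display range; a sketch with illustrative parameter values:

```python
import numpy as np

def plateau_equalize(ir16, plateau=2000, out_levels=256):
    """Map a uint16 IR frame to 8 bits with a clipped-histogram transfer curve."""
    hist, _ = np.histogram(ir16, bins=65536, range=(0, 65536))
    hist = np.minimum(hist, plateau)          # cap bins from large flat regions
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = np.round(cdf * (out_levels - 1)).astype(np.uint8)
    return lut[ir16]
```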
In this paper, an algorithm for estimating the geometric transformation parameters of images in multispectral video sequences is considered. An approach for optimally choosing reference areas, minimizing the error caused by additive noise, is proposed. Results of an experimental evaluation are given.
Georgia Tech has developed a new modeling and simulation tool that predicts both radar and electro-optical infrared (EO-IR) lateral range curves (LRCs) and sweep widths (SWs) under the Optimization of Radar and Electro-Optical Sensors (OREOS) program for US Coast Guard Search and Rescue (SAR) applications. In a search scenario where the location of the lost or overdue craft is unknown, the Coast Guard conducts searches based upon standard procedure, personnel expertise, operational experience, and models. One metric for search planning is the sweep width, the integrated area under an LRC. Because a searching craft is equipped with radar and EO-IR sensor suites, the Coast Guard is interested in accurate predictions of sweep width for the particular search scenario. Here, we discuss the physical models that make up the EO-IR portion of the OREOS code. First, Georgia Tech SIGnature (GTSIG) generates thermal signatures of search targets based upon the thermal and optical properties of the target and the environment; a renderer then calculates target contrast. Sensor information, atmospheric transmission, and the calculated target contrasts are input into NVESD models to generate probability of detection (PD) vs. slant range data. These PD vs. range values are then converted into LRCs by accounting for a continuous-look search from a moving platform, and sweep widths are calculated. The OREOS tool differs from previous methods in that physical models are used to predict the LRCs and sweep widths at every step of the process, whereas heuristic methods were previously employed to generate final predictions.
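The sweep-width definition used above, made concrete: integrate the lateral range curve (detection probability versus lateral offset) over offset. The sample curve below is synthetic, not OREOS output:

```python
import numpy as np

lateral_m = np.linspace(-5000.0, 5000.0, 201)   # lateral offset from track (m)
pd = np.exp(-(lateral_m / 2000.0) ** 2)         # illustrative lateral range curve
sweep_width_m = np.trapz(pd, lateral_m)         # W = integral of PD over offset
print(sweep_width_m)                            # ~3545 m for this synthetic curve
```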
We report on the application of Optical Flow (OF) and state-of-the-art multi-frame Super-Resolution (SR) algorithms to imagery that models space objects (SOs). Specifically, we demonstrate the ability to track SOs through sequences of tens of images using different OF algorithms, and we show how tracking accuracy depends on illumination changes and on the pixel displacements between neighboring images. Additionally, we demonstrate spatial acuity enhancement of the pixel-limited resolution of SO motion imagery by applying a novel SR algorithm that accounts for OF errors.
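As an indication of how an OF-based displacement estimate is obtained in practice, the sketch below uses OpenCV's Farneback method as an illustrative stand-in (not necessarily one of the algorithms compared in the paper):

```python
import cv2
import numpy as np

def object_shift(prev_gray, next_gray, mask):
    """Mean (dx, dy) displacement over a boolean object mask, in pixels."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    return flow[mask].mean(axis=0)   # average flow vector over the object
```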
Effective use of intelligence, surveillance, and reconnaissance (ISR) data gathered by unmanned aerial vehicle (UAV) missions is vital to US Military operations. The 2006 Geospatial Intelligence (GEOINT) Basic Doctrine includes the following statements:
(a) A primary purpose of geospatial products has always been to provide visualization of operational spaces and activity patterns of all sizes and scales, ranging from global and regional level to cities and even individual buildings. (b) A picture is simply the fastest way to communicate spatial information to a customer.
Parallax Visualization (PV) technologies have been introduced, which: (1) use existing UAV sensor data, (2) provide critical alignment software tools, and (3) produce autostereoscopic (automatic depth perception) ISR work products. PV work products can be distributed across military networks and viewed on standard unaided displays. Previous evaluations have established that PV of ISR full motion video (FMV) data presents three-dimensional information in an obvious and immediate manner, thus literally adding a new dimension to the basic picture goal as set out by the GEOINT doctrine.
GPS is a critical sensor for Unmanned Aircraft Systems (UASs) due to its accuracy, global coverage and small hardware footprint, but is subject to denial due to signal blockage or RF interference. When GPS is unavailable, position, velocity and attitude (PVA) performance from other inertial and air data sensors is not sufficient, especially for small UASs. Recently, image-based navigation algorithms have been developed to address GPS outages for UASs, since most of these platforms already include a camera as standard equipage. Performing absolute navigation with real-time aerial images requires georeferenced data, either images or landmarks, as a reference. Georeferenced imagery is readily available today, but requires a large amount of storage, whereas collections of discrete landmarks are compact but must be generated by pre-processing. An alternative, compact source of georeferenced data having large coverage area is open source vector maps from which meta-objects can be extracted for matching against real-time acquired imagery. We have developed a novel, automated approach called MINA (Meta Image Navigation Augmenters), which is a synergy of machine-vision and machine-learning algorithms for map aided navigation. As opposed to existing image map matching algorithms, MINA utilizes publicly available open-source geo-referenced vector map data, such as OpenStreetMap, in conjunction with real-time optical imagery from an on-board, monocular camera to augment the UAS navigation computer when GPS is not available. The MINA approach has been experimentally validated with both actual flight data and flight simulation data and results are presented in the paper.
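A heavily simplified illustration of map-aided alignment, not MINA's actual machine-vision/machine-learning pipeline: rasterize the geo-referenced vector features, edge-detect the live frame, and score candidate offsets by normalized cross-correlation:

```python
import cv2
import numpy as np

def best_offset(frame_gray, map_raster, search=50):
    """map_raster: vector map rasterized at the frame's predicted scale/pose,
    same size as the frame. Returns the (dx, dy) within +/-search pixels that
    best aligns the frame's edge map with the rasterized map."""
    edges = cv2.Canny(frame_gray, 50, 150).astype(np.float32)
    template = edges[search:-search, search:-search]   # interior of the edge map
    score = cv2.matchTemplate(map_raster.astype(np.float32), template,
                              cv2.TM_CCORR_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(score)
    return max_loc[0] - search, max_loc[1] - search
```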
Counting people is a common topic in the area of visual surveillance and crowd analysis. While many image-based solutions are designed to count only a few persons at a time, such as pedestrians entering a shop or watching an advertisement, there is hardly any solution for counting large crowds of several hundred persons or more. We addressed this problem previously with a semi-automatic system able to count crowds of hundreds or thousands of people in aerial images of demonstrations or similar events; that system requires major user interaction to segment the image. Our principal aim is to reduce this manual interaction, and to achieve it we propose a new, automatic system. Besides counting the people in large crowds, the system yields the positions of people, allowing a plausibility check by a human operator. To automate people counting, we use crowd density estimation. The determination of crowd density is based on several features, such as edge intensity and spatial frequency, which indicate the density and discriminate between a crowd and other image regions such as buildings, bushes, or trees. We compare the performance of our automatic system to the previous semi-automatic system and to manual counting in images. By counting a test set of aerial images showing large crowds of up to 12,000 people, we measure the performance gain of the new system. By improving our previous system, we increase the benefit of an image-based solution for counting people in large crowds.
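One of the listed density features, local edge intensity, can be sketched in a few lines; the tile size and the downstream regressor are illustrative choices, not the paper's configuration:

```python
import cv2
import numpy as np

def edge_density_map(gray, tile=32):
    """Mean gradient magnitude per tile: high over crowds, low over smooth areas.
    A regressor trained on annotated tiles would map such features to counts."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    h, w = (s // tile for s in mag.shape)
    return mag[:h * tile, :w * tile].reshape(h, tile, w, tile).mean(axis=(1, 3))
```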
Small unmanned aerial vehicles (UAVs) are increasingly popular because of their low flight altitude, short deployment time, and affordability. However, small UAVs are sensitive to wind and airstream during flight, and their videos are often characterized by jitter, so effective electronic image stabilization is important. In this paper, we first summarize and analyze the flight characteristics of small UAVs. Second, we analyze the following problems: 1) under drift conditions, intentional motion estimation is difficult, and much information is lost if motion compensation is not conducted properly; 2) at large tilt angles, the image motion is complicated and simple motion models are not suitable. Corresponding algorithms are proposed to cope with these problems. Finally, experiments indicate that our methods are effective.
The airborne video streams of small UAVs are commonly plagued by distracting jitter and shaking, disorienting rotations, noisy and distorted images, and other unwanted motion. These problems collectively make it very difficult for observers to extract useful information from the video. Because of the small payload of small UAVs, improving image quality by means of electronic image stabilization is a priority. But when a small UAV makes a turn, its flight characteristics cause the video to become oblique, which creates considerable difficulty for electronic image stabilization. A homography model performs well for oblique image motion estimation but makes intentional motion estimation much harder. In this paper, we therefore focus on stabilizing video while small UAVs bank and turn, assuming the small UAV flies along an arc of fixed turning radius. After a series of experimental analyses of the flight characteristics and turning paths of small UAVs, we present a new method that estimates the intentional motion by fitting the path of the frame center to the video's motion track. Meanwhile, dynamic mosaicking of the image sequence compensates for the limited field of view. Finally, the proposed algorithm is validated on actual airborne videos. The results show that the proposed method effectively stabilizes oblique video from small UAVs.
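The fixed-turning-radius assumption suggests a simple instance of the intentional-motion model: a least-squares circle fit (Kasa method) to the frame-center track, from which the smooth turning path can be separated from jitter. A sketch, not the paper's exact fitting procedure:

```python
import numpy as np

def fit_turn_circle(xs, ys):
    """Least-squares fit of x^2 + y^2 + D x + E y + F = 0 to the frame-center
    track (xs, ys as arrays); returns center (cx, cy) and radius r."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs ** 2 + ys ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    return cx, cy, np.sqrt(cx ** 2 + cy ** 2 - F)
```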
Measured indicators such as resolution, blur, noise and artifact estimates are used to predict video interpretability. The indicators show the effect of compression, lost packets, and enhancements. The indicators and metadata-derived resolution can also be used to select appropriate algorithms for further enhancement or exploitation.
Numerous methods exist for quantifying the information potential of imagery exploited by a human observer. The National Imagery Interpretability Ratings Scale (NIIRS) is a useful standard for intelligence, surveillance, and reconnaissance (ISR) applications. Extensions of this approach to motion imagery provide an understanding of the factors affecting interpretability of video data. More recent investigations have shown, however, that human observers and automated processing methods are sensitive to different aspects of image quality. This paper extends earlier research to present a model for quantifying the quality of motion imagery in the context of automated exploitation. In particular, we present a method for predicting the tracker performance and demonstrate the results on a range of video clips. Automated methods for assessing video quality can provide valuable feedback for collection management and guide the exploitation and analysis of the imagery.
Two of the biggest challenges in designing UxV vision systems are properly representing high dynamic range scene content using low dynamic range components and reducing camera motion blur. SRI's MASI-HDR (Motion Adaptive Signal Integration-High Dynamic Range) is a novel technique for generating blur-reduced video using multiple captures for each displayed frame while increasing the effective camera dynamic range by four bits or more. MASI-HDR processing thus provides high-performance video from rapidly moving platforms in real-world conditions, in real time and at low latency, enabling even the most demanding applications on air, ground, and water.
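As we read it, the core idea can be caricatured as align-and-sum over the sub-exposures of each output frame; the translation-only alignment below is a deliberate simplification of SRI's motion-adaptive processing:

```python
import cv2
import numpy as np

def integrate_subframes(frames):
    """frames: grayscale sub-exposures of one output frame (same size)."""
    ref = frames[0].astype(np.float32)
    acc = ref.copy()
    for f in frames[1:]:
        f32 = f.astype(np.float32)
        (dx, dy), _ = cv2.phaseCorrelate(ref, f32)   # shift of f32 relative to ref
        M = np.float32([[1, 0, -dx], [0, 1, -dy]])   # translate it back onto ref
        acc += cv2.warpAffine(f32, M, f32.shape[::-1])
    return acc   # the sum carries more dynamic range than any single sub-exposure
```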
Tracking targets in video surveillance, with the possibility of moving the camera to keep the target within the field of view, is an important task for security personnel working at sensitive sites.

This work presents a real-time 3D tracking system based on stereovision. The camera system is mounted on a pan-and-tilt platform in order to continuously track a detected target. Particle filters are used for tracking, and a pattern recognition approach keeps the focus on the target of interest. The 3D position of the target relative to the stereovision frame is computed using stereovision techniques; this position makes it possible to follow the target in a georeferenced site map in real time.

Tests conducted in outdoor scenarios show the efficiency of the proposed approach.
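The stereovision range computation behind such a 3D position estimate, for an ideal rectified pinhole pair (a textbook relation, not the paper's calibration pipeline):

```python
def stereo_point(u_left, u_right, v, fx, baseline_m, cx, cy):
    """(X, Y, Z) in the left-camera frame from one matched pixel pair,
    assuming a rectified pair with equal focal lengths (fx = fy)."""
    d = u_left - u_right            # disparity (pixels), positive for valid depth
    Z = fx * baseline_m / d         # depth (m)
    X = (u_left - cx) * Z / fx      # lateral offset (m)
    Y = (v - cy) * Z / fx           # vertical offset (m)
    return X, Y, Z
```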
The Office of Naval Research (ONR) is looking for methods to perform higher levels of sensor processing onboard UAVs to alleviate the need to transmit full-motion video to ground stations over constrained data links. Charles River Analytics is particularly interested in performing intelligence, surveillance, and reconnaissance (ISR) tasks using UAV sensor feeds. Computing with approximate arithmetic can provide a 10,000x improvement in size, weight, and power (SWaP) over desktop CPUs, thereby enabling ISR processing onboard small UAVs. Charles River and Singular Computing are teaming on an ONR program to develop these low-SWaP ISR capabilities using a small, low-power, single-chip machine, developed by Singular Computing, with many thousands of cores. Producing reliable results efficiently on massively parallel approximate machines requires adapting the core kernels of algorithms. We describe a feature-aided tracking algorithm adapted for the novel hardware architecture, which will be suitable for use onboard a UAV. Tests have shown the algorithm produces results equivalent to state-of-the-art traditional approaches while achieving a 6400x improvement in the speed/power ratio.
Unmanned surveillance platforms have a ubiquitous presence in surveillance and reconnaissance operations. As the resolution and fidelity of the video sensors on these platforms increases, so does the bandwidth required to provide the data to the analyst and the subsequent analyst workload to interpret it. This leads to an increasing need to perform video processing on-board the sensor platform, thus transmitting only critical information to the analysts, reducing both the data bandwidth requirements and analyst workload.
In this paper, we present a system for object recognition in video that employs embedded hardware and CPUs and that can be implemented onboard an autonomous platform to provide real-time information extraction. Called NEOVUS (NEurOmorphic Understanding of Scenes), our system draws inspiration from models of mammalian visual processing and is implemented on state-of-the-art COTS hardware to achieve low size, weight, and power while maintaining real-time processing at reasonable cost. We use visual attention methods, based on motion and form, to detect stationary and moving objects from a moving platform, and we employ multi-scale convolutional neural networks, mapped to FPGA hardware, for classification. Evaluation of our system has shown that we can achieve real-time speeds of thirty frames per second on videos at up to five-megapixel resolution. Our system shows a three to four orders of magnitude reduction in power compared to state-of-the-art computer vision algorithms while reducing the communications bandwidth required for evaluation.
In real-world target tracking scenarios, interactions among multiple moving targets can severely compromise the performance of the tracking system. Targets involved in interactions are typically closely spaced and are often partially or entirely occluded by other objects; in these cases, valid target observations are unlikely to be available. To address this issue, we present an integrated multi-target tracking system. The data association method evaluates the overlap rates between newly detected objects (target observations) and already-tracked targets, and decides whether a target is interacting with other targets and whether it has a valid observation. The system can thus recognize target interactions and reject invalid target observations. According to the association results, distinct strategies are adopted to update and manage the tracks of interacting versus well-isolated targets. Testing on real-world airborne video sequences demonstrates the excellent performance of the proposed system for tracking targets through multiple target interactions. Moreover, the system operates in real time on an ordinary desktop computer.
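The overlap-rate test at the heart of this kind of data association reduces to intersection-over-union gating; the gate thresholds below are illustrative, not the paper's tuned values:

```python
def iou(a, b):
    """Overlap rate of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def classify_observation(det, own_track, other_tracks, t_valid=0.5, t_interact=0.2):
    """Observation is valid if it overlaps its own track strongly; an
    interaction is flagged if it also overlaps any other track."""
    valid = iou(det, own_track) > t_valid
    interacting = any(iou(det, t) > t_interact for t in other_tracks)
    return valid, interacting
```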
Automatic object detection and tracking has been widely applied in video surveillance systems for homeland security and in data fusion for remote sensing and airborne imagery. Typical applications include human motion analysis, vehicle detection, and building detection. Here we conduct object detection and tracking under planar constraints for objects of interest. Planar surfaces abound in man-made environments and provide much useful information for image understanding, which can be exploited to improve the performance of object detection and tracking. Experiments on real data show that object detection and tracking can be successfully implemented by incorporating planar information about the objects of interest.
In this paper, we propose a real-time embedded video target tracking algorithm for use with real-world airborne video. The proposed system is designed to detect and track multiple targets from a moving camera in complicated motion scenarios such as occlusion, closely spaced targets passing in opposite directions, move-stop-move, etc. In our previous work, we developed a robust motion-based detection and tracking system, which achieved real-time performance on a desktop computer. In this paper, we extend our work to real-time implementation on a Texas Instruments OMAP 3730 ARM + DSP embedded processor by replacing the previous sequential motion estimation and tracking processes with a parallel implementation. To achieve real-time performance on the heterogeneous-core ARM + DSP OMAP platform, the C64x+ DSP core is utilized as a motion estimation preprocessing unit for target detection. Following the DSP-based motion estimation step, the descriptors of potential targets are passed to the general-purpose ARM Cortex A8 for further processing. Simultaneously, the DSP begins preprocessing the next frame. By maximizing the parallel computational capability of the DSP, and operating the DSP and ARM asynchronously, we reduce the average processing time for each video frame by up to 60% as compared to an ARM-only approach.
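The shape of this asynchronous two-stage hand-off can be emulated on a host with two threads and a depth-one queue, the first thread standing in for the C64x+ DSP preprocessing and the second for the ARM tracking stage; a sketch, not the embedded implementation itself:

```python
import queue
import threading

descriptors_q = queue.Queue(maxsize=1)   # depth-one hand-off between the stages

def dsp_stage(frames, motion_estimate):
    """'DSP': motion-estimation preprocessing, one frame ahead of the tracker."""
    for frame in frames:
        descriptors_q.put(motion_estimate(frame))
    descriptors_q.put(None)              # end-of-stream marker

def arm_stage(update_tracks):
    """'ARM': consumes descriptors and updates tracks while the DSP runs ahead."""
    while (desc := descriptors_q.get()) is not None:
        update_tracks(desc)

def run_pipeline(frames, motion_estimate, update_tracks):
    t = threading.Thread(target=dsp_stage, args=(frames, motion_estimate))
    t.start()
    arm_stage(update_tracks)
    t.join()
```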
Visual surveillance systems provide real-time monitoring of events or the environment. The availability of low-cost sensors and processors has increased the number of possible applications of such systems. However, designing an optimized visual surveillance system for a given application is challenging and often becomes a unique design task for each system. Moreover, choosing components for a given surveillance application from the wide spectrum of available alternatives is not easy. In this paper, we propose to use a general surveillance taxonomy as a base for structuring the analysis and development of surveillance systems. We demonstrate the proposed taxonomy by designing a volumetric surveillance system for monitoring the movement of eagles in wind parks, aiming to avoid their collision with wind turbines. The analysis of the problem is performed based on the taxonomy, and behavioral and implementation models are identified to formulate the solution space for the problem. Moreover, we show that there is a need for generalized volumetric optimization methods for camera deployment.