Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620901 (2006) https://doi.org/10.1117/12.659601
In this paper, we introduce Genex's innovative multiple target tracking system: the SmartMTI algorithm and our miniature DSP/FPGA data processing hardware. SmartMTI is designed for intelligent surveillance on moving platforms such as unmanned aerial vehicles (UAVs), unmanned ground vehicles (UGVs), and manned moving platforms. It uses our state-machine MTI framework to seamlessly integrate our state-of-the-art motion detection and target tracking methods, creating multiple-target following and inter-object 'awareness' that allows the system to robustly handle difficult situations such as target merging, occlusion, and disappearance. Preliminary tests show that, once implemented on our miniaturized DSP/FPGA hardware, the system can detect and track multiple targets in real time with an extremely low missed-detection rate. The SmartMTI design effort leverages Genex's expertise and experience in real-time surveillance system design for the Army AMCOM SCORPION ("Glide Bomb") program, the NUWC CERBERUS program, the BMDO missile seeker program, the Air Force UAV auto-navigation and surveillance program, and the DARPA Future Combat System (FCS) program.
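The abstract does not detail the state-machine framework, but the idea of per-track states for merging, occlusion, and disappearance can be illustrated with a minimal sketch (all state names, thresholds, and transition rules below are assumptions, not the actual SmartMTI design):

```python
# Minimal sketch of a per-track state machine for multi-target tracking.
# State names and transition rules are illustrative assumptions, not the
# actual SmartMTI design.
from enum import Enum, auto

class TrackState(Enum):
    ACTIVE = auto()      # target detected and tracked normally
    MERGED = auto()      # target overlaps another track
    OCCLUDED = auto()    # target temporarily hidden; coast on prediction
    LOST = auto()        # target missing too long; candidate for deletion

class Track:
    MAX_MISSES = 10  # frames without a detection before a track is dropped

    def __init__(self, track_id, bbox):
        self.id = track_id
        self.bbox = bbox
        self.state = TrackState.ACTIVE
        self.misses = 0

    def update(self, detection, overlaps_other_track):
        """Advance the state machine for one frame."""
        if detection is not None:
            self.bbox = detection
            self.misses = 0
            self.state = (TrackState.MERGED if overlaps_other_track
                          else TrackState.ACTIVE)
        else:
            self.misses += 1
            self.state = (TrackState.LOST if self.misses > self.MAX_MISSES
                          else TrackState.OCCLUDED)
```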
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620902 (2006) https://doi.org/10.1117/12.665122
This paper presents a method for localizing noise-corrupted areas in quality-degraded video frames, and for
reducing the additive noise by utilizing the temporal redundancy in the video sequence. In the proposed algorithm,
the local variance of each pixel is computed to obtain the spatial distribution of noise. After adaptive
thresholding, region clustering, and merging, the corrupted areas of highest energy are detected. Due to the high
temporal redundancy in the video sequence, the corrupted information can be compensated by overlapping the
corrupted regions with the appropriate regions from adjacent video frames. The corresponding pixel locations
in the adjacent frames are computed by using image registration and warping techniques. New pixel values
are calculated based upon multi-frame stacking. Pixel values in the adjacent frames are weighted according to
registration errors, whereas the values in the noisy frame are evaluated according to local variance. Knowing
the location of the local noise enables the denoising process to be much more specific and accurate. Moreover,
since only a portion of the frame is processed, as compared to standard denoising methods that operate on the
entire frame, the details and features in other areas of the frame are preserved. The proposed scheme is applied
to UAV video sequences, where the outstanding noise localization and reduction properties are demonstrated.
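A minimal sketch of the two core steps described above, local-variance noise localization and registration-weighted multi-frame stacking, might look like the following (the window size and weighting functions are illustrative assumptions, not the paper's exact parameters):

```python
# Sketch: local-variance noise localization plus registration-weighted
# multi-frame stacking. Window size and weighting choices are assumed.
import numpy as np
from scipy.ndimage import uniform_filter

def local_variance(frame, size=5):
    """Per-pixel variance over a size x size neighbourhood."""
    f = frame.astype(np.float64)
    mean = uniform_filter(f, size)
    mean_sq = uniform_filter(f ** 2, size)
    return mean_sq - mean ** 2

def denoise_region(noisy, registered_neighbors, reg_errors, mask):
    """Replace pixels inside `mask` with a weighted stack of the noisy
    frame and its registered neighbours. Neighbour weights fall off with
    registration error; the noisy frame's weight falls off with local
    variance, as the abstract describes."""
    var = local_variance(noisy)
    w_self = 1.0 / (1.0 + var)            # distrust high-variance pixels
    out = noisy.astype(np.float64)
    num = w_self * out
    den = w_self.copy()
    for nb, err in zip(registered_neighbors, reg_errors):
        w = np.full_like(w_self, 1.0 / (1.0 + err))  # distrust bad registration
        num += w * nb
        den += w
    out[mask] = (num / den)[mask]
    return out
```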
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620903 (2006) https://doi.org/10.1117/12.665197
The motion imagery community would benefit from the availability of standard measures for assessing image interpretability. The National Imagery Interpretability Rating Scale (NIIRS) has served as a community standard for still imagery, but no comparable scale exists for motion imagery. Several considerations unique to motion imagery indicate that the standard methodology employed in the past for NIIRS development may not be applicable or, at a minimum, requires modification. The dynamic nature of motion imagery introduces factors that do not affect the perceived interpretability of still imagery, namely target motion and camera motion. A set of studies sponsored by the National Geospatial-Intelligence Agency (NGA) has been conducted to understand and quantify the effects of these critical factors. This study discusses the development and validation of a proposed methodology for a NIIRS-like scale for motion imagery. The methodology adapts the standard NIIRS development procedures to the softcopy exploitation environment and focuses on image interpretation tasks that target the dynamic nature of motion imagery. This paper describes the proposed methodology, presents the findings from a methodology assessment evaluation, and offers recommendations for the full development of a scale for motion imagery.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620905 (2006) https://doi.org/10.1117/12.665310
This paper describes experimental results from a live-fire data collect designed to demonstrate the ability of IR and acoustic sensing systems to detect and map high-volume gunfire events from tactical UAVs. The data collect supports an exploratory study of the FightSight concept, in which an autonomous UAV-based sensor exploitation and decision support capability is proposed to provide dynamic situational awareness for large-scale battalion-level firefights in cluttered urban environments. FightSight integrates IR imagery, acoustic data, and 3D scene context data with prior time information in a multi-level, multi-step probabilistic fusion process to reliably locate and map the array of urban firing events, firepower movements, and trends associated with the evolving urban battlefield situation. Described here are sensor results from live-fire experiments involving the simultaneous firing of multiple subsonic and supersonic weapons (two AK-47s, two M16s, one Beretta, one mortar, one rocket) with high optical and acoustic clutter at ranges up to 400 m. Sensor-shooter-target configurations and clutter were designed to simulate UAV sensing conditions for a high-intensity firefight in an urban environment. The sensor systems evaluated were an IR bullet tracking system by Lawrence Livermore National Laboratory (LLNL) and an acoustic gunshot detection system by Planning Systems, Inc. (PSI). The results convincingly demonstrate the ability of the LLNL and PSI sensor systems to accurately detect, separate, and localize multiple shooters and the associated shot directions during a high-intensity firefight (77 rounds in 5 s) in a high acoustic and optical clutter environment with very few false alarms. Preliminary fusion processing was also examined, demonstrating an ability to distinguish co-located shooters (shooter density), estimate range to <0.5 m accuracy at 400 m, and identify weapon type. The combined results of the high-intensity firefight data collect and a detailed systems study demonstrate the readiness of the FightSight concept for full system development and integration.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620906 (2006) https://doi.org/10.1117/12.665797
This paper discusses data storage requirements for data acquisition systems, and evaluates the ability of three of the most popular COTS data storage solutions - mechanical disk, ruggedized mechanical disk, and solid-state flash disk - to meet these requirements today and in the future. It addresses issues of capacity, data reliability, endurance, form factor, cost, and security features. It concludes with a discussion of trends toward high-speed serial interfaces in data acquisition systems, and the challenges they pose for COTS storage solutions.
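For context, capacity and endurance sizing for a recorder of this kind reduces to simple arithmetic; the sketch below uses entirely hypothetical figures, not values from the paper:

```python
# Back-of-envelope storage sizing for a data acquisition recorder.
# All figures are hypothetical examples, not values from the paper.
sensor_rate_MBps = 240      # sustained sensor data rate, MB/s (assumed)
mission_hours = 4.0         # recording time per mission (assumed)

capacity_GB = sensor_rate_MBps * mission_hours * 3600 / 1024
print(f"Required capacity per mission: {capacity_GB:.0f} GB")

# Flash endurance check: missions until the rated write limit is reached.
drive_capacity_GB = 4096    # assumed drive size
rated_write_cycles = 3000   # assumed program/erase cycle rating
missions = drive_capacity_GB * rated_write_cycles / capacity_GB
print(f"Missions before wear-out: {missions:.0f}")
```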
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620907 (2006) https://doi.org/10.1117/12.665880
This paper presents a system for creating a mosaic image from a sequence of images with moving objects present in the scene. The system first applies SIFT-based image registration to the entire image to obtain an initial global projection matrix. After image segmentation, the global motion model is applied to each region for evaluation. The transformation matrix is refined for the best projection in each region, and a more precise global transformation matrix is calculated from the local projections on the majority of coherent regions. As a consequence, the method is robust to disturbances of the projection model induced by moving objects and motion parallax. In the image blending stage, pixels in coherent regions are weighted by their distances from the overlapping edges to achieve a seamless panorama, while heterogeneous regions are cut and pasted to avoid ghosting or blurring. The most recent information regarding the location, shape, and size of the moving foreground objects is therefore reflected in the panorama. Constructed mosaics are presented to demonstrate the performance and robustness of the proposed algorithm.
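The initial global registration step can be sketched with OpenCV's SIFT implementation; RANSAC naturally rejects matches that fall on moving objects, which is the property the per-region refinement then builds on. The per-region refinement and seam-aware blending stages are not reproduced here, and the ratio-test and reprojection thresholds are assumptions:

```python
# Sketch of the initial SIFT-based global registration step using
# OpenCV; the paper's per-region refinement and blending are omitted.
import cv2
import numpy as np

def global_homography(img_a, img_b):
    """Estimate the projection matrix mapping img_b onto img_a."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # Ratio-test matching of SIFT descriptors.
    matcher = cv2.BFMatcher()
    matches = matcher.knnMatch(des_b, des_a, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([kp_b[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # RANSAC rejects matches on moving objects, giving an initial
    # background-dominated global model.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```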
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620908 (2006) https://doi.org/10.1117/12.665909
Cheney and Borden [1] and Cheney and Nolan [2] have proposed that target identification may be achieved by an analysis of the microlocal structure of ISAR images. To implement their idea, a Radon transform approach was used [3]. Noise is a problem for the Radon transform, so a method more robust to noise is preferable. Candès and Donoho have investigated the use of the curvelet transform for Radon data with noise [4] and have shown it to be superior to traditional methods. In this paper, we use simulated ISAR data to investigate the ability of the curvelet transform to recognize different types of scattering elements in a low signal-to-noise environment.
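As a rough illustration, a noisy point-scatterer scene and its Radon transform can be simulated with scikit-image; the curvelet analysis itself requires a dedicated toolbox (e.g. CurveLab) and is not shown, and the scatterer layout and noise level below are arbitrary:

```python
# Sketch: simulate a noisy image with point scatterers and compute its
# Radon transform with scikit-image. Scatterer layout and noise level
# are arbitrary; the curvelet stage is omitted.
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)
img = np.zeros((128, 128))
for r, c in [(40, 40), (64, 90), (100, 60)]:   # assumed scatterer layout
    img[r, c] = 1.0

noisy = img + 0.2 * rng.standard_normal(img.shape)   # low-SNR condition

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(noisy, theta=theta, circle=False)   # each point traces a sinusoid
print(sinogram.shape)
```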
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 620909 (2006) https://doi.org/10.1117/12.666215
Goodrich's DB-110 Reconnaissance Airborne Pod for TORnado (RAPTOR) and Data Link Ground Station (DLGS) have been used operationally for several years by the Royal Air Force (RAF). A variant of the RAPTOR DB-110 sensor system is currently used by the Japan Maritime Self Defense Force (JMSDF). Recently, the DB-110 system was flown on the Predator B Unmanned Aerial Vehicle (UAV), demonstrating its utility on unmanned reconnaissance aircraft. The DB-110 provides dual-band EO and IR imaging at long, medium, and short standoff ranges, including oblique and over-flight imaging, in a single sensor package, and has proven performance in real-time, high-bandwidth data-link imagery transmission. Goodrich has leveraged this operational experience in building a 3rd-generation DB-110 system, including a new reconnaissance airborne pod and ground system, to be first used by the Polish Air Force. The 3rd-generation system retains all the capability of the current 2nd-generation DB-110 system and adds several new features, including increased resolution via new focal planes, a third ("super-wide") field of view, and new avionics. This paper summarizes the 3rd-generation DB-110 system in terms of its basic design and capabilities, and reviews the recent demonstration of the DB-110 on the Predator B UAV, including sample imagery.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090A (2006) https://doi.org/10.1117/12.666354
The benefits of remotely sensed imagery cannot be fully exploited without displaying the images in real time. This paper describes a viewer system that displays orthorectified images in several modes, provides accurate geolocation information and change or motion detection, and offers real-time situational awareness. Through such a real-time viewer, the image acquisition platform can be interactively redirected to focus on objects or locations of interest, improving overall reconnaissance efficiency.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090B (2006) https://doi.org/10.1117/12.666519
Airborne surveillance and targeting sensors are capable of generating large quantities of imagery, making it difficult for the user to find the targets of interest. Automatic target identification (ATI) can assist this process by searching for target-like objects and classifying them, thus reducing workload. ATI algorithms developed in the laboratory by QinetiQ have been implemented in real time on flight-capable ruggedised processors. A series of airborne tests has been carried out to assess the performance of the ATI under real-world conditions, using a Wescam EO/IR turret as the source of imagery. The tests included examples of military vehicles in urban and rural scenarios, with varying degrees of hide and concealment, and were conducted in different weather conditions to assess the robustness of the sensor and ATI combination. This paper discusses the tests carried out and the ATI performance achieved as a function of the test parameters. Conclusions are drawn as to the current state of ATI and its applicability to military requirements.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090C (2006) https://doi.org/10.1117/12.666727
Sandia-developed SAR systems are well known for their real-time, high-quality, high-resolution imagery. One such system, the General Atomics Lynx radar, has been successfully demonstrated on medium-payload UAVs, including the Predator and Fire Scout. Previously, Sandia reported on its system concept and roadmap for SAR miniaturization, including details of the miniSAR program. This paper and its companions provide an update on miniSAR and discuss the results of the successful May 2005 demonstration of the 26-pound, 4-inch-resolution system. Accordingly, the miniSAR system and software implementation and performance are reviewed. Additionally, future plans for miniSAR and the Sandia SAR/GMTI miniaturization efforts are discussed, such as the currently planned miniSAR demonstration onboard a small-payload UAV.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090D (2006) https://doi.org/10.1117/12.667266
Unmanned Aerial Vehicles (UAVs) are becoming a core intelligence asset for reconnaissance, surveillance, and target tracking in urban and battlefield settings. To achieve the goal of automated tracking of objects in UAV videos, we have developed a system called COCOA. It processes the video stream through a number of stages. In the first stage, platform motion compensation is performed. Moving object detection then identifies regions of interest, from which object contours are extracted by level-set-based segmentation. Finally, blob-based tracking is performed for each detected object, and global tracks are generated for higher-level processing. COCOA is customizable to different sensor resolutions and is capable of tracking targets as small as 100 pixels. It works seamlessly for both visible and thermal imaging modes. The system is implemented in Matlab and works in batch mode.
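A skeleton of the first stages of such a pipeline can be sketched with OpenCV; ORB-based motion compensation and frame differencing below stand in for COCOA's specific algorithms, and the level-set segmentation and track association stages are omitted:

```python
# Skeleton of the early pipeline stages named in the abstract. ORB-based
# motion compensation and frame differencing are stand-ins for COCOA's
# specific algorithms. Inputs are assumed to be grayscale uint8 frames.
import cv2
import numpy as np

def compensate(prev, curr):
    """Warp prev into curr's frame using a feature-based homography."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev, None)
    kp2, des2 = orb.detectAndCompute(curr, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(prev, H, prev.shape[1::-1])

def detect_movers(prev, curr, thresh=25):
    """Frame differencing after motion compensation -> candidate blobs."""
    diff = cv2.absdiff(compensate(prev, curr), curr)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```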
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090E (2006) https://doi.org/10.1117/12.667527
We analyze the challenges in current approaches to digital video surveillance solutions, both technical and financial. We propose a Cell Processor-based digital video surveillance platform to overcome those challenges and address the ever-growing needs of enterprise-class surveillance solutions supporting installations of several thousand cameras. To improve compression efficiency, we have chosen the H.264 video compression algorithm, which currently outperforms all standard video compression schemes.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090F (2006) https://doi.org/10.1117/12.667704
Video cameras have grown steadily more useful in military applications over the past four decades, a result of many technological advances and of the fact that no one portion of the spectrum reigns supreme under all environmental and operating conditions. The visible portion of the spectrum has the clear advantage of ease of interpretation, requiring little or no training, and this advantage extends into the near-IR (NIR) spectral region out to the silicon cutoff with little difficulty. Inclusion of the NIR region is particularly important due to the rich photon content of natural night illumination. The addition of color capability offers another dimension of target/situation discrimination and is therefore highly desirable. A military camera must be small, lightweight, and low power, and limiting resolution and sensitivity cannot be sacrificed to achieve color capability. Newly developed electron-multiplication CCD sensors (EMCCDs) open the door to a practical low-light/all-light color camera without an image intensifier. Ball Aerospace & Technologies Corp. (BATC) has developed a unique color camera that adds color with very small impact on low-light-level performance and negligible impact on limiting resolution. The approach, which covers the NIR portion of the spectrum along with the visible, requires no moving parts and is based on the addition of a sparse sampling color filter to the surface of an EMCCD. It renders the correct hue in a real-time, video-rate image with negligible latency, and the impact on camera size and power is slight.
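The sparse-filter idea can be illustrated with a crude reconstruction sketch: unfiltered pixels supply full-resolution luminance, and the sparse color samples are spread to nearby pixels to supply hue. The filter layout, the nearest-sample fill, and the blending below are illustrative assumptions, not BATC's method:

```python
# Crude sketch of sparse-CFA color reconstruction: most pixels are
# unfiltered (full sensitivity and resolution); sparse color samples
# are interpolated up to supply hue. Layout and blending are assumed.
import numpy as np
from scipy.ndimage import distance_transform_edt

def reconstruct(raw, cfa_mask):
    """raw: 2D sensor frame; cfa_mask: 0 = unfiltered, 1/2/3 = R/G/B site."""
    luma = raw.astype(np.float64)   # treat every pixel as luminance
                                    # (a simplification: filtered sites
                                    # are really attenuated readings)
    rgb = np.empty(raw.shape + (3,))
    for ch in (1, 2, 3):
        known = cfa_mask == ch
        # Nearest-sample fill: spread each sparse colour reading to the
        # pixels nearest to it (a crude stand-in for real demosaicing).
        _, idx = distance_transform_edt(~known, return_indices=True)
        rgb[..., ch - 1] = luma[idx[0], idx[1]]
    # Rescale so interpolated colour carries hue while the unfiltered
    # pixels carry brightness and resolution.
    brightness = rgb.mean(axis=-1) + 1e-9
    return rgb * (luma / brightness)[..., None]
```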
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090H (2006) https://doi.org/10.1117/12.668385
Raytheon's AN/ASQ-228 Advanced Targeting Forward-Looking Infrared (ATFLIR) pod features state-of-the-art mid-wave infrared targeting and navigation FLIRs, an electro-optical sensor, a laser rangefinder and target designator, and a laser spot tracker. ATFLIR is fully integrated and flight tested on all F/A-18 Hornet/Super Hornet models, is approved for full-rate production, and is forward deployed, supporting U.S. fleet operations worldwide. This paper presents ATFLIR status and a summary of future plans.
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090I (2006) https://doi.org/10.1117/12.668782
Linear features in airport images correspond to runways, taxiways, and roads. Detecting runways helps pilots focus on runway incursions in poor visibility conditions. In this work, we attempt to detect linear features from a LiDAR swath in near real time using a parallel implementation on a G5-based Apple cluster called Xseed. Data from the LiDAR swath are converted into a uniform grid using nearest-neighbor interpolation. Edges and gradient directions are computed using standard edge detection algorithms such as Canny's detector. Edge linking and straight-line feature detection are described, and preliminary results on data from the Reno, Nevada, airport are included.
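The gridding and line-extraction stages map naturally onto standard library routines; a single-node sketch (the parallel decomposition across the cluster is not shown, and all parameters are assumptions) might look like:

```python
# Single-node sketch of the gridding and line-extraction stages:
# nearest-neighbour gridding of LiDAR returns, Canny edges, then a
# probabilistic Hough transform. All parameters are assumed.
import numpy as np
from scipy.interpolate import griddata
from skimage.feature import canny
from skimage.transform import probabilistic_hough_line

def detect_lines(xyz, cell=1.0):
    """xyz: (N, 3) LiDAR returns. Grid with nearest-neighbour
    interpolation, then extract straight-line candidates."""
    x, y, z = xyz.T
    gx, gy = np.mgrid[x.min():x.max():cell, y.min():y.max():cell]
    grid = griddata((x, y), z, (gx, gy), method="nearest")

    edges = canny(grid, sigma=2.0)    # edge map with implicit gradients
    # Keep long, nearly unbroken segments - runway/taxiway-scale features.
    return probabilistic_hough_line(edges, threshold=10,
                                    line_length=100, line_gap=5)
```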
Proceedings Volume Airborne Intelligence, Surveillance, Reconnaissance (ISR) Systems and Applications III, 62090K (2006) https://doi.org/10.1117/12.673122
Automated aerial surveillance and detection of hostile ground events, and the tracking of the perpetrators have become of critical importance in the prevention and control of insurgent uprisings and the global war on terror. Yet a basic understanding of the limitations of sensor system coverage as a function of aerial platform position and attitude is often unavailable to program managers and system administrators.
In an effort to better understand this problem, we present some of the design tradeoffs for two applications: 1) a 360° viewing focal-plane array sensor system modeled for low-altitude aerostat applications, and 2) a fixed-diameter area of constant surveillance modeled for high-altitude fixed-wing aircraft applications. Ground coverage requirement tradeoffs include the number of sensors, sensor footprint geometry, footprint coverage variability as a function of platform position and attitude, and ground surface modeling. Event location specification includes latitude, longitude, and altitude for the pixel centroid and corners, and the line-of-sight range to the centroid.
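The footprint and event-location geometry reduces, in the simplest flat-earth case, to intersecting a rotated line-of-sight vector with the ground plane; the sketch below assumes an NED frame and a ZYX attitude convention, and ignores terrain and earth curvature, which real systems must model:

```python
# Flat-earth sketch of projecting a sensor line of sight to the ground
# given platform position and attitude. NED frame and ZYX (yaw-pitch-
# roll) convention are assumptions; terrain and curvature are ignored.
import numpy as np

def los_ground_point(platform_ned, yaw, pitch, roll, az, el):
    """platform_ned: (north, east, down) in metres, down = -altitude.
    az/el: line-of-sight azimuth/depression in the body frame, radians."""
    # Unit LOS vector in the body frame (x fwd, y right, z down).
    los_body = np.array([np.cos(el) * np.cos(az),
                         np.cos(el) * np.sin(az),
                         np.sin(el)])
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    # Body-to-NED rotation matrix (ZYX convention).
    R = np.array([[cy*cp, cy*sp*sr - sy*cr, cy*sp*cr + sy*sr],
                  [sy*cp, sy*sp*sr + cy*cr, sy*sp*cr - cy*sr],
                  [-sp,   cp*sr,            cp*cr          ]])
    los_ned = R @ los_body
    if los_ned[2] <= 0:
        raise ValueError("line of sight does not intersect the ground")
    t = -platform_ned[2] / los_ned[2]   # scale LOS to reach down = 0
    return platform_ned + t * los_ned   # ground intersection (NED)

# Example: aerostat at 1000 m, level attitude, looking 60 deg down ahead.
print(los_ground_point(np.array([0.0, 0.0, -1000.0]),
                       0.0, 0.0, 0.0, 0.0, np.radians(60)))
```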