The Belgian Air Force successfully carried out flight trials of the latest Low Light CCD focal plane technology in December 2003. Simultaneous imaging of the ground was performed by conventional CCD, Infrared Linescan, and Low Light CCD reconnaissance sensors, provided and integrated by Thales within the Modular Reconnaissance Pod (MRP). This paper reports on the results and compares the capabilities of the technologies.
A second-generation passive millimeter-wave imaging system is being prepared for flight testing on a UH-1H “Huey” helicopter platform. Passive millimeter-wave sensors form images through collection of blackbody emissions in the millimeter-wave portion of the electromagnetic spectrum. Radiation at this wavelength is not significantly scattered or absorbed by fog, clouds, smoke, or fine dust, which may blind other electro-optic sensors. Additionally, millimeter-wave imagery depends on a phenomenology based on reflection rather than emission, which produces a high level of contrast for metal targets. The system to be flight tested operates in the W-band region of the spectrum at a 30 Hz frame rate. The field of view of the system is 20 x 30 degrees and the system temperature resolution is less than 3 degrees. The system uses a pupil-plane phased-array architecture that allows the large-aperture system to retain a compact form factor appropriate for airborne applications. The flight test is to include demonstrations of navigation with the system in a look-forward mode, targeting and reconnaissance with the system in a look-down mode at 45 degrees, and landing aid with the system looking straight down. Potential targets include military and non-military vehicles, roads, rivers, other landmarks, and terrain features. The flight test is scheduled to be completed in April 2004 with images available soon thereafter.
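For context only (this relation is not taken from the paper), the temperature resolution of an ideal total-power radiometer is commonly related to the system noise temperature, predetection bandwidth, and per-pixel integration time by:

```latex
\Delta T_{\min} \;\approx\; \frac{T_{\mathrm{sys}}}{\sqrt{B\,\tau}}
```

At a 30 Hz frame rate the integration time available per frame is at most about 33 ms, which illustrates why bandwidth and the number of simultaneously integrating receivers drive the sensitivity achievable by video-rate passive imagers.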
The U.S. Navy’s SHAred Reconnaissance Pod (SHARP) employs the Recon/Optical, Inc. (ROI) CA-279 dual spectral band (visible/IR) digital cameras operating from an F-18E/F aircraft to perform low-to-high altitude reconnaissance missions. SHARP has proven itself combat worthy, with a rapid transition from development to operational deployment culminating in a highly reliable and effective reconnaissance capability for joint forces operating in Operation Iraqi Freedom (OIF). The U.S. Navy’s intelligence, surveillance and reconnaissance (ISR) roadmap transforms the SHARP system from being solely an independent reconnaissance sensor to a node in the growing Joint ISR network. ROI and the U.S. Navy have combined their resources to ensure the system’s transformation continues to follow the ISR road map. Pre-planned product improvements (P3I) for the CA-270 camera systems will lead the way in that transformation.
Geolocation error in aerial imagery can arise from many sources. This paper catalogs the major sources and shows how residual error may be reduced still further through the use of ground control points.
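As a hedged illustration of how ground control points reduce residual geolocation error (a generic approach, not necessarily the specific method cataloged in the paper), an affine correction can be fit by least squares between image-derived ground coordinates and surveyed GCP coordinates, with the post-fit residual indicating the error that remains:

```python
import numpy as np

def fit_gcp_affine(img_xy, gcp_xy):
    """Fit an affine correction mapping image-derived ground coordinates to
    surveyed ground control point (GCP) coordinates by least squares.
    img_xy, gcp_xy: (N, 2) arrays of corresponding (x, y) positions.
    Returns the 2x3 affine matrix and the RMS residual after correction."""
    n = img_xy.shape[0]
    A = np.hstack([img_xy, np.ones((n, 1))])        # design matrix [x, y, 1]
    M, _, _, _ = np.linalg.lstsq(A, gcp_xy, rcond=None)
    residuals = gcp_xy - A @ M
    rms = float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))
    return M.T, rms

# Synthetic example: a constant bias plus small noise on six GCPs.
rng = np.random.default_rng(0)
truth = rng.uniform(0, 1000, size=(6, 2))
measured = truth + np.array([4.0, -2.5]) + rng.normal(0, 0.5, size=(6, 2))
affine, rms = fit_gcp_affine(measured, truth)
print(f"RMS residual after GCP correction: {rms:.2f} m")
```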
SpecTIR Corporation has constructed a second copy of their HyperSpecTIR (HST) instrument, with modifications made to various mechanical, electrical, and optical systems. The first instrument (HST1) has been operating for several years aboard multiple platforms, and a sizable archive of imagery has been generated. Using this archive as a baseline, HST2 data have been evaluated to compare expected performance gains against actual gains. The basic instrument specifications remain unchanged: 227 unique spectral channels from 450-2450 nm with 8-12 nm FWHM, 1 milliradian IFOV, 256-element cross-track scanning, up to 14-bit digitization, and beam steering optics for image stabilization. Notable changes in HST2 include AR coating of the SWIR FPA, miniaturization of the electronics, and integration of control and data processing computers within the sensor so that it may be used in a pod or UAV. Sufficiently clear data over a single study area do not exist, so data from the spectrally similar areas of Cuprite and Goldfield, Nevada, are used to compare the performance of the two instruments. While AR coating of the SWIR focal plane and other improvements to HST2 have improved signal-to-noise performance, these gains are traded for a shorter integration time, allowing faster collection of a greater volume of data. An attempt to objectively measure spectral image data quality, using spectral similarity values and determining the inherent dimensionality of the data, reveals similar spectral performance of the two instruments under present operational modes.
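The spectral-similarity comparison described above can be pictured with the widely used spectral angle measure; the snippet below is a generic sketch and is not claimed to be the specific similarity metric used in the evaluation:

```python
import numpy as np

def spectral_angle(s1, s2):
    """Spectral angle (radians) between two pixel spectra; smaller values
    indicate greater similarity. A common way to compare spectra collected
    by two different instruments over the same material."""
    s1 = np.asarray(s1, dtype=float)
    s2 = np.asarray(s2, dtype=float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Example: two nearly proportional 227-channel spectra give a small angle.
a = np.linspace(0.1, 1.0, 227)
print(spectral_angle(a, 1.05 * a + 0.01))
```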
This paper discusses conceptual ideas and simple experiments with prototypes of an aerial tethered robot carried by a hovering platform with a long cable. The robot includes a gravity stabilized sensing head and can host a cluster of robotic agents which are deployed very near to the ground target without exposing the host platform to risk.
This paper describes a framework for image processing and sensor management for an autonomous unmanned airborne surveillance system equipped with infrared and video sensors. Our working hypothesis is that integration of the detection-tracking-classification chain with spatial awareness makes possible intelligent autonomous data acquisition by means of active sensor control. A central part of the framework is a surveillance scene representation, suitable for target tracking, geolocation, and sensor data fusion involving multiple platforms. The representation, based on Simultaneous Localization and Mapping (SLAM), takes into account uncertainties associated with sensor data, platform navigation, and prior knowledge. A client/server approach for on-line adaptable surveillance missions is introduced. The presented system is designed to simultaneously and autonomously perform the following tasks: provide wide-area coverage from multiple viewpoints by means of a step-stare procedure, detect and track multiple stationary and moving ground targets, perform a detailed analysis of detected regions of interest, and generate precise target coordinates by means of multi-view geolocation techniques.
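As a simple illustration of the multi-view geolocation idea (a generic least-squares construction, not the framework's actual estimator, which also propagates the SLAM uncertainties), a target position can be estimated by intersecting viewing rays observed from several platform positions:

```python
import numpy as np

def geolocate_from_rays(origins, directions):
    """Least-squares intersection of viewing rays: each observation is a
    sensor position and a pointing vector toward the target, and the estimate
    minimizes the summed squared perpendicular distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)      # projector orthogonal to the ray
        A += P
        b += P @ np.asarray(o, dtype=float)
    return np.linalg.solve(A, b)

# Two views of a target near (100, 200, 0) from platforms at 1000 m altitude.
origins = [np.array([0.0, 0.0, 1000.0]), np.array([500.0, 0.0, 1000.0])]
target = np.array([100.0, 200.0, 0.0])
directions = [target - o for o in origins]
print(geolocate_from_rays(origins, directions))   # approximately [100, 200, 0]
```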
Rugged, reliable, and secure data storage is of major concern to engineers designing Airborne Intelligence, Surveillance, and Reconnaissance (ISR) systems. Operation under harsh conditions of shock, vibration, and high altitude across extreme temperature ranges is an essential requirement. Due to its rotating mechanism, a mechanical disk cannot provide top-level data reliability in such an environment. Other Commercial Off-The-Shelf (COTS) solutions can replace mechanical disks. This paper describes some of these alternatives and highlights their cost-effectiveness. One alternative is a ruggedized mechanical disk: a mechanical disk sealed in a rigid cartridge. Another alternative is the solid-state flash disk, which, like the mechanical disk, retains data when power is off and serves as a drop-in replacement for the mechanical disk with the reliability needed in harsh environments. Flash disk manufacturers incorporate special methods to enhance endurance and reliability for military designers.
This paper also describes the mandatory requirements of the DoD, NSA, and the US Air Force for erasing confidential data in storage devices. This issue is vital, especially in light of the forced landing of a US Navy P-3 on Chinese soil. Some flash disk vendors meet security requirements with Fast Secure Erase and Sanitize features, enabling data declassification in seconds, even without power.
Sandia’s fielded and experimental SAR systems are well known for their real-time, high-resolution imagery. Previous designs, such as the Lynx radar, have been successfully demonstrated on medium-payload UAVs, including Predator and Fire Scout. However, fielding a high-performance SAR sensor on even smaller (sub-50-pound payload) UAVs will require at least a 5x reduction in size, weight, and cost. This paper gives an overview of Sandia’s system concept and roadmap for near-term SAR miniaturization. Specifically, the “miniSAR” program, which plans to demonstrate a 25-pound system with 4-inch resolution in early 2005, is detailed. Accordingly, the conceptual approach, current status, design tradeoffs, and key facilitating technologies are reviewed. Lastly, future enhancements and directions are described, such as the follow-on demonstration of a sub-20-pound version with multi-mode (SAR/GMTI) capability.
A series of low-cost, lightweight, mission-adaptable multispectral imaging spectrometers has been developed by PAR Government Systems Corporation (PGSC), utilizing mass-produced commercial off-the-shelf (COTS) components. The developed MANTIS sensors have been used to collect continuous multispectral data for mine countermeasures (MCM) and intelligence, surveillance, and reconnaissance (ISR) applications aboard low-cost manned aircraft platforms. Each MANTIS system images four spectral bands simultaneously. The four user-selectable spectral filters are inserted into an easily accessible filter cartridge supporting pre-flight filter selection. Data acquisition is accomplished by COTS frame grabbers installed in a Pentium-based personal computer, and all digitized data are written in real time to a redundant array of independent disks (RAID). PGSC has also developed a graphical user interface providing control, display, and recording options. The MANTIS approach and simple design lend themselves to low-cost modifications and improvements.
High-resolution surveillance imaging with apertures greater than a few inches over horizontal or slant paths at optical or infrared wavelengths will typically be limited by atmospheric aberrations. With static targets and static platforms, we have previously demonstrated near-diffraction limited imaging of various targets including personnel and vehicles over horizontal and slant paths ranging from less than a kilometer to many tens of kilometers using adaptations to bispectral speckle imaging techniques. Nominally, these image processing methods require the target to be static with respect to its background during the data acquisition since multiple frames are required. To obtain a sufficient number of frames and also to allow the atmosphere to decorrelate between frames, data acquisition times on the order of one second are needed. Modifications to the original imaging algorithm will be needed to deal with situations where there is relative target to background motion. In this paper, we present an extension of these imaging techniques to accommodate mobile platforms and moving targets.
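A hedged sketch of the multi-frame principle behind such speckle processing follows; it shows only Labeyrie-style averaging of the Fourier modulus and is a simplification of the bispectral technique cited above, which additionally recovers the Fourier phase:

```python
import numpy as np

def average_power_spectrum(frames):
    """Average the squared Fourier modulus over many short-exposure frames so
    that the atmospheric speckle transfer function can later be divided out.
    frames: array of shape (num_frames, H, W)."""
    frames = np.asarray(frames, dtype=float)
    spectra = np.abs(np.fft.fft2(frames, axes=(-2, -1))) ** 2
    return spectra.mean(axis=0)

# With about 1 s of data at video rate, a few tens of frames are averaged.
frames = np.random.rand(30, 128, 128)
print(average_power_spectrum(frames).shape)       # (128, 128)
```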
For many years the demand to record both instrumentation and reconnaissance data was satisfied by high-end ruggedized digital tape recorders, notably the Ampex DCRsi. In recent years other technologies, such as solid state and disk, have entered the market. These technologies overcome the sequential-access limitation of tape (albeit at a significantly higher data storage cost), which can be a benefit depending on the application and implementation. This paper describes the key differences between instrumentation and reconnaissance (imagery) recording and shows:
• that instrumentation recording is inherently a sequential process itself;
• that current disk and solid-state recorders are still limited by what is in effect a sequential interface;
• that imagery recording could benefit substantially from random access, but only after enhancing the interface; and
• that an image logger (herein defined) provides a superior method for recording imagery.
The NRL Optical Sciences Division has initiated a multi-year effort to develop and demonstrate an airborne net-centric suite of multi-intelligence (multi-INT) sensors and exploitation systems for real-time target detection and targeting product dissemination. The goal of this Net-centric Multi-Intelligence Fusion Targeting Initiative (NCMIFTI) is to develop an airborne real-time intelligence gathering and targeting system that can be used to detect concealed, camouflaged, and mobile targets. The multi-INT sensor suite will include high-resolution visible/infrared (EO/IR) dual-band cameras, hyperspectral imaging (HSI) sensors in the visible-to-near infrared, short-wave and long-wave infrared (VNIR/SWIR/LWIR) bands, Synthetic Aperture Radar (SAR), electronics intelligence sensors (ELINT), and off-board networked sensors. Other sensors are also being considered for inclusion in the suite to address unique target detection needs. Integrating a suite of multi-INT sensors on a single platform should optimize real-time fusion of the on-board sensor streams, thereby improving the detection probability and reducing the false alarms that occur in reconnaissance systems that use single-sensor types on separate platforms, or that use independent target detection algorithms on multiple sensors. In addition to the integration and fusion of the multi-INT sensors, the effort is establishing an open-systems net-centric architecture that will provide a modular “plug and play” capability for additional sensors and system components and provide distributed connectivity to multiple sites for remote system control and exploitation.
As the number of MSI/HSI data producers increases and the exploitation of this imagery matures, more users will request MSI/HSI data and products derived from it. This paper presents client-server architecture concepts for the storage, processing, and delivery of MSI/HSI data and derived products. A key component of this concept is the JPEG 2000 compression standard. JPEG 2000 is the first compression standard that is capable of preserving radiometric accuracy when compressing MSI/HSI data. JPEG 2000 enables client-server delivery of large data sets in which a client may select spatial and spectral regions of interest at a desired resolution and quality to facilitate rapid viewing of data. Using these attributes of JPEG 2000, we present concepts that facilitate thin-client, server-side processing as well as traditional thick-client processing of MSI/HSI data.
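To make the region-of-interest idea concrete, the sketch below shows a hypothetical thin-client request for a spatial/spectral chip at reduced resolution; the field names are illustrative assumptions and are not part of the JPEG 2000 standard or any particular server API:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ChipRequest:
    """Hypothetical request a thin client might send to a JPEG 2000 server:
    a spatial window, selected spectral components, a resolution level (each
    level halves the image dimensions), and the number of quality layers to
    decode for progressive refinement."""
    x0: int
    y0: int
    width: int
    height: int
    bands: List[int] = field(default_factory=list)   # spectral components
    resolution_level: int = 0                         # 0 = full resolution
    quality_layers: int = 1                           # progressive quality

req = ChipRequest(x0=2048, y0=1024, width=512, height=512,
                  bands=[12, 45, 97], resolution_level=2, quality_layers=3)
print(req)
```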
“Pan” or single-broadband sharpening of multispectral (low spatial resolution) imagery is currently deployed on airborne and satellite systems. The challenges of spatially sharpening hyperspectral imagery are the focus of the current study, which combines high-spatial-resolution, geo-referenced multispectral imagery from the QuickBird satellite with low-spatial-resolution AVIRIS hyperspectral imagery. Performance analysis of a spectral normalization method known as CN Spectral Sharpening (CNSS) enables correction for the mismatch in spectral radiance levels of the two input images due to differences in sensor platform altitude, date of imaging, atmospheric path, and solar irradiance conditions. The BAE Systems Spectral Similarity Scale is utilized to optimize the spectral match between the unsharpened input and the sharpened output. Performance evaluation includes comparison of the histogram spectral means and standard deviations of selected regions of interest, combined with computing the spectral correlation difference matrix between the unsharpened and sharpened AVIRIS data. Significant similarity is demonstrated with high spectral correlation, yet a large variance change between the green and red MSI channels results in a discontinuity region in the corresponding HSI bands. Future systems incorporating collocated high-spatial-resolution MSI with lower-resolution HSI will enable automated spatial sharpening with improved spectral accuracy.
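For orientation, the following generic ratio-based sharpening step illustrates the family of techniques the study evaluates; it is a simplified sketch and is not the published CNSS algorithm:

```python
import numpy as np

def ratio_sharpen(hsi_up, pan, eps=1e-6):
    """Generic ratio-based spatial sharpening. hsi_up: (bands, H, W)
    hyperspectral cube already resampled to the sharpening-band grid;
    pan: (H, W) high-resolution broadband image. Each band is scaled by the
    ratio of the high-resolution broadband to a broadband synthesized from
    the cube, injecting spatial detail while preserving band-to-band shape."""
    synth_pan = hsi_up.mean(axis=0)            # synthesized low-detail broadband
    gain = pan / (synth_pan + eps)             # spatial detail ratio
    return hsi_up * gain[None, :, :]

# Toy example: 4-band cube sharpened by a 16x16 broadband image.
cube = np.random.rand(4, 16, 16)
pan = np.random.rand(16, 16)
print(ratio_sharpen(cube, pan).shape)          # (4, 16, 16)
```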
This paper presents the constructs for a transformational paradigm within a standards-based architectural framework, which enables extremely quick and accurate visualization of large imagery sets directly from airborne intelligence and surveillance collection assets. The architecture we present handles the dissemination and “on-demand” visualization of JPEG2000 encoded geospatial imagery while providing dramatic improvements in reconnaissance and surveillance operations where low-latency access and time-critical visualization of targets are of substantial importance. This innovative framework, known as the “advanced wavelet architecture” (AWA), has been developed using open standards and nonproprietary formats, within the Commercial and Government Systems Division of Eastman Kodak Company. Numerous software and hardware applications have been developed as a result of the AWA research and development activities.
Significant progress toward the development of a video annotation capability is presented in this paper. Research and development of an object tracking algorithm applicable to UAV video is described; object tracking is necessary for attaching the annotations to the objects of interest. A methodology and format are defined for encoding video annotations using the SMPTE Key-Length-Value (KLV) encoding standard. This provides the following benefits: non-destructive annotation, compliance with existing standards, video playback in systems that are not annotation-enabled, and support for a real-time implementation. A model real-time video annotation system is also presented, at a high level, using the MPEG-2 Transport Stream as the transmission medium. This work was accomplished to meet the Department of Defense’s (DoD’s) need for a video annotation capability. Current practice for creating annotated products is to capture a still image frame, annotate it using an electronic light table application, and then pass the annotated image on as a product. That is not adequate for reporting or downstream cueing: it is too slow and there is a severe loss of information. This paper describes a capability for annotating directly on the video.
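The KLV packaging itself is straightforward; the sketch below packs a generic Key-Length-Value triplet with the BER length field used by SMPTE KLV encoding. The 16-byte key shown is a placeholder, not a registered SMPTE universal label, and the payload format here is purely illustrative:

```python
def ber_length(n: int) -> bytes:
    """BER length field: short form for lengths below 128, long form
    (0x80 | number of length bytes, then the big-endian length) otherwise."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def klv(key: bytes, value: bytes) -> bytes:
    """Pack one Key-Length-Value triplet: 16-byte universal label key,
    BER-encoded length, then the value bytes."""
    assert len(key) == 16, "SMPTE universal label keys are 16 bytes"
    return key + ber_length(len(value)) + value

placeholder_key = bytes(range(16))              # illustrative key only
packet = klv(placeholder_key, b"annotation: vehicle, track id 7")
print(packet.hex())
```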
The Exploitation Multi I/O Modular Architecture (EMMA), a multi-I/O, scalable, and modular image exploitation system deployed in operations with the Royal Air Force, is reviewed in its role as the Imagery Intelligence Ground Station supporting the Reconnaissance Airborne Pod for Tornado (RAPTOR). The challenges faced by the system during operational deployment are discussed, along with identified challenges for future enhanced and derivative systems.
The primary objective of this effort is to develop a low-cost, self-powered, and compact laser event recorder and warning sensor for the measurement of laser events. The target requirements are to measure the wavelength, irradiance, pulse length, pulse repetition frequency, duration and scenery image for each event and save the information in a time and location stamped downloadable file. The sensor design is based on a diffraction grating, low-cost optics, CCD array technology, photodiodes, integral global positioning sensor, and signal processing electronics. The sensor has applications in laser safety, video surveillance and pattern recognition.
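The wavelength measurement can be pictured with the standard grating equation; the geometry below (focal length, pixel pitch, incidence angle, groove density) is an assumed illustration and not taken from the sensor design in the paper:

```python
import math

def wavelength_from_grating(pixel, pixel_pitch_mm, focal_len_mm,
                            grooves_per_mm, incidence_deg, order=1):
    """Estimate wavelength (nm) from the illuminated CCD pixel index using the
    grating equation m*lambda = d*(sin(theta_i) + sin(theta_m)), where the
    diffracted angle follows from the pixel offset and the focusing optic."""
    d_nm = 1e6 / grooves_per_mm                       # groove spacing in nm
    theta_m = math.atan2(pixel * pixel_pitch_mm, focal_len_mm)
    theta_i = math.radians(incidence_deg)
    return d_nm * (math.sin(theta_i) + math.sin(theta_m)) / order

# Assumed geometry: 600 gr/mm grating, 50 mm optic, 7.4 um pixels, 20 deg incidence.
print(round(wavelength_from_grating(150, 0.0074, 50.0, 600, 20.0), 1))
```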
The Infrared Eye project was developed at DRDC Valcartier to improve the efficiency of airborne search and rescue operations. A high-performance opto-mechanical pointing system was developed to allow fast positioning of a narrow field of view with high resolution, used for search and detection, over a wide field of view of lower resolution that optimizes area coverage. This system also enables the use of a step-stare technique, which rapidly builds a large area-coverage image mosaic by step-staring a narrow-field camera and properly tiling the resulting images. The resulting image mosaic covers the wide field of the current Infrared Eye, but with the high resolution of the narrow field. For the desired application, the camera will be fixed to an airborne platform using a stabilized mount, and image positioning in the mosaic will be calculated using flight data provided by an altimeter, a GPS, and an inertial unit. This paper presents a model of the complete system, a dynamic step-stare strategy that generates the image mosaic, a flight image-taking simulator for strategy testing, and some results obtained with this simulator.
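As a simplified illustration of placing each narrow-field tile in the mosaic from flight data (flat-earth geometry with no lens distortion or terrain relief, which is a simplification of the model presented in the paper):

```python
import math

def tile_center_on_ground(north_m, east_m, altitude_m, pan_deg, tilt_deg):
    """Project the narrow-field camera boresight onto flat ground to place a
    step-stare tile. Platform position is given in a local north/east frame,
    altitude is above ground, pan is measured from north, and tilt from nadir."""
    r = altitude_m * math.tan(math.radians(tilt_deg))   # horizontal offset
    north = north_m + r * math.cos(math.radians(pan_deg))
    east = east_m + r * math.sin(math.radians(pan_deg))
    return north, east

# Platform at the local origin, 1000 m above ground, boresight 10 deg off nadir,
# panned 45 deg from north.
print(tile_center_on_ground(0.0, 0.0, 1000.0, 45.0, 10.0))
```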