This PDF file contains the front matter associated with SPIE Proceedings Volume 7307, including the Title Page, Copyright information, Table of Contents, and the Conference Committee listing.
Deployable polarimetric imaging systems often use 2×2 arrays of linear polarizers at the pixel level to measure the
polarimetric signature. This architecture is referred to as a micro-grid polarizer array (MPA). MPAs are either bonded to
or fabricated directly upon focal plane arrays. A key challenge to obtaining polarimetric measurements of sub-pixel
targets using MPAs is registering the signals from each of the independent channels. Digital Fusion Solutions, Inc. has developed a micro-optic approach to register the fields of view of 2×2 subarrays of pixels and has incorporated the device into the design of a polarimetric imager. Results of the design are presented.
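For context, the measurement each registered 2×2 super-pixel must support is the standard linear-Stokes reduction; a minimal sketch (with illustrative intensities, not data from the paper) is:

```python
import numpy as np

# Standard linear-Stokes reduction for one MPA super-pixel: intensities
# behind polarizers at 0, 45, 90 and 135 degrees (illustrative values,
# not data from the paper).
I0, I45, I90, I135 = 1.00, 0.75, 0.20, 0.45

S0 = 0.5 * (I0 + I45 + I90 + I135)    # total intensity
S1 = I0 - I90                          # 0/90 degree preference
S2 = I45 - I135                        # 45/135 degree preference

dolp = np.hypot(S1, S2) / S0           # degree of linear polarization
aop = 0.5 * np.arctan2(S2, S1)         # angle of polarization
print(f"DoLP = {dolp:.3f}, AoP = {np.degrees(aop):.1f} deg")
```

If the four channels' fields of view are not registered, a sub-pixel target contributes unequally to the four intensities and corrupts S1 and S2, which is the artifact the micro-optic device addresses.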
Ball Aerospace & Technologies Corp. has combined the results of recent advances in CMOS imaging sensor, signal
processing and embedded computing technologies to produce a new high performance military video camera. In this
paper we present the design features and performance characteristics of this new, large format camera which was
developed for use in military airborne intelligence, surveillance and reconnaissance (ISR), targeting and pilotage
applications. This camera utilizes a high sensitivity CMOS detector array with low read noise, low dark current and
large well capacity to provide high quality image data under low-light and high intra-scene dynamic range illumination
conditions. The camera pairs the sensor control electronics with an advanced digital video processing chain to maximize the quality and utility of the digital images produced by the CMOS device. Key features of this camera include rugged construction, small physical size, a wide operating temperature range, low operating power, a high frame rate, and automatic gain control for all-light-level applications. The camera also features a novel pixel decimation filter to provide custom image sizes and video output formats.
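The abstract does not disclose the decimation filter's design; a minimal block-averaging sketch, assuming a monochrome frame and an integer decimation factor, conveys the idea:

```python
import numpy as np

# Generic block-averaging decimation sketch; the camera's actual filter
# design is not disclosed in the abstract. Assumes a monochrome frame
# and an integer decimation factor k.
def decimate(frame: np.ndarray, k: int) -> np.ndarray:
    h, w = frame.shape
    h, w = h - h % k, w - w % k                        # crop to a multiple of k
    blocks = frame[:h, :w].reshape(h // k, k, w // k, k)
    return blocks.mean(axis=(1, 3))                    # average each k x k block

full = np.random.randint(0, 4096, (2048, 2048)).astype(np.float64)  # 12-bit-like
print(decimate(full, 2).shape)   # (1024, 1024): a custom output size
```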
In recent years, high performance visible and IR cameras have been used widely for tactical airborne reconnaissance.
Efficient discrimination and analysis of complex target information from active battlefields requires simultaneous multi-band measurements from airborne platforms at various altitudes. We report a new dual-band airborne camera designed for simultaneous registration of both visible and IR imagery from mid-altitude ranges.
The camera design uses a common front-end optical telescope with an entrance aperture of around 0.3 m and several relay
optical sub-systems capable of delivering both high spatial resolution visible and IR images to the detectors. The camera
design benefits from the use of several optical channels packaged in a compact space and the associated freedom to choose between wide (~3 degrees) and narrow (~1 degree) fields of view. In order to investigate both the imaging and
radiometric performances of the camera, we generated an array of target scenes with optical properties such as reflection,
refraction, scattering, transmission and emission. We then combined the target scenes and the camera optical system into
an integrated ray-tracing simulation environment utilizing Monte Carlo computation techniques. Taking realistic atmospheric radiative transfer characteristics into account, both imaging and radiometric performances were then investigated. The simulation results demonstrate that the camera design satisfies the NIIRS 7 detection criterion. The camera concept, the details of the performance simulation, and the resulting performance are discussed, together with future development plans.
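As a hedged back-of-envelope check (assumed altitude and wavelength; only the 0.3 m aperture is from the abstract), the diffraction-limited ground sample distance at nadir works out to a few centimetres, well below the few-decimetre resolutions commonly associated with NIIRS 7, leaving margin for real-world MTF and atmospheric losses:

```python
# Hedged back-of-envelope check, not the authors' simulation: only the
# 0.3 m aperture below is from the abstract; altitude and wavelength are
# assumed values for a mid-altitude visible channel.
altitude_m   = 10_000.0     # assumed platform altitude
aperture_m   = 0.3          # entrance aperture quoted above
wavelength_m = 0.55e-6      # mid-visible

theta_rad = 1.22 * wavelength_m / aperture_m   # Rayleigh diffraction limit
gsd_m = altitude_m * theta_rad                 # projected to the ground at nadir
print(f"diffraction-limited nadir GSD ~ {gsd_m * 100:.1f} cm")   # ~2.2 cm
```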
There are few choices when identifying detector materials for use in the SWIR wavelength band. We have exploited the
direct-bandgap InGaAs material system to achieve superior room-temperature (293 K) dark current. We have demonstrated sensitivity from 400 nm through 2.6 μm with this material system, providing the opportunity to sense not only the visible but also the J-band (1.25 μm), H-band (1.65 μm) and K-band (2.2 μm) windows. This paper discusses
the advantages of our hybridized CMOS-InGaAs material system versus other potential SWIR material systems.
The monolithic planar InGaAs detector array enables 100% fill factor and thus, high external quantum efficiency. We
have achieved room-temperature pixel dark current of 2.8 fA and shot noise of 110 electrons per pixel per second. Low dark current at 300 K allows uncooled packaging options, affording the system designer dramatic reductions in size, weight (cameras < 28 g), and power (< 2.5 W). Commercially available InGaAs p-i-n arrays have shown diode lifetime mean time between failures (MTBF) of 10¹¹ hours for planar InGaAs detectors [1], far exceeding telecom-grade reliability requirements. The use of a hybrid CMOS-InGaAs system allows best-of-breed materials to be used and permits efficient, cost-effective, volume integration. Moreover, we discuss how the InGaAsP material system is compatible with CMOS monolithic integration. We believe that, taken together, these advantages make InGaAs the obvious choice for all future SWIR systems.
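A simple unit-conversion sketch (assuming a 1 s integration and pure Poisson statistics, which is not necessarily the vendor's measurement protocol) puts the quoted dark current in the same order of magnitude as the quoted shot-noise figure; exact agreement would depend on the measurement conditions:

```python
# Unit-conversion check on the quoted figures; the 1 s integration time
# and pure Poisson statistics are assumptions, not the vendor's protocol.
Q_E = 1.602e-19                  # electron charge, C
dark_current_A = 2.8e-15         # 2.8 fA per pixel, quoted above
t_int_s = 1.0                    # assumed integration time

electrons = dark_current_A * t_int_s / Q_E    # ~1.7e4 e- per pixel per second
shot_noise_e = electrons ** 0.5               # Poisson: sqrt(N) e- rms (~132)
print(f"{electrons:.3g} e-/s -> dark shot noise ~ {shot_noise_e:.0f} e- rms")
```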
For unattended persistent surveillance there is a need for a system which provides the following information: target
classification, target quantity estimate, cargo presence and characterization, direction of travel, and action. Over highly bandwidth-restricted links, such as Iridium, SATCOM or HF, the data rates of common techniques are too high, even after aggressive compression, to deliver the required intelligence in a timely, low-power manner. We propose the following solution to this data-rate problem: Profile Video, a new technique that provides all of the
required information in a very low data-rate package.
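Illustrative arithmetic (all figures assumed, not from the paper) shows the scale of the mismatch: even heavily compressed low-resolution video outruns a narrowband beyond-line-of-sight channel by orders of magnitude.

```python
# Illustrative arithmetic only (all figures are assumptions, not from the
# paper): even modest compressed video overwhelms a narrowband BLOS link.
link_bps = 2400                          # e.g. an Iridium circuit-switched channel
video_bps = 640 * 480 * 8 * 15 / 50      # VGA, 8-bit, 15 fps, 50:1 compression

print(f"video: ~{video_bps / 1e6:.2f} Mbps vs link: {link_bps / 1e3:.1f} kbps")
print(f"shortfall: ~{video_bps / link_bps:.0f}x over the link capacity")
```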
Several areas of unmanned aerial vehicle (UAV) performance need to be improved for the next generation of UAVs to
be used successfully in expanded future combat roles. This paper describes the initial research to improve the
performance of UAVs through the use of pressurized structures-based (PSB) technologies. The UAV will be constructed in such a way that a considerable percentage of its weight will be supported by, or composed of, inflatable structures containing air or helium. PSB technology will reduce the amount of energy required to keep the UAV aloft, thus allowing the use of smaller, slower, and quieter motors. Using PSB technology in tandem with improving technologies in electronics, energy storage, and materials should provide a substantial increase over current UAV performance in the areas of greatest need to the military.
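A rough buoyancy sketch (standard sea-level densities; the envelope volume is illustrative, since the paper gives no sizing figures) shows the scale of the weight offset a helium-filled structure can provide:

```python
# Rough buoyancy sketch; densities are standard sea-level values and the
# envelope volume is illustrative (the paper gives no sizing figures).
rho_air_kgm3 = 1.225     # air
rho_he_kgm3 = 0.166      # helium at the same temperature/pressure
volume_m3 = 2.0          # inflatable-structure volume

net_lift_kg = (rho_air_kgm3 - rho_he_kgm3) * volume_m3   # ~1.06 kg per m^3
print(f"net buoyant lift ~ {net_lift_kg:.2f} kg")
```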
The US military has recently taken tactical steps to increase its ISR capabilities to support military operations. Due to
the dynamic nature of the terrorist threat, there is a need for a payload- and airframe-agnostic, rapid-deployment
sensor system that can be used on multiple airframes for in-theater missions and for the test and evaluation of sensors
prior to fielding. This "plug-and-play" system, based upon the Oculus Sensor Deployment System technology, uses a
system-of-systems approach to modularize the base platform, thereby allowing the system to conform to aircraft such as
the C-130, C-27, V-22, CH-47, CH-53 and CASA-235 without any modification to the airframe itself. This type of
system can be used as (1) a versatile, cost-effective test and evaluation platform for current and developmental sensors as
well as (2) an in-theater ISR asset that can be used on readily available airframes at a particular location.
This paper illustrates the CONUS and OCONUS mission potential of this multi-airframe system and outlines the novel
design characteristics that the Airframe Agnostic Roll-on/Roll-off (AA-RORO) sensor platform incorporates to make it
the most versatile, rapid-deployment sensor platform available to support near-term U.S. military operations. The
system concept was developed with the support of and input from multiple military agencies and the respective branches
they represent.
We propose a method for 3D structure extraction from the image data of a flying platform equipped with an IR camera. The task is challenging due to the large distance from the camera to the target, trajectories with limited perspective variation, and low-resolution cameras. Our method is based on the extraction and tracking of line segments together with their junction points. These tracks are then used for 3D reconstruction. In a second step, knowledge about typical properties of man-made objects is incorporated into the reconstruction results to generate intrinsically consistent structures.
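A minimal sketch of the line-segment front end, substituting a stock probabilistic Hough transform in OpenCV for the authors' own extractor (the file name and thresholds are assumptions; the tracker and junction logic are not reproduced):

```python
import cv2
import numpy as np

# Hedged sketch of the extraction front end only: line segments from one
# IR frame via a probabilistic Hough transform. File name and thresholds
# are assumptions.
frame = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(frame, 50, 150)
segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=40, minLineLength=20, maxLineGap=3)
if segments is not None:
    for x1, y1, x2, y2 in segments[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), 255, 1)  # overlay for inspection
```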
Airborne laser terrain mapping systems have redefined the realm of topographic mapping. Lidars with kilohertz
collection rates and long ranges have made airborne surveying a quick, efficient and highly productive endeavor. Alongside current industry efforts toward improving airborne lidar range, collection rate, resolution and accuracy, and with the advent of Unmanned Aerial Vehicles (UAVs) and their myriad advantages, military and civil applications alike are looking for very compact and rugged lidar systems that can fit within the tight volumetric, form-factor, mass and power constraints imposed by UAVs.
Optech has developed a very compact airborne laser terrain mapper that is geared toward UAV deployment. The system is a highly integrated unit that combines a lidar transceiver, a position and orientation sensor, and control electronics in a 1-cubic-foot, 57 lb package. This level of compactness is achieved by employing the latest laser technology, a very compact optical design, and the latest control and data-collection architecture. This paper describes the UAV requirements that drove the system design, the technology employed, and the optimizations implemented in the system to achieve its ultra-compact size.
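A back-of-envelope point-spacing check with assumed survey parameters (the abstract quotes none of these figures) illustrates the kind of coverage trade such a UAV mapper must close:

```python
# Back-of-envelope coverage check with assumed survey parameters; the
# abstract does not quote PRF, scan rate, speed or swath.
prf_hz = 50_000        # pulse repetition frequency
scan_hz = 40.0         # scanner line rate
speed_mps = 30.0       # UAV ground speed
swath_m = 300.0        # cross-track swath width

along_track_m = speed_mps / scan_hz              # spacing between scan lines
cross_track_m = swath_m / (prf_hz / scan_hz)     # spacing within a scan line
print(f"~{along_track_m:.2f} m x {cross_track_m:.2f} m ground point spacing")
```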
The overall goal of the research project reported here is to create a novel system that can combine input from multiple
passive sensors at different viewpoints (such as uninhabited aerial vehicles) into a single integrated three-dimensional
(3D) view of a scene. This form of intelligent data processing, known as Volume Registration, can further exploit the
available information to enable improved surveillance, reconnaissance and situational awareness, and thus offers
substantial potential benefit to military applications. This paper focuses on the case of multiple sensors onboard UAVs
operating at mid-altitude, and describes two complementary techniques that have been investigated in parallel to address
this challenge. The first of these is depth from disparity, which allows a real-time per-pixel estimation of the distance of
scene objects from the camera; the second is shape from silhouette, which back-projects a segmented version of the
image onto a 3D block of voxels and 'carves' a 3D model over multiple frames. The main steps of each algorithm are
outlined, along with appropriate results, in order to demonstrate how they could form a useful part of a practical Volume
Registration system. A number of possible extensions and improvements to the system architecture are also discussed, both to improve the accuracy and efficiency of these techniques and to extend their applicability to the more complex low-altitude case.
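A minimal sketch of the depth-from-disparity stage, using OpenCV's stock block matcher rather than the project's real-time implementation (file names, focal length and baseline are assumptions):

```python
import cv2
import numpy as np

# Hedged sketch of depth from disparity; file names, focal length and
# baseline are assumptions, not the project's configuration.
left = cv2.imread("uav_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("uav_right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point

f_px, baseline_m = 800.0, 20.0   # assumed focal length (px) and sensor separation (m)
with np.errstate(divide="ignore"):
    depth_m = f_px * baseline_m / disparity   # per-pixel Z = f * B / d
```

The complementary shape-from-silhouette path trades this per-pixel range map for a volumetric model carved from segmented images over multiple frames.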
We present an analytic, filtered-backprojection (FBP) type inversion method for bistatic synthetic aperture
radar (BISAR) when the measurements have been corrupted by noise and clutter. The inversion method uses
microlocal analysis in a statistical setting to design a backprojection filter that reduces the impact of noise and
clutter while preserving the fidelity of the target image. We assume an isotropic single scattering model for the
electromagnetic radiation that illuminates the scene of interest. We assume a priori statistical information on
the target, clutter and noise. We demonstrate the performance of the algorithm and its ability to better resolve
targets through numerical simulations.
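The estimator class described above can be summarized schematically as follows (the symbols are assumed notation for illustration, not the paper's):

```latex
% Schematic only; the symbols below are assumed notation, not the paper's.
% Data: target T, clutter C and noise n through the single-scattering
% bistatic forward operator F:
\[
  d \;=\; \mathcal{F}[T] + \mathcal{F}[C] + n .
\]
% FBP-type estimate: backproject through a filter Q chosen, given prior
% second-order statistics of T, C and n, to minimize the mean-square
% error while preserving the target's edges (its leading singularities):
\[
  \widehat{T} \;=\; \mathcal{K}_{Q}[d],
  \qquad
  Q \;=\; \arg\min_{Q'}\ \mathbb{E}\,\bigl\|\mathcal{K}_{Q'}[d] - T\bigr\|^{2}.
\]
```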
We consider a bistatic synthetic aperture radar (BiSAR) system operating in non-ideal imaging conditions with
receive and transmit antennas traversing arbitrary flight trajectories over a non-flat topography; transmitting
arbitrary waveforms along the flight trajectories, etc. In [1] we developed a generalized filtered-backprojection (GFBP)
method for BiSAR image formation applicable to such non-ideal imaging scenarios. The method puts edges not
only at the right location and orientation, but also at the right strength, resulting in true-amplitude images. The
main computational complexity of the GFBP method comes from the spatially dependent filtering step. In this
work, we present an alternative, novel FBP method applicable to non-ideal imaging scenarios that also results in true-amplitude images. The method involves ramp filtering in the data domain and scaling in the image domain. Additionally, the method admits a faster, more computationally efficient implementation than the GFBP method.
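For orientation, the classical monostatic analogue of this structure is the ramp-filtered backprojection inversion of the 2D Radon transform; the method above generalizes the same two ingredients (a data-domain ramp filter and an image-domain amplitude correction) to bistatic geometries with arbitrary trajectories:

```latex
% Classical analogue (2D Radon transform Rf), shown for orientation only;
% normalization constants depend on the Fourier convention.
\[
  f(x) \;\propto\; \int_{0}^{\pi}
    \bigl(\mathcal{R}f(\theta,\cdot) \ast h\bigr)\!\bigl(\langle x, e_{\theta}\rangle\bigr)\,
    \mathrm{d}\theta,
  \qquad
  \widehat{h}(\omega) = |\omega| .
\]
```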
Surveillance and tracking of targets such as sensor-fused warheads (SFWs) and unmanned aerial vehicles (UAVs) has been a challenging task, especially in the presence of multiple targets moving at relatively high speed. Due to the resolution limits imposed by its operating wavelength, conventional radar technology may fail to resolve closely spaced targets or may lack the spatial resolution needed for specific target identification. There is a need for an innovative sensor that is able to recognize and track closely spaced targets. To address this need, we have developed a target sensor that combines
vision and laser ranging technologies for the detection and tracking of multiple targets with wide viewing angle and high
spatial resolution. Using this sensor, regions of interest (ROIs) in the global scene are first selected, and each ROI is then zoomed in on using vision techniques to provide high spatial resolution for target recognition or identification. Moreover, the vision subsystem provides the azimuth and elevation angles of targets to a laser range finder for target distance determination. As a result, continuous three-dimensional target tracking can potentially be achieved with the proposed
sensor. The developed sensor is suitable for a wide variety of military and defense-related applications. The design and construction of a proof-of-concept target-tracking sensor are described. The basic performance of the constructed sensor, including field of view, resolution, and target distance, is presented. The potential military and defense-related applications of this technology are highlighted.
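A minimal pinhole-camera sketch of the hand-off from a detected pixel to range-finder pointing angles (the intrinsics and pixel location are assumed values, not the prototype's calibration):

```python
import math

# Hedged pinhole-camera sketch of the hand-off from a detected pixel to
# range-finder pointing angles; the intrinsics (fx, fy, cx, cy) and the
# pixel location are assumed values, not the prototype's calibration.
fx, fy, cx, cy = 1200.0, 1200.0, 640.0, 512.0
u, v = 900.0, 300.0                      # target centroid from the vision stage

azimuth = math.atan2(u - cx, fx)         # positive to the right of boresight
elevation = math.atan2(cy - v, fy)       # positive above boresight (image y is down)
print(f"az {math.degrees(azimuth):+.2f} deg, el {math.degrees(elevation):+.2f} deg")
```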
The Video National Imagery Interpretability Rating Standard (V-NIIRS) consists of a ranked set of subjective criteria to
assist analysts in assigning an interpretability quality level to a motion imagery clip. The V-NIIRS rating standard is
needed to support the tasking, retrieval, and exploitation of motion imagery. A criteria survey was conducted to yield
individual pair-wise criteria rankings and scores. Statistical analysis shows good agreement with expectations across the 9 levels of interpretability for each of the 7 content domains.
We have conducted an evaluation comparing the interpretability potential of two standardized HD video formats,
1080p30 and 720p60. Despite the lack of an existing motion imagery (MI) quality scale akin to the NIIRS scale, we
have exploited previous work on MI scale development in measuring critical imagery parameters affecting
interpretability. We developed a collection of MI clips that covers a wide parameter range. These well-characterized
clips provide the basis for relating perceived imagery interpretability to MI parameters, including resolution (related to
ground sample distance, GSD) and frame rate, and to target parameters such as motion and scene complexity. This report
presents key findings about the impact of resolution and frame rate on interpretability. Neither format is uniformly
preferred, but the analysis quantifies the interpretability difference between the formats and finds there are significant
effects of target motion and target size on the format preferences of the imagery analysts. The findings have implications
for sensor system design, systems architecture, and mission planning.
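The pixel-rate arithmetic underlying the trade helps explain why neither format dominates: the two formats spend nearly equal raw throughput, 1080p30 on spatial sampling and 720p60 on temporal sampling.

```python
# Raw-throughput comparison behind the trade (codec/container overheads
# and sensor differences ignored).
formats = {"1080p30": (1920, 1080, 30), "720p60": (1280, 720, 60)}
for name, (w, h, fps) in formats.items():
    print(f"{name}: {w * h / 1e6:.2f} Mpx/frame, {w * h * fps / 1e6:.1f} Mpx/s")
# 1080p30: 2.07 Mpx/frame, 62.2 Mpx/s -> finer spatial sampling per frame
# 720p60:  0.92 Mpx/frame, 55.3 Mpx/s -> twice the temporal sampling
```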
In recent years, there has been increasing interest in efficient tracking systems for surveillance applications. Many of the proposed techniques work well on good-quality images and when objects fall within a certain size range. When dealing with
UAV or surveillance cameras, the images are noisy and many techniques fail to detect and track the real moving objects.
This work presents a tracking technique based on a combined spatial and temporal wavelet processing of the image
sequence. For sequences coming from a UAV, images are rectified using detected features in the scene. A modified
Harris corner detector is used to select points of interest. Regions around these points are matched in successive frames
in order to find the transformations between successive images. These transformations are used to stabilize the images
and to build a complete scene mosaic from the original sequence during the object tracking.
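A minimal OpenCV sketch of this stabilization front end, substituting stock Harris corners and pyramidal LK tracking for the authors' modified detector (file names and parameters are assumptions):

```python
import cv2
import numpy as np

# Hedged sketch of the stabilization front end: stock Harris-style corners
# and pyramidal LK tracking stand in for the authors' modified detector.
prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=400, qualityLevel=0.01,
                                   minDistance=8, useHarrisDetector=True)
pts_curr, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good = status.ravel() == 1

# Robust inter-frame transform; its inverse warps the new frame into the
# stabilized/mosaic coordinate system.
M, _inliers = cv2.estimateAffinePartial2D(pts_prev[good], pts_curr[good],
                                          method=cv2.RANSAC)
stabilized = cv2.warpAffine(curr, cv2.invertAffineTransform(M),
                            (curr.shape[1], curr.shape[0]))
```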
A spatial discrete wavelet transform is then used to extract potential target regions. These detections are refined using a
temporal wavelet transform. Mathematical morphology is then used to eliminate false targets resulting from image noise. The remaining targets are further processed using a Kalman filter, and a refinement and selection strategy keeps only the targets with the highest scores.
The obtained results are promising and show the possibility of efficiently tracking moving objects in noisy images
captured by a moving camera. Also, the proposed technique works efficiently with noisy infrared sequences captured by
a surveillance system.
In this paper, we present an improved target tracking algorithm in aerial video. An adaptive appearance
model is incorporated in a Sequential Monte Carlo framework to infer the deformation (or tracking)
parameter best describing the differences between the observed appearances of the target and the
appearance model. The appearance model of the target is adaptively updated based on the tracking
result up to the current frame, balancing a fixed model and the dynamic model with a pre-defined
forgetting parameter. For targets in the aerial video, an affine model is accurate enough to describe the
transformation of the targets across frames. Particles are formed with the elements of the affine model.
To accommodate the dynamics embedded in the video sequence, we employ a state space time series
model, and the system noise constrains the particle coverage. Instead of directly using the affine parameters as elements of the particles, each affine matrix is decomposed into two rotation angles, two scales and the translation parameters, yielding particles with more geometric meaning. Larger variances are given to the translation parameters and the rotation angles, which greatly improves the tracking performance compared with treating these parameters equally, especially for fast-rotating
targets. Experimental results show that our approach provides high performance for target tracking in
aerial video.
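A minimal sketch of this parameterization, assuming the standard SVD-based factorization of the 2×2 affine block (the abstract does not spell out the exact factorization used):

```python
import numpy as np

# Hedged sketch of the particle parameterization, assuming the standard
# SVD factorization A = R(theta) @ R(phi) @ diag(s1, s2) @ R(-phi).
def decompose_affine(M):
    """Split a 2x3 affine matrix [A | t] into angles, scales, translation."""
    A, t = M[:, :2], M[:, 2]
    U, s, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0:       # keep R(theta) a proper rotation
        U[:, 1] *= -1.0
        s = s * np.array([1.0, -1.0])   # compensating sign on one scale
    R = U @ Vt
    theta = np.arctan2(R[1, 0], R[0, 0])
    phi = np.arctan2(Vt[0, 1], Vt[0, 0])
    return theta, phi, s, t             # the particle's elements

M = np.array([[1.1, -0.2,  4.0],
              [0.2,  0.9, -1.5]])
print(decompose_affine(M))
```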
Airborne surveillance and reconnaissance are essential for successful military missions. Such capabilities are critical for
force protection, situational awareness, mission planning, damage assessment and others. UAVs gather huge amounts of video data, but it is extremely labour-intensive for operators to analyse hours and hours of received data.
At MDA, we have developed a suite of tools towards automated video exploitation including calibration, visualization,
change detection and 3D reconstruction. Ongoing work aims to improve the robustness of these tools and to automate the
process as much as possible. Our calibration tool extracts and matches tie-points in the video frames incrementally to
recover the camera calibration and poses, which are then refined by bundle adjustment. Our visualization tool stabilizes
the video, expands its field-of-view and creates a geo-referenced mosaic from the video frames.
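A minimal sketch of the tie-point extraction and matching step, using stock ORB features as a stand-in for the actual calibration front end (frame names and parameters are assumptions; bundle adjustment is omitted):

```python
import cv2

# Hedged sketch of the tie-point stage only: stock ORB features and
# brute-force matching stand in for the incremental calibration front
# end; frame names are assumptions, and bundle adjustment is omitted.
orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

f1 = cv2.imread("clip_frame_010.png", cv2.IMREAD_GRAYSCALE)
f2 = cv2.imread("clip_frame_011.png", cv2.IMREAD_GRAYSCALE)
k1, d1 = orb.detectAndCompute(f1, None)
k2, d2 = orb.detectAndCompute(f2, None)

matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)
tie_points = [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in matches[:500]]
```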
It is important to identify anomalies in a scene, which may include improvised explosive devices (IEDs).
However, it is tedious and difficult to compare video clips to look for differences manually. Our change detection tool
allows the user to load two video clips taken from two passes at different times and flags any changes between them.
3D models are useful for situational awareness, as it is easier to understand the scene by visualizing it in 3D. Our 3D
reconstruction tool creates calibrated photo-realistic 3D models from video clips taken from different viewpoints, using
both semi-automated and automated approaches. The resulting 3D models also allow distance measurements and line-of-
sight analysis.
This paper discusses the problem of assigning tasks to a variety of differently configured aircraft: aircraft of different types carrying very different weapon loads. A multi-objective optimization algorithm is proposed that takes into
account all of the relevant properties of the aircraft and the available weapons. Specifically, it includes limitations due to
the aircraft's speed, time on station and the number of weapons available. The algorithm also allows for the need to
define different priorities for different targets and requirements for co-operative laser designation for certain targets. The
paper also discusses the need for supplementary algorithms to validate the optimal solution proposed by the assignment
algorithm.
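For orientation, a greatly simplified single-objective analogue using a standard assignment solver (the costs are illustrative; the paper's multi-objective algorithm additionally encodes weapon loads, time on station, target priorities and co-operative designation constraints):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j]: penalty for sending aircraft i against target j, e.g. a
# weighted sum of transit time, weapon suitability and target priority
# (illustrative numbers only).
cost = np.array([[4.0, 9.0, 3.5],
                 [7.0, 2.0, 6.0],
                 [5.0, 8.0, 1.0]])

rows, cols = linear_sum_assignment(cost)          # optimal 1-to-1 assignment
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```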
This paper describes a vision-based street detection algorithm to be used by small autonomous aircraft in low-altitude
urban surveillance. The algorithm uses Bayesian analysis to differentiate between street and background pixels. The
color profile of edges on the detected street is used to represent objects with respect to their surroundings. These color
profiles are used to improve street detection over time. Pixels unlikely to originate from the "true" street are excluded from the recurring Bayesian estimation in the video. Results are presented in comparison with a previously published Unmanned Aerial Vehicle (UAV) road-detection algorithm. Robust performance is demonstrated on urban surveillance
scenes including UAV surveillance, police chases from helicopters, and traffic monitoring. The proposed method is
shown to be robust to data uncertainty and has low sensitivity to the training dataset. Performance is computed using a
challenging multi-site dataset that includes compression artifacts, poor resolution, and large variation of scene
complexity.
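A minimal sketch of the per-pixel Bayesian step, assuming class-conditional color histograms learned from hand-labelled training masks (the bin count and prior are assumptions, and the paper's edge-profile refinement is omitted):

```python
import numpy as np

# Hedged sketch of the per-pixel Bayesian step: street vs background from
# class-conditional color histograms. Bin count, prior, and training masks
# are assumptions; the edge-profile refinement is omitted.
BINS = 16

def color_hist(pixels):
    """pixels: (N, 3) uint8 samples from hand-labelled training frames."""
    h, _ = np.histogramdd(pixels, bins=(BINS,) * 3, range=[(0, 256)] * 3)
    return (h + 1.0) / (h.sum() + BINS ** 3)   # Laplace-smoothed P(color|class)

def street_posterior(frame, lik_street, lik_bg, prior=0.3):
    """frame: (H, W, 3) uint8. Returns per-pixel P(street | color)."""
    idx = frame // (256 // BINS)                           # quantize to bins
    ls = lik_street[idx[..., 0], idx[..., 1], idx[..., 2]]
    lb = lik_bg[idx[..., 0], idx[..., 1], idx[..., 2]]
    return ls * prior / (ls * prior + lb * (1.0 - prior))  # Bayes' rule
```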
Shadows and shadings are typical natural phenomena, which can often be found in images and videos acquired under
strong directional lighting, such as those taken outdoors on a sunny day. Unfortunately, shadows can cause many
difficulties in image processing and vision-related tasks, such as image segmentation and object recognition. Therefore,
shadow removal is needed for improving the performance of these image understanding tasks. We present a new shadow
removal algorithm for real textured color images. The algorithm is based on the statistical properties of textures in images. Experimental results on real-world data demonstrate the algorithm.
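As a generic illustration of statistics-based relighting (not the authors' texture-statistics algorithm), one can match the log-domain mean and variance of a shadow region to those of a lit region of the same surface:

```python
import numpy as np

# Generic statistics-matching illustration, NOT the authors' algorithm:
# bring a shadow region's per-channel log-domain mean/variance in line
# with a lit region of the same surface. Masks are assumed to be given.
def relight_shadow(img, shadow_mask, lit_mask):
    out = np.log1p(img.astype(np.float64))        # work in log intensity
    for c in range(3):
        chan = out[..., c]
        s, l = chan[shadow_mask], chan[lit_mask]
        chan[shadow_mask] = (s - s.mean()) / s.std() * l.std() + l.mean()
    return np.clip(np.expm1(out), 0, 255).astype(np.uint8)
```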
ITT has developed and demonstrated a real-time airborne data management system that ingests, compresses, stores, and
streams imagery and video data from sensors based on users' needs. The data management system was designed to be
sensor agnostic, which was demonstrated when ITT quickly integrated several different cameras including an HD video
camera, an IR video camera, and large framing cameras. The data is compressed in real-time using ITT's high-speed
JPEG 2000 compression core and stored in the airborne unit. The data is then interactively served to users over downlink
communication based on the users' requests. This system's capability was demonstrated in several test flights where data
was collected from the sensors at 132 megapixels per second (1.5 gigabits per second), compressed, stored, and
interactively served as regions of interest to multiple users over a 48 megabit/second communication link. This data
management system is currently being incorporated into airborne systems for military and civil applications.
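The arithmetic implicit in the quoted flight-test figures explains the interactive, region-of-interest design:

```python
# Arithmetic implicit in the flight-test figures quoted above.
sensor_bps = 1.5e9      # raw collection rate (1.5 Gbps)
link_bps = 48e6         # downlink capacity (48 Mbps)

print(f"streaming everything would need ~{sensor_bps / link_bps:.0f}:1 compression")
# ~31:1 across the board; JPEG 2000 codestreams instead let the server
# extract and send only the tiles, resolution levels and quality layers
# each user actually requests over the constrained link.
```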