This PDF file contains the front matter associated with SPIE
Proceedings Volume 8396, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Architectures for Geospatial Collection Applications
A prototype Video Imagery Ontology has been developed to derive video imagery intelligence (VideoIMINT). The ontology includes classes and properties that address video image content as well as video collection metadata related to platforms, sensors, and collection operations. Preliminary feature extraction of video imagery content classes was used to identify important video segments in an integrated viewer. Integrated data storage systems and fusion processes are also discussed.
Aerial wide-area monitoring and tracking using multi-camera arrays poses unique challenges compared to standard full motion video analysis due to low frame rate sampling, registration errors caused by platform motion, low resolution targets, limited image contrast, and static and dynamic parallax occlusions.1-3 We have developed a low frame rate tracking system that fuses a rich set of intensity, texture, and shape features, which enables adaptation of the tracker to dynamic environment changes and target appearance variabilities. However, improper fusion and overweighting of low quality features can adversely affect target localization and reduce tracking performance. Moreover, the large computational cost associated with extracting a large number of image-based feature sets will influence tradeoffs for real-time and on-board tracking. This paper presents a framework for dynamic online ranking-based feature evaluation and fusion in aerial wide-area tracking. We describe a set of efficient descriptors suitable for small sized targets in aerial video based on intensity, texture, and shape feature representations or views. Feature ranking is then used as a selection procedure where target-background discrimination power for each (raw) feature view is scored using a two-class variance ratio approach. A subset of the k-best discriminative features is selected for further processing and fusion. The target match probability or likelihood maps for each of the k features are estimated by comparing target descriptors within a search region using a sliding window approach. The resulting k likelihood maps are fused for target localization using the normalized variance ratio weights. We quantitatively measure the performance of the proposed system using ground-truth tracks within the framework of our tracking evaluation test-bed that incorporates various performance metrics. The proposed feature ranking and fusion approach increases localization accuracy by reducing multimodal effects due to low quality features or background clutter. Adaptive feature ranking increases the robustness of the tracker in dynamically changing environments, especially when the object appearance is changing.
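The two-class variance ratio scoring and weight normalization described above can be sketched as follows. This is a minimal illustration under assumed scalar feature samples, not the authors' implementation; the feature names are invented:

```python
from statistics import pvariance

def variance_ratio(target_vals, background_vals):
    """Two-class variance ratio: total (pooled) variance over the
    sum of within-class variances. Higher values indicate a feature
    that separates target from background more cleanly."""
    total = list(target_vals) + list(background_vals)
    within = pvariance(target_vals) + pvariance(background_vals)
    return pvariance(total) / within if within > 0 else 0.0

def rank_features(features, k):
    """features: {name: (target_samples, background_samples)}.
    Return the k best-ranked feature names plus normalized
    variance-ratio weights for likelihood-map fusion."""
    scores = {n: variance_ratio(t, b) for n, (t, b) in features.items()}
    best = sorted(scores, key=scores.get, reverse=True)[:k]
    total = sum(scores[n] for n in best)
    weights = {n: scores[n] / total for n in best}
    return best, weights
```

The normalized weights returned here would multiply the k per-feature likelihood maps before summation, so poorly discriminating features contribute little to the fused localization.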
Metadata are often referred to as data about data. In this respect metadata represent additional information about data,
which could otherwise be limited in use without such auxiliary information. Metadata can help to interpret the content of
data resources like images and videos, but can also provide support in the management, dissemination, and search and
retrieval of data sets through elements like creator, addressees, access rights, security classifications or online-location.
An issue arises from the fact that different metadata models are encouraged and used by different communities of interest (COIs). These differences in metadata models hinder the efficient establishment of common, unified metadata
across COIs. This paper justifies and recommends a possible solution for a harmonized core metadata model between
selected COIs with special focus on the use case "cross-community search and retrieval of geo-related ISR data".
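As a hedged illustration of what a harmonized core model could enable, the sketch below maps two hypothetical COI schemas onto shared core elements such as creator, security classification, and location. All field names here are invented, not taken from the recommended model:

```python
# Hypothetical crosswalk: each COI-specific field name maps onto a
# shared core-model element, so cross-community search can query a
# single vocabulary. These schemas are illustrative only.
CROSSWALK = {
    "coi_a": {"author": "creator", "class": "security_classification",
              "where": "location"},
    "coi_b": {"originator": "creator", "marking": "security_classification",
              "geo": "location"},
}

def to_core(record, coi):
    """Translate a COI-specific metadata record into core-model
    fields, silently dropping fields the core model does not cover."""
    mapping = CROSSWALK[coi]
    return {mapping[k]: v for k, v in record.items() if k in mapping}
```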
Persistent aerial surveillance is an emerging technology that can provide continuous, wide-area coverage from an
aircraft-based multiple-camera system. Tracking targets in these data sets is challenging for vision algorithms due to the large data volume (several terabytes), very low frame rate, changing viewpoint, strong parallax, and other imperfections introduced by registration and projection. Providing an interactive system for automated target tracking also has
additional challenges that require online algorithms that are seamlessly integrated with interactive visualization
tools to assist the user. We developed an algorithm that overcomes these challenges and demonstrated it on data
obtained from a wide-area imaging platform.
Space-borne Synthetic Aperture Radar (SAR) sensors, such as RADARSAT-1 and -2, enable a multitude of defense and
security applications owing to their unique capabilities of cloud penetration, day/night imaging and multi-polarization
imaging. As a result, advanced SAR image time series exploitation techniques such as Interferometric SAR (InSAR) and
Radargrammetry are now routinely used in applications such as underground tunnel monitoring, infrastructure
monitoring and DEM generation. Imaging geometry, as determined by the satellite orbit and imaged terrain, plays a
critical role in the success of such techniques.
This paper describes the architecture and the current status of development of a geometry-based search engine that
allows the search and visualization of archived and future RADARSAT-1 and -2 images appropriate for a variety of
advanced SAR techniques and applications. Key features of the search engine's scalable architecture include (a) interactive GIS-based visualization of the search results; (b) a client-server architecture for online access that produces up-to-date searches of the archive images and that can, in the future, be extended to acquisition planning; (c) a technique-specific search mode, wherein an expert user explicitly sets search parameters to find appropriate images for advanced SAR techniques such as InSAR and Radargrammetry; (d) a future application-specific search mode, wherein all search parameters implicitly default to preset values according to the application of choice, such as tunnel monitoring, DEM generation, and deformation mapping; (e) accurate baseline calculations for InSAR searches and optimum beam configuration for Radargrammetric searches; and (f) simulated quick-look images and technique-specific sensitivity maps in the future.
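The InSAR baseline calculation mentioned above reduces to straightforward vector geometry. The sketch below, with illustrative coordinates and no claim to match the search engine's actual implementation, computes the perpendicular baseline between two acquisition positions with respect to a ground target:

```python
import math

def perpendicular_baseline(sat1, sat2, target):
    """Component of the inter-orbit baseline that is perpendicular
    to the look direction from sat1 to the target. InSAR height
    sensitivity scales with this quantity, so a search engine can
    filter image pairs by it. Inputs are (x, y, z) tuples."""
    look = [t - s for s, t in zip(sat1, target)]       # look vector
    base = [b - a for a, b in zip(sat1, sat2)]         # baseline vector
    norm = math.sqrt(sum(c * c for c in look))
    look_hat = [c / norm for c in look]
    parallel = sum(b, l) if False else sum(b * l for b, l in zip(base, look_hat))
    perp_sq = sum(b * b for b in base) - parallel * parallel
    return math.sqrt(max(0.0, perp_sq))
```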
The Naval Research Laboratory has developed and demonstrated an autonomous multi-sensor motion-tracking and
interrogation system that reduces the workload for analysts by automatically finding moving objects, and then
presenting high-resolution images of those objects with little-to-no human input. Intelligence, surveillance and
reconnaissance (ISR) assets in the field generate vast amounts of data that can overwhelm human operators and can
severely limit an analyst's ability to generate intelligence reports in operationally relevant timeframes. This multi-user tracking capability enables the system to manage the collection of imagery without continuous monitoring by a ground or airborne operator, thus requiring fewer personnel and freeing up operational assets. During flight tests in March 2011, multiple real-time moving-target-indicator (MTI) tracks generated by a wide-area persistent surveillance sensor (WAPSS) were autonomously cross-cued to a high-resolution narrow field-of-view interrogation
sensor via an airborne network. Both sensors were networked by the high-speed Tactical Reachback Extended
Communications (TREC) data-link provided by the NRL Information Technology Division.
Military, police, and industrial surveillance operations could benefit from having sensors deployed in configurations
that maximize collection capability. We describe a surveillance planning approach that optimizes sensor
placements to collect information about targets of interest by using information from predictive geospatial analytics,
the physical environment, and surveillance constraints. We designed a tool that accounts for multiple sensor aspects (collection footprints, groupings, and characteristics); multiple optimization objectives (surveillance requirements and predicted threats); and multiple constraints (sensing, the physical environment including terrain, and geographic surveillance constraints). The tool uses a discrete grid model to keep track of geographic sensing objectives and constraints and, from these, estimates probabilities for collection containment and detection. We devised an evolutionary algorithm and polynomial time approximation schemes (PTAS) to optimize the tool variables above to generate the positions and aspects for a network of sensors. We also designed algorithms
to coordinate a mixture of sensors with different competing objectives, competing constraints, couplings, and
proximity constraints.
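A toy version of the evolutionary placement idea on a discrete grid, shown only to make the optimization concrete: the grid values, coverage model, and (1+1)-style mutation scheme below are invented simplifications, not the paper's algorithm:

```python
import random

def coverage(grid, sensors, r):
    """Total target probability collected: sum of cell probabilities
    within Chebyshev radius r of any sensor, on a discrete grid
    mapping (x, y) -> target probability."""
    return sum(p for (x, y), p in grid.items()
               if any(abs(x - sx) <= r and abs(y - sy) <= r
                      for sx, sy in sensors))

def evolve_placement(grid, n_sensors, r, generations=300, seed=1):
    """Minimal (1+1) evolutionary search: mutate one sensor position
    per generation and keep the child if coverage does not drop."""
    rng = random.Random(seed)
    cells = list(grid)
    best = [rng.choice(cells) for _ in range(n_sensors)]
    for _ in range(generations):
        child = list(best)
        child[rng.randrange(n_sensors)] = rng.choice(cells)
        if coverage(grid, child, r) >= coverage(grid, best, r):
            best = child
    return best
```

A real tool would add the multi-sensor couplings, terrain masking, and proximity constraints described above as penalty terms or feasibility checks inside the acceptance step.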
Geospatial information systems provide a unique frame of reference to bring together a large and diverse set of data from
a variety of sources. However, automating this process remains a challenge since: 1) data (particularly from sensors) is
error prone and ambiguous, 2) analysis and visualization tools typically expect clean (or exact) data, and 3) it is difficult
to describe how different data types and modalities relate to each other. In this paper we describe a data integration
approach that can help address some of these challenges. Specifically, we propose a lightweight ontology for an Information Space Model (ISM). The ISM is designed to support functionality that lies between data catalogues and domain ontologies. Similar to data catalogues, the ISM provides metadata for data discovery across multiple, heterogeneous (often legacy) data sources, e.g., map servers, satellite images, social networks, and geospatial blogs. Similar
to domain ontologies, the ISM describes the functional relationship between these systems with respect to entities
relevant to an application e.g. venues, actors and activities. We suggest a minimal set of ISM objects, and attributes for
describing data sources and sensors relevant to data integration. We present a number of statistical relational learning
techniques to represent and leverage the combination of deterministic and probabilistic dependencies found within the
ISM. We demonstrate how the ISM provides a flexible language for data integration where unknown or ambiguous
relationships can be mitigated.
Geospatial Search, Visualization, and Dissemination Methods
Proposed is a new technique for simulating nighttime scenes with realistically modelled urban radiance. While nightlight imagery is commonly used to measure urban sprawl,1 it is uncommon to use urbanization as a metric to develop synthetic nighttime scenes. In the developed methodology, the open-source Open Street Map (OSM) Geographic Information System (GIS) database is used. The database comprises many nodes, which are used to define the position of different types of streets, buildings, and other features. These nodes are the driver used to model urban nightlights, given several assumptions.
The first assumption is that the spatial distribution of nodes is closely related to the spatial distribution of nightlights. Work by Roychowdhury et al. has demonstrated the relationship between urban lights and development.2 So, the real assumption being made is that the density of nodes corresponds to development, which is reasonable. Secondly, the local density of nodes must relate directly to the upwelled radiance within the given locality. Testing these assumptions using Albuquerque and Indianapolis as example cities revealed that different types of nodes produce more realistic results than others. Residential street nodes offered the best performance for any single node type, among the types tested in this investigation. Other node types, however, still provide useful supplementary data.
Using streets and buildings defined in the OSM database allowed automated generation of simulated nighttime scenes of Albuquerque and Indianapolis in the Digital Imaging and Remote Sensing Image Generation (DIRSIG) model. The simulation was compared to real data from the recently deployed National Polar-orbiting Operational Environmental Satellite System (NPOESS) Visible Infrared Imager Radiometer Suite (VIIRS) platform. As a result of the comparison, correction functions were used to correct for discrepancies between simulated and observed radiance. Future work will include investigating more advanced approaches for mapping the spatial extent of nightlights, based on the distribution of different node types in local neighbourhoods. This will allow the spectral profile of each region to be dynamically adjusted, in addition to simply modifying the magnitude of a single source type.
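Under the two assumptions above, a minimal node-density-to-radiance mapping could look like the sketch below. The cell size and per-node-type weights are invented placeholders, not values from the study; residential street nodes get the largest weight since they were the best single predictor:

```python
from collections import Counter

# Hypothetical per-node-type radiance weights (arbitrary units).
WEIGHTS = {"residential": 1.0, "building": 0.4, "other": 0.1}

def radiance_map(nodes, cell=0.01):
    """Bin (lon, lat, node_type) tuples into grid cells of `cell`
    degrees and accumulate weighted node counts as a proxy for
    upwelled nighttime radiance in each cell."""
    grid = Counter()
    for lon, lat, ntype in nodes:
        key = (round(lon / cell), round(lat / cell))
        grid[key] += WEIGHTS.get(ntype, WEIGHTS["other"])
    return dict(grid)
```

In a full pipeline, each cell's value would then be passed through the empirically derived correction functions before driving source magnitudes in the DIRSIG scene.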
The H.264 protocol for high resolution video offers several enhancements which can be leveraged for the selective
tracking and focused resolution of disjoint macro-blocks of the frame sequence such that a smooth degradation of
context is achieved at significant compression rates. We demonstrate the near real time temporal and spatial foveation of
the video stream. Tracking results produced by spatial statistics of the georegistered motion vectors of the H.264 frames
are useful for change detection and background discrimination as well as temporal foveation. Finally, we discuss the
online analytical processing of the spatial database of full motion video through use of the automatically generated
geospatial statistical descriptor metadata.
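A hedged sketch of one such spatial statistic over the motion-vector field: blocks whose vectors deviate from the dominant (camera) motion are flagged as candidates for change detection and foveation. The threshold and data layout are assumptions for illustration, not the paper's method:

```python
from statistics import mean

def moving_blocks(mv_field, thresh=1.5):
    """Flag macro-blocks whose H.264-style motion vectors deviate
    from the mean (global/camera) motion by more than `thresh`
    pixels. mv_field maps block coordinates to (dx, dy)."""
    gx = mean(dx for dx, dy in mv_field.values())
    gy = mean(dy for dx, dy in mv_field.values())
    return [b for b, (dx, dy) in mv_field.items()
            if ((dx - gx) ** 2 + (dy - gy) ** 2) ** 0.5 > thresh]
```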
Geospatially querying and analyzing large high-resolution spatial networks is critical to most defense and security applications that support military intelligence. However, the majority of existing solutions either store the entire network in memory, which is not scalable, or adopt a disk-based network representation (i.e., SNDB), where routing and spatial queries may incur high I/O overhead and hence are inefficient. In this paper, we present a flexible architecture for large spatial network storage using a quadtree. In particular, this hybrid approach preserves network connectivity and proximity within each partition for local search while enabling heuristics to minimize the I/O overhead for large-scale queries. We further develop efficient algorithms to process spatial queries based on this hybrid storage schema.
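The partitioning idea can be illustrated with a minimal point-region quadtree. This sketch handles only network nodes (edges and the disk layout are omitted) and uses half-open cells so each point falls in exactly one partition; it is an illustration of the general technique, not the paper's storage schema:

```python
def build_quadtree(points, bbox, capacity=4):
    """Recursively partition points ({id: (x, y)}) into quadrants of
    bbox = (x0, y0, x1, y1) until each leaf holds at most `capacity`
    nodes. Nearby nodes end up in the same partition, which is what
    lets a disk-based layout serve local searches with few reads."""
    x0, y0, x1, y1 = bbox
    if len(points) <= capacity:
        return {"bbox": bbox, "points": points}          # leaf
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    quads = [(x0, y0, xm, ym), (xm, y0, x1, ym),
             (x0, ym, xm, y1), (xm, ym, x1, y1)]
    children = []
    for qx0, qy0, qx1, qy1 in quads:
        # Half-open membership test: each point lands in one quadrant.
        sub = {i: (x, y) for i, (x, y) in points.items()
               if qx0 <= x < qx1 and qy0 <= y < qy1}
        if sub:
            children.append(build_quadtree(sub, (qx0, qy0, qx1, qy1),
                                           capacity))
    return {"bbox": bbox, "children": children}          # internal node
```

In the hybrid scheme described above, each leaf would map to a disk page that also stores the subnetwork's internal edges, so local routing stays within one page.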
The Motion Imagery Standards Board (MISB) is engaged in multiple initiatives that may provide support for
Activity-Based GeoINT (ABG). This paper describes a suite of approaches based on previous MISB work on a
standards-based architecture for tracking. It focuses on ABG in the context of standardized tracker results, and shows
how the MISB tracker formulation can formalize important components of the ABG problem. The paper proposes a
grammar-based formalism for the reporting of activities within a stream of FMV or wide-area surveillance data.
Such a grammar would potentially provide an extensible descriptive language for ABG across the community.
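A toy version of such a grammar, with entirely invented production names and event vocabulary, could treat an activity as a contiguous sequence of standardized track-level events:

```python
# Illustrative activity grammar: each rule expands to a sequence of
# standardized tracker events. Names are hypothetical, not MISB terms.
GRAMMAR = {
    "MEETING": ["APPROACH", "STOP", "STOP", "DEPART"],
    "DROP_OFF": ["APPROACH", "STOP", "DEPART"],
}

def match_activities(events):
    """Report every grammar rule whose right-hand side appears as a
    contiguous run in the observed track-event stream."""
    found = []
    for name, pattern in GRAMMAR.items():
        n = len(pattern)
        if any(events[i:i + n] == pattern
               for i in range(len(events) - n + 1)):
            found.append(name)
    return sorted(found)
```

An extensible community grammar would let new activity definitions be added as data (new productions) rather than new code, which is the appeal of the grammar-based formalism.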
Geospatial Data Processing Algorithms and Techniques
The topic of data uncertainty handling is relevant to essentially any scientific activity that involves making
measurements of real world phenomena. A rigorous accounting of uncertainty can be crucial to the decision-making
process. The purpose of this paper is to provide a brief overview on select issues in handling uncertainty in geospatial
data. We begin with photogrammetric concepts of uncertainty handling, followed by investigating uncertainty issues
related to processing vector (object) representations of geospatial information. Suggestions are offered for enhanced
modeling, visualization, and exploitation of local uncertainty information in applications such as fusion and conflation.
Stochastic simulation can provide an effective approach to improve understanding of the consequences of uncertainty propagation in common geospatial processes such as path finding. Future work should consider the development of
standardized modeling techniques for stochastic simulation for more complex object data, to include spatial and attribute
information.
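The stochastic-simulation idea for path finding can be sketched as a Monte Carlo loop around a shortest-path solver. The normal edge-length model and its parameters below are assumptions for illustration only:

```python
import heapq
import random
from statistics import mean, stdev

def shortest_cost(graph, src, dst):
    """Dijkstra over a directed graph {u: {v: cost}}."""
    pq, seen = [(0.0, src)], set()
    while pq:
        c, u = heapq.heappop(pq)
        if u == dst:
            return c
        if u in seen:
            continue
        seen.add(u)
        for v, w in graph.get(u, {}).items():
            if v not in seen:
                heapq.heappush(pq, (c + w, v))
    return float("inf")

def path_cost_distribution(edges, src, dst, n=400, seed=0):
    """Monte Carlo uncertainty propagation: sample each edge length
    from a normal with its stated standard deviation, re-solve the
    shortest path, and report the mean and spread of optimal cost.
    edges maps (u, v) -> (mean_length, std_dev)."""
    rng = random.Random(seed)
    costs = []
    for _ in range(n):
        g = {}
        for (u, v), (mu, sigma) in edges.items():
            g.setdefault(u, {})[v] = max(0.0, rng.gauss(mu, sigma))
        costs.append(shortest_cost(g, src, dst))
    return mean(costs), stdev(costs)
```

Even this toy exposes the decision-relevant point: the route that is optimal on mean lengths is not always optimal once edge uncertainty is sampled.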
During aerial orbital reconnaissance, a sensor system is mounted on an airborne platform for imaging a region on the
ground. The latency between the image acquisition and delivery of information to the end-user is critical and must be
minimized. Due to fine ground pixel resolution and a large field-of-view for wide-area surveillance applications, a
massive volume of data is gathered and imagery products are formed using a real-time multi-processor system. The
images are taken at oblique angles, stabilized, and ortho-rectified. The line-of-sight of the sensor to the ground is often interrupted by terrain features such as mountains or tall structures, as depicted in Figure 1. The ortho-rectification process renders the areas hidden from the line-of-sight of the sensor with spurious information. This paper discusses an approach
for addressing terrain masking in size, weight, and power (SWaP) and memory-restricted onboard processing systems.
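One common way to detect such hidden areas, shown here on a 1-D terrain profile as a hedged sketch rather than the paper's onboard method, is a single running maximum of elevation slopes from the sensor: a sample is masked when some intervening sample subtends a steeper angle.

```python
def visible(dem, sensor_h, sensor_x=0):
    """Mark which samples of a 1-D terrain profile (heights indexed
    by ground position) are visible from a sensor at height
    sensor_h above position sensor_x. A masked cell should be
    flagged in the product rather than filled with the spurious
    content ortho-rectification would otherwise place there."""
    vis, max_slope = [], float("-inf")
    for i, h in enumerate(dem):
        dx = abs(i - sensor_x) or 1          # avoid div-by-zero at nadir
        slope = (h - sensor_h) / dx          # elevation angle proxy
        vis.append(slope >= max_slope)       # blocked if a nearer sample is steeper
        max_slope = max(max_slope, slope)
    return vis
```

The single-pass structure (O(n) per scan line, constant extra memory) is what makes this style of check attractive for SWaP- and memory-restricted onboard processors.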
As forecast by the United Nations in May 2007, the population of the world transitioned from a rural to an urban
demographic majority with more than half living in urban areas.1 Modern urban environments are complex 3-
dimensional (3D) landscapes with 4-dimensional patterns of activity that challenge various traditional 1-dimensional and
2-dimensional sensors to accurately sample these man-made terrains. Depending on geographic location, data resulting
from LIDAR, multi-spectral, electro-optical, thermal, ground-based static and mobile sensors may be available with
multiple collects along with more traditional 2D GIS features. Reconciling differing data sources over time to correctly
portray the dynamic urban landscape raises significant fusion and representational challenges particularly as higher
levels of spatial resolution are available and expected by users. This paper presents a framework for integrating the
imperfect answers of our differing sensors and data sources into a powerful representation of the complex urban
environment. A case study is presented involving the integration of temporally diverse 2D, 2.5D and 3D spatial data
sources over Kandahar, Afghanistan. In this case study we present a methodology for validating and augmenting
2D/2.5D urban feature and attribute data with LIDAR to produce validated 3D objects. We demonstrate that nearly 15%
of buildings in Kandahar require understanding nearby vegetation before 3-D validation can be successful. We also
address urban temporal change detection at the object level. Finally we address issues involved with increased sampling
resolution since urban features are rarely simple cubes but in the case of Kandahar involve balconies, TV dishes, rooftop
walls, small rooms, and domes among other things.
Geospatial data analysis relies on Spatial Data Fusion and Mining (SDFM), which heavily depend on topology and
geometry of spatial objects. Capturing and representing geometric characteristics such as orientation, shape, proximity,
similarity, and their measurement are of the highest interest in SDFM. Representation of the uncertain and dynamically changing topological structure of spatial objects, including social and communication networks, roads, and waterways under the influence of noise, obstacles, temporary loss of communication, and other factors, is another challenge. Spatial
distribution of the dynamic network is a complex and dynamic mixture of its topology and geometry. Historically,
separation of topology and geometry in mathematics was motivated by the need to separate the invariant part of the
spatial distribution (topology) from the less invariant part (geometry). The geometric characteristics such as orientation,
shape, and proximity are not invariant. This separation between geometry and topology was done under the assumption
that the topological structure is certain and does not change over time. New challenges to deal with the dynamic and
uncertain topological structure require a reexamination of this fundamental assumption. In the previous work we
proposed a dynamic logic methodology for capturing, representing, and recording uncertain and dynamic topology and
geometry jointly for spatial data fusion and mining. This work presents a further elaboration and formalization of this
methodology as well as its application for modeling vector-to-vector and raster-to-vector conflation/registration
problems and automated feature extraction from the imagery.
Smartphones are becoming popular nowadays not only because of their communication functionality but also, more importantly, because of their powerful sensing and computing capability. In this paper, we describe a novel and accurate image- and video-based remote target localization and tracking system using Android smartphones, leveraging their built-in sensors such as the camera, digital compass, and GPS. Even though many other distance estimation or localization devices are available, our all-in-one, easy-to-use localization and tracking system on low-cost, commodity smartphones is the first of its kind. Furthermore, the smartphone's user-friendly interface is effectively exploited by our system to achieve low complexity and high accuracy. Our experimental results show that our system works accurately and efficiently.
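A minimal sketch of the kind of geometry such a system involves, assuming a pinhole-camera range estimate from a target of known height and a flat-earth projection along the compass bearing (not the paper's actual pipeline):

```python
import math

def estimate_distance(real_height_m, pixel_height, focal_px):
    """Pinhole-camera range estimate: a target of known physical
    height subtending pixel_height pixels at focal length focal_px
    (in pixels) lies at range focal * H / h."""
    return focal_px * real_height_m / pixel_height

def project(lat, lon, bearing_deg, dist_m):
    """Flat-earth projection (valid for short ranges): position of a
    point at the given compass bearing and range from the phone's
    GPS fix, in degrees."""
    R = 6371000.0                       # mean Earth radius, meters
    b = math.radians(bearing_deg)
    dlat = dist_m * math.cos(b) / R
    dlon = dist_m * math.sin(b) / (R * math.cos(math.radians(lat)))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```

Chaining the two (camera gives range, compass gives bearing, GPS gives the origin) yields a remote target fix with no hardware beyond the phone's built-in sensors.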
SIPHER was first revealed in a US Air Force Research Laboratory Information Directorate (AFRL/RIEC)
project concerned with polarimetric and SAR processing techniques. It is a means to make objects in a digital image
vary in intensity (amplitude) with respect to other objects or backgrounds, in an unusual manner which promotes object
or target cognitive perception. We describe this phenomenon as objects being in or out of spatial intensity phase with one another, somewhat analogous to how different signals' amplitudes differ at any instant due to their relative phases.
Simple surface reflectivity and a single, static illumination source provide no special means to distinguish
objects from backgrounds, other than their reflectivity differences. However, if different surfaces are illuminated from
different source positions or with different amplitudes, like from a moving spotlight, different pixels with the same reflectivity may have different amplitudes at different instants within the source's dynamic behavior. The problem is
that we cannot necessarily control source dynamics or collect images over sufficient time to benefit from these dynamics.
SIPHER simulates source dynamics in a single, static image. It creates apparent reflectivity changes in an image taken at one instant, as if the illumination source's intensity and position were changing, as a function of algorithm threshold settings. This produces a series of processed images wherein object and background pixel amplitudes are out of phase with one another due to their orientation and surface characteristics (flat, curved, etc.), and become more perceptible. Cognitive perception is enhanced by creating a video sequence of the processed image series. This produces an apparent motion effect in the object relative to its surroundings, or renders an apparent three-dimensional effect where the object appears to "jump out" from its surroundings.
We first define this spatial intensity phase quantity mathematically, then compare it to conventional signal
phase relationships, and finally apply it to some images to demonstrate its behavior. We also discuss anticipated
enhancement and normalization techniques which may improve the technique in the future.
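Since the abstract does not give the algorithm, the following is only a guessed illustration of the threshold-series idea: for each threshold setting, pixels on one side of the threshold are remapped while the rest are kept, so that sweeping the threshold and playing the frames as video makes regions move "in and out of phase". The below-threshold inversion rule is invented, not SIPHER's actual transform:

```python
def threshold_series(image, thresholds):
    """Hypothetical sketch: for each threshold t, keep pixel values
    at or above t and remap values below t to (t - v), producing a
    frame series whose regions brighten and darken out of step with
    one another as t sweeps. image is a list of rows of intensities."""
    frames = []
    for t in thresholds:
        frames.append([[v if v >= t else t - v for v in row]
                       for row in image])
    return frames
```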