We consider a challenge problem involving the automatic detection of large commercial vehicles such as trucks, buses, and tractor-trailers in Quickbird EO pan imagery. Three target classifiers are evaluated: a “bagged” perceptron algorithm (BPA) that uses an ensemble method known as bootstrap aggregation to increase classification performance, a convolutional neural network (CNN) implemented using the MobileNet architecture in TensorFlow, and a memory-based classifier (MBC), which also uses bagging to increase performance. As expected, the CNN significantly outperformed the BPA. Surprisingly, the performance of the MBC was only slightly below that of the CNN. We discuss these results and their implications for this and other similar applications.
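A minimal sketch of the bootstrap aggregation idea behind the BPA, using scikit-learn's bagging wrapper around a linear perceptron; the ensemble size, feature dimensions, and toy data below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import Perceptron

# Toy stand-in data: feature vectors for candidate chips, vehicle/clutter labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Bagging trains each perceptron on a bootstrap resample of the training set
# and combines their votes, reducing the variance of a single unstable
# linear classifier.
bpa = BaggingClassifier(Perceptron(max_iter=1000), n_estimators=25)
bpa.fit(X, y)
print(bpa.score(X, y))
```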
Vector shoreline (VSL) data is potentially useful in ATR systems that distinguish between objects on land or water. Unfortunately, available data such as the NOAA 1:250,000 World Vector Shoreline and NGA Prototype Global Shoreline data cannot be used by themselves to make a land/water determination because of the manner in which the data are compiled. We describe a data fusion approach for creating labeled VSL data that uses test points from Global 30 Arc-Second Elevation (GTOPO30) data to determine the direction of vector segments, i.e., whether they are in clockwise or counterclockwise order. We show that consistently labeled VSL data can be used to easily determine whether a point is on land or water using a vector cross product test.
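A minimal sketch of the cross product test, assuming shoreline segments have been consistently ordered so that land lies to the left of each directed segment; the function names and the nearest-segment strategy are illustrative, not from the paper:

```python
def cross(ax, ay, bx, by):
    """2-D scalar cross product a x b."""
    return ax * by - ay * bx

def is_on_land(px, py, segments):
    """Classify point (px, py) against consistently oriented shoreline segments.

    segments: list of ((x1, y1), (x2, y2)) tuples, ordered so that land is on
    the left of the direction of travel (assumed convention).
    """
    def dist2(seg):
        # Squared distance from the query point to a segment.
        (x1, y1), (x2, y2) = seg
        dx, dy = x2 - x1, y2 - y1
        t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) /
                               (dx * dx + dy * dy)))
        return (px - (x1 + t * dx)) ** 2 + (py - (y1 + t * dy)) ** 2

    # Test against the nearest shoreline segment.
    (x1, y1), (x2, y2) = min(segments, key=dist2)
    # Positive cross product => point is left of the segment => land.
    return cross(x2 - x1, y2 - y1, px - x1, py - y1) > 0.0
```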
Automatic target recognition (ATR) systems, like human photo-interpreters, rely on a variety of
visual information for detecting, classifying, and identifying manmade objects in aerial imagery.
We describe the integration of a visual learning component into the Image Data Conditioner
(IDC) for target/clutter and other visual classification tasks. The component is based on an
implementation of a model of the visual cortex developed by Serre, Wolf, and Poggio. Visual
learning in an ATR context requires the ability to recognize objects independent of location,
scale, and rotation. Our method uses IDC to extract, rotate, and scale image chips at candidate
target locations. A bootstrap learning method effectively extends the operation of the classifier
beyond the training set and provides a measure of confidence. We show how the classifier can
be used to learn other features that are difficult to compute from imagery such as target
direction, and to assess the performance of the visual learning process itself.
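A minimal sketch of the chip extraction and normalization step, assuming OpenCV and a detection given as a pixel center, orientation, and length; the function name, chip size, and fill fraction are illustrative assumptions:

```python
import cv2

def extract_chip(image, cx, cy, angle_deg, target_len_px, chip_size=128):
    """Cut out a rotation- and scale-normalized chip around a candidate target.

    Rotates the image so the estimated target axis is horizontal, scales so
    the target spans a fixed fraction of the chip, then crops chip_size pixels.
    """
    scale = (0.8 * chip_size) / float(target_len_px)  # target fills ~80% of chip
    # Single affine transform: rotate about (cx, cy) and scale in one step.
    M = cv2.getRotationMatrix2D((cx, cy), angle_deg, scale)
    # Shift so the target center lands at the chip center.
    M[0, 2] += chip_size / 2.0 - cx
    M[1, 2] += chip_size / 2.0 - cy
    return cv2.warpAffine(image, M, (chip_size, chip_size),
                          flags=cv2.INTER_LINEAR, borderValue=0)
```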
A video data conditioner (VDC) for automated full-motion video (FMV) detection, classification, and tracking is described. VDC extends our multi-stage image data conditioner (IDC) to video. Key features include robust detection of compact objects in motion imagery, coarse classification of all detections, and tracking of fixed and moving objects. An implementation of the detection and tracking components of the VDC on an Apple iPhone is discussed. Preliminary tracking results of naval ships captured during the Phoenix Express 2009 Photo Exercise are presented.
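A minimal sketch of detecting compact moving objects by frame differencing, assuming OpenCV; the threshold and size limits are illustrative placeholders, not values from the paper:

```python
import cv2

def detect_moving_objects(prev_gray, curr_gray, diff_thresh=25,
                          min_area=20, max_area=2000):
    """Return bounding boxes of compact regions that changed between frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)       # per-pixel frame difference
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,  # suppress speckle noise
                            cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # Keep only compact detections within a plausible size range.
    return [cv2.boundingRect(c) for c in contours
            if min_area <= cv2.contourArea(c) <= max_area]
```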
KEYWORDS: Global Positioning System, Cameras, Magnetometers, Visualization, Mobile devices, Overlay metrology, Sensors, Geographic information systems, 3D visualizations, Image display
Locative Viewing is a method for visualizing geographically-referenced 3-D objects in the local coordinate system of a geographically-referenced observer. A computer-graphics rendering of nearby geo-objects is superimposed over the visual surroundings of the observer as seen by a camera, and this rendering changes as the observer moves. Locative viewing can be accomplished with a mobile device that 1) is able to determine its geographic location and orientation, 2) contains a camera and image display, and 3) can project geo-objects into the field of view of the camera and overlay them on the camera image. A preliminary implementation of a locative viewer using Apple's iPhone is described and results are presented.
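A minimal sketch of the projection step, assuming a pinhole camera model, a local flat-earth (ENU) approximation, and illustrative parameter names; a real implementation would use the device's GPS, compass, and accelerometer readings, and none of this code is from the paper:

```python
import numpy as np

R_EARTH = 6371000.0  # mean Earth radius, meters

def geo_to_enu(lat, lon, alt, obs_lat, obs_lon, obs_alt):
    """Flat-earth approximation: geo coordinates -> local east/north/up (m)."""
    east = np.radians(lon - obs_lon) * R_EARTH * np.cos(np.radians(obs_lat))
    north = np.radians(lat - obs_lat) * R_EARTH
    return np.array([east, north, alt - obs_alt])

def project_to_screen(enu, heading_deg, pitch_deg, f_px, cx, cy):
    """Project an ENU point into pixel coordinates for a camera with the
    given compass heading and pitch (roll omitted for brevity)."""
    h, p = np.radians(heading_deg), np.radians(pitch_deg)
    # Rotate world ENU into camera axes: x right, y down, z forward.
    fwd = np.array([np.sin(h) * np.cos(p), np.cos(h) * np.cos(p), np.sin(p)])
    right = np.array([np.cos(h), -np.sin(h), 0.0])
    down = np.cross(fwd, right)
    cam = np.array([right @ enu, down @ enu, fwd @ enu])
    if cam[2] <= 0:                       # behind the camera: not drawn
        return None
    return (cx + f_px * cam[0] / cam[2],  # pinhole projection
            cy + f_px * cam[1] / cam[2])
```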
The automatic detection and classification of manmade objects in overhead imagery is key to generating
geospatial intelligence (GEOINT) from today's high space-time bandwidth sensors in a timely manner. A
flexible multi-stage object detection and classification capability known as the IMINT Data Conditioner
(IDC) has been developed that can exploit different kinds of imagery using a mission-specific processing
chain. A front-end data reader/tiler converts standard imagery products into a set of tiles for processing,
which facilitates parallel processing on multiprocessor/multithreaded systems. The first stage of processing
contains a suite of object detectors designed to exploit different sensor modalities that locate and chip out
candidate object regions. The second processing stage segments object regions, estimates their length, width,
and pose, and determines their geographic location. The third stage classifies detections into one of K
predetermined object classes (specified in a models file) plus clutter. Detections are scored based on their
salience, size/shape, and spatial-spectral properties. Detection reports can be output in a number of popular
formats including flat files, HTML web pages, and KML files for display in Google Maps or Google Earth.
Several examples illustrating the operation and performance of the IDC on Quickbird, GeoEye, and DCS
SAR imagery are presented.
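A minimal skeleton of such a multi-stage chain, with a generator-based tiler and parallel tile processing; the stage functions here are hypothetical stand-ins, not the IDC algorithms:

```python
from concurrent.futures import ProcessPoolExecutor

# Placeholder stage functions: illustrative stand-ins, not the IDC algorithms.
def detect(tile):        # stage 1: return candidate (x, y) object locations
    return []

def segment(tile, xy):   # stage 2: estimate length, width, pose
    return 0.0, 0.0, 0.0

def classify(tile, xy):  # stage 3: one of K classes plus clutter, with score
    return "clutter", 0.0

def tile_image(image, tile_size=1024, overlap=64):
    """Front-end reader/tiler: yield overlapping (x, y, tile) windows so
    objects straddling tile borders are not missed."""
    step = tile_size - overlap
    h, w = image.shape[:2]
    for y in range(0, h, step):
        for x in range(0, w, step):
            yield x, y, image[y:y + tile_size, x:x + tile_size]

def process_tile(args):
    """Run one tile through the three stages; report in image coordinates."""
    x0, y0, tile = args
    reports = []
    for (x, y) in detect(tile):
        length, width, pose = segment(tile, (x, y))
        label, score = classify(tile, (x, y))
        reports.append((x0 + x, y0 + y, length, width, pose, label, score))
    return reports

def run_chain(image):
    """Tile the image and process tiles in parallel, as the tiler enables."""
    with ProcessPoolExecutor() as pool:
        return [r for reports in pool.map(process_tile, tile_image(image))
                for r in reports]
```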
A system is described for predicting the location and movement of ground vehicles over road networks using a combination of vehicle motion models, context, and network flow analysis. Preliminary results obtained over simulated ground vehicle movement scenarios demonstrate the ability to accurately predict candidate time-critical target (TCT) locations under move-stop-move and other typical vehicle behaviors. Limitations of current models are discussed and extensions proposed.
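A minimal sketch of one ingredient, predicting the set of road-network nodes a vehicle could have reached within a time horizon, assuming the networkx library and an edge length attribute; this is an illustrative reachability bound, not the paper's motion models:

```python
import networkx as nx

def reachable_locations(road_graph, last_node, elapsed_s, max_speed_mps):
    """Candidate vehicle locations: all nodes whose shortest travel time from
    the last observed node fits within the elapsed time.

    road_graph: nx.Graph with a 'length_m' attribute (meters) on each edge.
    """
    # Convert edge lengths to minimum travel times at the speed bound.
    for u, v, data in road_graph.edges(data=True):
        data["t_min"] = data["length_m"] / max_speed_mps
    # Dijkstra with a cutoff prunes everything beyond the time horizon.
    times = nx.single_source_dijkstra_path_length(
        road_graph, last_node, cutoff=elapsed_s, weight="t_min")
    return set(times)  # move-stop-move behavior only shrinks this set
```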
The use of information theoretics within fusion and tracking represents an interesting approach to assessing optimal track fusion performance. This paper explores the use of information-theoretic measures, namely the Kullback-Leibler divergence, to improve on the track assignment problem.
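For reference, the Kullback-Leibler divergence between two Gaussian track state estimates $\mathcal{N}(\mu_0, \Sigma_0)$ and $\mathcal{N}(\mu_1, \Sigma_1)$ in $k$ dimensions has the standard closed form on which such an assignment cost could be built:

$$
D_{\mathrm{KL}}\!\left(\mathcal{N}_0 \,\|\, \mathcal{N}_1\right)
= \frac{1}{2}\left[\operatorname{tr}\!\left(\Sigma_1^{-1}\Sigma_0\right)
+ (\mu_1-\mu_0)^{\mathsf T}\Sigma_1^{-1}(\mu_1-\mu_0)
- k + \ln\frac{\det\Sigma_1}{\det\Sigma_0}\right]
$$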
The automatic detection of significant changes in imagery is important in a number of intelligence, surveillance, and reconnaissance (ISR) tasks. An automated capability known as the Order of Battle Change Fusion (OBCF) system is described for detecting, fusing, and tracking changes over time in multi-sensor imagery. OBCF uses: multiple change detection algorithms to exploit different aspects of change in multi-sensor images; normalcy models that provide a physical basis for detecting change and estimating the performance of change detection algorithms; algorithm fusion to combine the results from multiple change detection algorithms in order to enhance and maintain performance over changing operating conditions; and stationary tracking to provide a seamless history of image changes over time across different sensing modalities. Preliminary experimental results using electro-optical (EO) and synthetic aperture radar (SAR) imagery are presented.
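A minimal sketch of the stationary-tracking idea, associating change detections at fixed geographic locations across imaging passes; the gate distance and data layout are illustrative assumptions, not the OBCF design:

```python
import math

GATE_M = 30.0  # association gate: detections within 30 m share a site (assumed)

def update_stationary_tracks(tracks, detections, timestamp, sensor):
    """Associate new change detections with existing stationary tracks.

    tracks: list of dicts {'lat', 'lon', 'history': [(time, sensor), ...]}
    detections: list of (lat, lon) change locations from any modality.
    """
    for lat, lon in detections:
        best = None
        for trk in tracks:
            # Approximate ground distance in meters (small-angle flat earth).
            dn = (lat - trk["lat"]) * 111_320.0
            de = (lon - trk["lon"]) * 111_320.0 * math.cos(math.radians(lat))
            d = math.hypot(dn, de)
            if d < GATE_M and (best is None or d < best[0]):
                best = (d, trk)
        if best:                               # extend an existing history
            best[1]["history"].append((timestamp, sensor))
        else:                                  # start a new stationary track
            tracks.append({"lat": lat, "lon": lon,
                           "history": [(timestamp, sensor)]})
    return tracks
```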
Adaptive decision fusion represents a unique addition to the ATR community's interest in wide area surveillance. Isolating targets from non-targets before they reach an ATR processing algorithm can significantly reduce subsequent ATR processing burdens. As the volume of imagery increases from diverse new sensor systems, adaptive methods will be required to reduce early-stage false alarms to levels that can be handled by more computationally intensive down-stream processing. Change detection algorithms solve part of the problem by reducing false alarms, but the mapping transformation from image space to change space also induces a new set of false reports. The Adaptive Multi-Image Decision Fusion process provides a basis for fusing and interpreting these change events and 'bundling' them together in a feature set so that they can be dealt with by a feature-based classifier. The decision-level fusion uses only features provided by the component change detection modules. This acts as the first stage of screening to determine which sensor's and which algorithm's output should be fused, and adaptively determines the corresponding optimal fusion rule. A complete set of fusion rules is examined for the two-detector case using collected SAR imagery, and theoretical considerations are discussed for the three-detector case. Each rule compares the relative performance from each change detection algorithm. The system determines the quality of each report with respect to the level of clutter and determines the representative fusion rule. Examples are provided.
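A minimal sketch of the complete rule set for the two-detector case: each fusion rule is one of the 2^(2^2) = 16 Boolean functions of the two detector decisions, and the best rule can be chosen empirically; the Pd-minus-Pfa scoring criterion below is an illustrative placeholder, not the paper's:

```python
from itertools import product

def all_two_detector_rules():
    """Enumerate all 16 Boolean fusion rules f(d1, d2) -> {0, 1}.

    Each rule is defined by its truth table over the four input combinations;
    familiar rules like AND, OR, and 'detector 1 only' all appear in this set.
    """
    inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
    for outputs in product([0, 1], repeat=4):
        table = dict(zip(inputs, outputs))
        yield lambda d1, d2, t=table: t[(d1, d2)]

def best_rule(decisions, labels):
    """Pick the rule maximizing detection rate minus false alarm rate.

    decisions: list of (d1, d2) detector outputs; labels: ground truth 0/1.
    """
    def score(rule):
        hits = sum(rule(d1, d2) and y for (d1, d2), y in zip(decisions, labels))
        fas = sum(rule(d1, d2) and not y
                  for (d1, d2), y in zip(decisions, labels))
        pos = max(1, sum(labels))
        neg = max(1, len(labels) - sum(labels))
        return hits / pos - fas / neg
    return max(all_two_detector_rules(), key=score)
```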