Persistent surveillance is an intricate process requiring monitoring, gathering, processing, tracking, and characterization of many spatiotemporal events occurring concurrently. Data associated with events can be readily obtained by networking hard (physical) sensors. Sensors may have homogeneous or heterogeneous (hybrid) sensing modalities with different communication bandwidth requirements. Complementary to hard sensors are human observers, or "soft sensors," who can report occurrences of evolving events via different communication devices (e.g., text messages, cell phones, emails, instant messaging) to the command and control center. However, networking human observers in an ad hoc way is a rather difficult task. In this paper, we present a Twitter web-service for soft agent reporting in persistent surveillance systems (called Web-STARS). The objective of this web-service is to rapidly aggregate multi-source human observations in hybrid sensor networks. With the availability of the Twitter social network, such a human networking concept can not only be realized for large-scale persistent surveillance systems (PSS), but can also be employed, with proper interfaces, to expedite rapid event reporting by human observers. The proposed technique is particularly suitable for large-scale persistent surveillance systems with distributed soft and hard sensor networks. The efficiency and effectiveness of the proposed technique are measured experimentally by conducting several simulated persistent surveillance scenarios. It is demonstrated that fusing information from hard and soft agents improves understanding of the common operating picture and enhances situational awareness.
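The abstract does not specify the Web-STARS interface, so the following is only a minimal illustrative sketch: it assumes a hypothetical SoftReport record and a simple aggregation of observer reports by grid cell and time window. All names here are placeholders, not the authors' API.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SoftReport:
    """Hypothetical human-observer report (placeholder fields, not the Web-STARS schema)."""
    observer_id: str
    timestamp: float   # seconds since epoch
    location: tuple    # (x, y) grid cell in the surveillance area
    text: str          # free-text observation, e.g., a short message or tweet

def aggregate_reports(reports, time_window=300.0):
    """Group soft-sensor reports by grid cell and time window so they can be
    fused with hard-sensor detections on the common operating picture."""
    buckets = defaultdict(list)
    for r in reports:
        key = (r.location, int(r.timestamp // time_window))
        buckets[key].append(r)
    # A cell reported by several independent observers receives a higher soft score.
    return {key: len({r.observer_id for r in rs}) for key, rs in buckets.items()}
```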
This paper presents a Multi-Layered Context Impact Modulation (MCIM) technique for persistent surveillance systems (PSS) and discusses its layered architecture for different context modulations, including spatial, temporal, sensor reliability, human presence, and environmental modulations. This paper also presents a fusion model for enhancement of focus of attention on the common operating picture (COP). The fusion model combines the impacts from the different MCIM layers into one unified modulated map. To test and evaluate the performance of MCIM, several experiments were conducted to modulate interactions of humans and vehicles exhibiting various normal and suspicious behaviors. The experimental results show the strength of this approach in correctly modulating different suspicious situations with a high degree of certainty.
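The abstract does not give the MCIM fusion rule itself; the sketch below assumes each modulation layer is represented as a 2D impact map over the COP grid and fuses the layers with a weighted sum. The weights and normalization are illustrative assumptions, not the paper's method.

```python
import numpy as np

def fuse_modulation_layers(layers, weights=None):
    """Combine per-layer impact maps (spatial, temporal, sensor reliability,
    human presence, environmental) into one unified modulated map.
    Illustrative weighted-sum fusion; the actual MCIM rule is not specified here."""
    layers = [np.asarray(m, dtype=float) for m in layers]
    if weights is None:
        weights = np.ones(len(layers)) / len(layers)
    fused = sum(w * m for w, m in zip(weights, layers))
    peak = fused.max()
    return fused / peak if peak > 0 else fused

# Example: five 100x100 impact maps fused into one focus-of-attention map.
maps = [np.random.rand(100, 100) for _ in range(5)]
cop_focus = fuse_modulation_layers(maps)
```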
This paper presents a distributed multi-modality sensor network concept for vehicle classification within the perimeter of a surveillance system. This perimeter surveillance concept represents a "Virtual RF Fence" consisting of remotely located electro-optic surveillance cameras and a standoff-range radar system. The perimeter surveillance system vigilantly monitors the field, and each time a vehicle crosses the virtual RF fence it instructs the surveillance cameras to actively monitor the vehicle's activity as it passes through the field. This paper describes the methodologies applied for processing the EO imagery data, including target vehicle segmentation from background, vehicle shadow elimination, vehicle feature vector generation, and a neural network approach for vehicle classification. A metric is also proposed for evaluating the performance of the vehicle classification technique.
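The abstract lists the EO processing steps without detail; the following is a minimal sketch of the first two stages under simplifying assumptions (a static background model, no shadow handling, and a simple geometric feature vector rather than the paper's actual feature set).

```python
import numpy as np

def segment_vehicle(frame, background, thresh=30):
    """Crude background subtraction: pixels differing strongly from the
    background model are treated as the vehicle blob (shadow elimination omitted)."""
    diff = np.abs(frame.astype(int) - background.astype(int))
    return (diff > thresh).astype(np.uint8)

def feature_vector(mask):
    """Simple geometric features of the segmented blob, intended only as an
    illustration of feature-vector generation: area, aspect ratio, fill ratio."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return np.zeros(3)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = float(len(xs))
    return np.array([area, w / h, area / (w * h)])
```

A feature vector like this would then be fed to a small neural network classifier trained on labeled vehicle examples, as described in the abstract.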
Target tracking for network surveillance systems has gained significant interest, especially in sensitive areas such as homeland security, battlefield intelligence, and facility surveillance. Most current sensor network protocols do not address the need for multi-sensor fusion-based target tracking schemes, which is crucial for the longevity of the sensor network. In this paper, we present an efficient fusion model for target tracking in cluster-based large-scale sensor networks. The new scheme is inspired by image processing: the sensor network is perceived as an energy map of sensor stimuli, and typical image processing operations such as filtering, convolution, clustering, and segmentation are applied to this map to achieve high-level perception and understanding of the situation. The new fusion model is called Soft Adaptive Fusion of Sensor Energies (SAFE). SAFE performs soft fusion of the energies collected by a local region of sensors in a large-scale sensor network. This local fusion is then transmitted by the head node to a base station to update the common operating picture with evolving events of interest. Simulated scenarios showed that SAFE is promising, demonstrating significant improvements in target tracking reliability, uncertainty reduction, and efficiency.
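To make the "sensor network as an energy image" idea concrete, here is a minimal sketch, assuming sensor readings are laid out on a regular grid; the filter choice and threshold are illustrative, not the SAFE algorithm.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def fuse_energy_map(energy_map, sigma=1.5, threshold=0.5):
    """Treat sensor stimuli arranged on a grid as an 'energy image': smooth it,
    threshold it, and segment connected high-energy regions as candidate targets."""
    smoothed = gaussian_filter(np.asarray(energy_map, dtype=float), sigma=sigma)
    mask = smoothed > threshold * smoothed.max()
    regions, n_targets = label(mask)   # connected-component segmentation
    return smoothed, regions, n_targets
```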
Rapidly advancing hardware technology, smart sensors, and sensor networks are advancing environment sensing. One major application of this technology is Large-Scale Surveillance Systems (LS3), especially for homeland security, battlefield intelligence, facility guarding, and other civilian applications. Efficient and effective deployment of LS3 requires addressing a number of aspects impacting the scalability of such systems. The scalability factors are related to: computation and memory utilization efficiency; communication bandwidth utilization; network topology (e.g., centralized, ad hoc, hierarchical, or hybrid); network communication protocols and data routing schemes; and local and global data/information fusion schemes for situational awareness. Although many models have been proposed to address one aspect or another of these issues, few have addressed the need for multi-modality, multi-agent data/information fusion that satisfies the requirements of current and future intelligent sensors and sensor networks. In this paper, we present a novel scalable fusion engine for multi-modality multi-agent information fusion for LS3. The new fusion engine is based on a concept we call Energy Logic. Experimental results of this work, compared with a fuzzy logic model, strongly support the validity of the new model and inspire future directions for different levels of fusion and different applications.
An intelligent robotic system can be distinguished from other machines by its ability to sense, learn, and react to its environment despite various task uncertainties. One of the most powerful sensing modalities for a robotic system is vision, as it enables the robot to see its environment, recognize objects around it, and interact with objects to accomplish its task. This paper discusses vision-enabling techniques that allow a robot to detect, characterize, classify, and discriminate UneXploded Ordnance (UXO) from clutter in unstructured environments. A soft-computing approach is proposed and validated via indoor and outdoor experiments to measure its efficiency and effectiveness in correctly detecting and classifying UXO vs. XO and other clutter. The proposed technique has many potential applications for military, homeland security, and law enforcement uses and, in particular, for environmental UXO remediation and clean-up operations.
KEYWORDS: RGB color model, Cameras, Calibration, Optical tracking, Intelligence systems, 3D modeling, Video surveillance, Image segmentation, Data fusion, Motion models
In this paper, we present two approaches addressing visual target tracking and localization in complex urban environments: fusion-based multi-target visual tracking, and multi-target localization via camera calibration. For multi-target tracking, the data fusion concepts of hypothesis generation/evaluation/selection, target-to-target registration, and association are employed. An association matrix is implemented using RGB histograms for associated tracking of multiple targets of interest. Motion segmentation of targets of interest (TOI) from the background was achieved by a Gaussian Mixture Model. Foreground segmentation, on the other hand, was achieved by the Connected Components Analysis (CCA) technique. The tracking of individual targets was estimated by fusing two sources of information: the centroid with spatial gating, and the RGB histogram association matrix. The localization problem is addressed through an effective camera calibration technique using edge modeling for grid mapping (EMGM). A two-stage image-pixel-to-world-coordinates mapping technique is introduced that performs coarse and fine location estimation of moving TOIs. In coarse estimation, an approximate neighborhood of the target position is estimated based on the nearest-4-neighbor method, and in fine estimation, Euclidean interpolation is used to localize the position within the estimated four neighbors. Both techniques were tested and showed reliable results for tracking and localization of targets of interest in complex urban environments.
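As an illustration of the RGB-histogram association step, here is a minimal sketch: it assumes target patches are given as H x W x 3 arrays and uses an L1 histogram distance as the association cost, which is a simplifying stand-in for the paper's association matrix.

```python
import numpy as np

def rgb_histogram(patch, bins=8):
    """Per-channel RGB histogram used as an appearance signature for a target patch."""
    hist = [np.histogram(patch[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
    h = np.concatenate(hist).astype(float)
    return h / (h.sum() + 1e-9)

def association_matrix(track_hists, detection_hists):
    """Cost matrix of histogram distances between existing tracks and new detections;
    a low cost suggests the detection belongs to that track."""
    return np.array([[np.linalg.norm(t - d, ord=1) for d in detection_hists]
                     for t in track_hists])
```

In a full tracker, this appearance cost would be fused with the centroid-plus-spatial-gating cue before assigning detections to tracks, as the abstract describes.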
Intelligent surveillance systems (ISS) have gained significant attention in recent years due to nationwide security concerns. Some of the important applications of ISS include homeland security, border monitoring, battlefield intelligence, and sensitive facility monitoring. The essential requirements of an ISS include: (1) multi-modality, multi-sensor data and information fusion, (2) communication networking, (3) distributed data/information processing, (4) automatic target recognition and tracking, (5) scenario profiling from discrete correlated/uncorrelated events, (6) context-based situation reasoning, and (7) collaborative resource sharing and decision support systems. In this paper we address the problem of human-posture classification in crowded urban terrain environments. A certain range of human postures can be attributed to different suspicious acts of intruders in a constrained environment. By proper temporal analysis of trespassers' postures in an environment, it is possible to identify and differentiate the malicious intentions of trespassers from other normal human behaviors. Specifically, we propose an image processing-based approach for characterization of five different human postures: standing, bending, crawling, carrying a heavy object, and holding a long object. Two approaches were introduced to address the problem: template-matching and Hamming Adaptive Neural Network (HANN) classifiers. The former performs human posture characterization via binary-profile projection and applies a correlation-based method for classification of human postures. The latter is based on a HANN technique; for training of the neural network, the posture patterns are initially compressed, thresholded, and serialized, and the resulting binary posture-pattern arrays are used to train the HANN. For comparative performance evaluation, the same set of training and testing examples was used to measure the effectiveness of both approaches in classifying the five classes of human posture patterns. This paper presents and discusses the results of this experimental work. Both approaches demonstrated very promising results.
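The binary-profile, correlation-based classifier can be sketched as follows; this assumes silhouettes and templates have been resampled to a common size, and the normalization and score are illustrative rather than the paper's exact formulation.

```python
import numpy as np

def binary_profile(silhouette):
    """Project a binary posture silhouette onto its rows and columns,
    producing a compact profile for correlation-based matching.
    Assumes all silhouettes are resampled to the same fixed size."""
    sil = (np.asarray(silhouette) > 0).astype(float)
    profile = np.concatenate([sil.sum(axis=0), sil.sum(axis=1)])
    return (profile - profile.mean()) / (profile.std() + 1e-9)

def classify_posture(silhouette, templates):
    """Return the posture class (e.g., 'standing', 'bending', 'crawling')
    whose template profile correlates best with the input silhouette."""
    p = binary_profile(silhouette)
    scores = {name: float(np.dot(p, binary_profile(t)) / len(p))
              for name, t in templates.items()}
    return max(scores, key=scores.get), scores
```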
There are many advantages of using acoustic sensor arrays to perform identification and classification of targets of interest in the battlefield. They are low cost and have relatively low power consumption. They require no line of sight and provide many capabilities for target detection, bearing estimation, target tracking, classification, and identification. Furthermore, they can provide cueing for other sensors, and multiple acoustic sensor responses can be combined and triangulated to localize an energy-source target in the field. In practice, however, many environmental noise, time-varying, and uncertainty factors affect their performance in detecting targets of interest reliably and accurately. In this paper, we propose a novel feature extraction approach for robust classification and identification of moving target vehicles that mitigates those factors. The approach is based on low-rank decomposition with the Lp norm. Using low-rank decomposition with the L1 norm (p = 1), dominant features of vehicle acoustic signatures can be extracted appropriately with respect to vehicle operational responses and used for robust identification and classification of target vehicles. The performance of the proposed approach has been evaluated on a set of experimental acoustic data from multiple vehicle test runs. It is demonstrated that the approach yields significantly improved results over our earlier vehicle classification technique based on Singular Value Decomposition (SVD) and reduces the uncertainties associated with classification of target vehicles from acoustic signatures at different operating speeds in the field.
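A low-rank-plus-sparse split with an L1 penalty can be illustrated with the simple alternating scheme below; this is only a sketch of the general idea, not the paper's algorithm, and the rank, penalty, and iteration count are arbitrary assumptions.

```python
import numpy as np

def lowrank_sparse_decompose(X, rank=2, lam=0.1, iters=50):
    """Illustrative low-rank + sparse split of a matrix of acoustic frames
    (rows = time frames, cols = spectral bins): L captures the dominant,
    slowly varying vehicle signature, S absorbs transient noise."""
    X = np.asarray(X, dtype=float)
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]            # rank-r approximation
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)   # L1 soft-thresholding
    return L, S
```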
Considerable interest has arisen in recent years in utilizing inexpensive acoustic sensors in the battlefield to perform identification and classification of targets of interest. There are many advantages of using acoustic sensor arrays. They are low cost and have relatively low power consumption. They require no line of sight and provide many capabilities for target detection, bearing estimation, target tracking, classification, and identification. Furthermore, they can provide cueing for other sensors, and multiple acoustic sensor responses can be combined and triangulated to localize an energy-source target in the field. In practice, however, many environmental noise factors affect their performance in detecting targets of interest reliably and accurately. In this paper, we propose a novel approach for detection, classification, and identification of moving target vehicles. The approach is based on Singular Value Decomposition (SVD) coupled with a Particle Filtering (PF) technique. Using SVD, dominant features of vehicle acoustic signatures are extracted efficiently. These feature vectors are then employed for robust identification and classification of target vehicles based on a particle filtering scheme. The performance of the proposed approach was evaluated on a set of experimental acoustic data from multiple vehicle test runs. It is demonstrated that the approach yields very promising results when an array of acoustic sensors is used to detect, identify, and classify target vehicles in the field.
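The SVD feature-extraction step can be sketched as follows; the nearest-signature matcher stands in for the particle filtering stage, which is beyond this minimal example, and the choice of k and the frame layout are assumptions.

```python
import numpy as np

def svd_signature(frames, k=3):
    """Extract dominant spectral components of a vehicle's acoustic frames
    (rows = time frames, cols = frequency bins) as an SVD-based feature vector."""
    _, s, Vt = np.linalg.svd(np.asarray(frames, dtype=float), full_matrices=False)
    return (s[:k, None] * Vt[:k]).ravel()   # top-k singular-value-weighted components

def classify(signature, class_signatures):
    """Nearest-signature classification over per-class reference signatures
    (all signatures must share the same frequency-bin layout)."""
    return min(class_signatures,
               key=lambda c: np.linalg.norm(signature - class_signatures[c]))
```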
In this paper, we address the problem of visual inspection, recognition, and discrimination of UXO based on computer vision techniques and introduce three complementary color, texture, and shape classifiers. The proposed technique initially enhances an image taken from a UXO site and removes the terrain background. Next, it applies a blob detector to detect the salient objects in the environment. The UXO classification begins with a perceptive color classifier that classifies the detected salient objects based on their color hues. The color classifier attempts to differentiate and classify the color of salient objects based on the color hue information of known UXO objects in the database. A color ranking scheme is applied to rank the color-hue likelihood of the salient objects in the environment. Next, an intuitive texture classifier is applied to characterize the surface texture of the salient objects. The texture signature is used to independently discriminate objects whose surface texture properties match a priori known UXO textures. Lastly, an intuitive object shape classifier is applied to independently arbitrate the classification of the UXO. Three soft computing methods were developed for robust decision fusion of the three UXO feature classifiers: a statistical genetic algorithm, a Hamming neural network, and a fuzzy logic algorithm. In this paper, we present the details of the UXO feature classifiers and discuss the performance of the three decision fusion methods for fusing results from the three UXO feature classifiers. The main contribution of this work is toward designing an ultimately fully automated tele-robotic system for UXO classification and decontamination.
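As a simple illustration of hue-based color ranking, the sketch below computes a dominant hue per object and ranks it against known UXO hues; the median-hue statistic and circular distance are illustrative assumptions, not the paper's perceptive color classifier.

```python
import colorsys
import numpy as np

def dominant_hue(rgb_pixels):
    """Median hue (0-1) of an object's pixels, given as (r, g, b) triples in 0-255."""
    hues = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0] for r, g, b in rgb_pixels]
    return float(np.median(hues))

def uxo_hue_distance(object_pixels, known_uxo_hues):
    """Distance from an object's dominant hue to the nearest known UXO hue
    (hue is circular, so wrap around 1.0); smaller means more UXO-like color."""
    obj_hue = dominant_hue(object_pixels)
    return min(min(abs(obj_hue - h), 1.0 - abs(obj_hue - h)) for h in known_uxo_hues)
```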