Unmanned aerial vehicles (UAVs) capture real-time video data of military targets while keeping the warfighter at a safe
distance. This keeps soldiers out of harm's way during intelligence, surveillance, and reconnaissance (ISR) missions and
close-air-support troops-in-contact (CAS-TIC) situations. The military also wants to use UAV video to achieve force
multiplication. One way to achieve effective force multiplication is to field numerous camera-equipped UAVs and have a
single operator process multiple videos simultaneously. However, monitoring multiple video streams
is difficult for operators when the videos are of low quality. To address this challenge, we researched several promising
video enhancement algorithms that focus on improving video quality. In this paper, we discuss our video enhancement
suite and provide examples of video enhancement capabilities, focusing on stabilization, dehazing, and denoising. We
provide results that show the effects of our enhancement algorithms on target detection and tracking algorithms. These
results indicate that there is potential to assist the operator in identifying and tracking relevant targets with aided target
recognition even on difficult video, increasing the force multiplier effect of UAVs. This work also forms the basis for
human factors research into the effects of enhancement algorithms on ISR missions.
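To make the enhancement operations concrete, the following minimal Python/OpenCV sketch illustrates two of the three operations the abstract names, denoising and stabilization. It is an illustrative assumption, not the authors' enhancement suite; the functions, parameter values, and the simple translational stabilization model are placeholders.

```python
# Hedged sketch of per-frame denoising and simple translational stabilization
# (illustrative only; not the paper's enhancement suite).
import cv2
import numpy as np

def denoise_frame(frame):
    """Non-local-means denoising of a single color frame."""
    return cv2.fastNlMeansDenoisingColored(frame, None, 10, 10, 7, 21)

def stabilize(prev_gray, cur_gray, cur_frame):
    """Cancel the median frame-to-frame translation estimated from sparse optical flow."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)
    good_prev, good_next = pts[status == 1], nxt[status == 1]
    dx, dy = np.median(good_next - good_prev, axis=0)
    M = np.float32([[1, 0, -dx], [0, 1, -dy]])  # undo the estimated camera shift
    h, w = cur_frame.shape[:2]
    return cv2.warpAffine(cur_frame, M, (w, h))
```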
In this paper, we present our initial findings demonstrating a cost-effective approach to Aided Target Recognition (ATR)
employing a swarm of inexpensive Unmanned Aerial Vehicles (UAVs). We call our approach Distributed ATR (DATR).
Our paper describes the utility of DATR for autonomous UAV operations, provides an overview of our methods, and presents the
results of our initial simulation-based implementation and feasibility study. Our technology is aimed at small and
micro UAVs where platform restrictions allow only a modest quality camera and limited on-board computational
capabilities. An inexpensive sensor coupled with limited processing capability is, on its own, hard-pressed to achieve
a high probability of detection (Pd) while maintaining a low probability of false alarm (Pfa). Our hypothesis
is that an evidential reasoning approach to fusing the observations of multiple UAVs viewing approximately the same
scene can raise the Pd and lower the Pfa sufficiently to provide a cost-effective ATR capability. This capability
can lead to practical implementations of autonomous, coordinated, multi-UAV operations.
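As a back-of-the-envelope illustration of the hypothesis, the sketch below uses a simple k-of-n voting rule with an independence assumption; this is not the paper's evidential-reasoning method, and the single-sensor numbers are illustrative, but it shows how fusing several modest sensors can raise Pd while lowering Pfa.

```python
# Simplified illustration: require at least k of n independent observers to agree.
from math import comb

def fused_prob(p_single: float, n: int, k: int) -> float:
    """P(at least k of n independent observers report a detection)."""
    return sum(comb(n, i) * p_single**i * (1 - p_single)**(n - i)
               for i in range(k, n + 1))

pd_single, pfa_single = 0.70, 0.10            # one low-cost UAV sensor (assumed)
pd_fused = fused_prob(pd_single, n=3, k=2)    # ~0.78: detection probability rises
pfa_fused = fused_prob(pfa_single, n=3, k=2)  # ~0.03: false-alarm probability drops
print(pd_fused, pfa_fused)
```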
In our system, the live video feed from a UAV is processed by a lightweight real-time ATR algorithm. This algorithm
provides a set of possible classifications for each detected object over a possibility space defined by a set of exemplars.
The classifications for each frame within a short observation interval (a few seconds) are used to generate a belief
statement. Our system considers how many frames in the observation interval support each potential classification. A
definable function transforms the observational data into a belief value. The belief value, or opinion, represents the
UAV's belief that an object of the particular class exists in the area covered during the observation interval. The opinion
is submitted as evidence in an evidential reasoning system. Opinions from observations over the same spatial area will
have similar index values in the evidence cache. The evidential reasoning system combines observations with similar
spatial indexes, discounting older observations according to a parameterized information-aging function. We employ
Subjective Logic operations in the discounting and combination of opinions. The result is the consensus opinion from all
observations that an object of a given class exists in a given region.
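The following sketch shows one plausible reading of the opinion-forming and fusion steps using standard Subjective Logic definitions (belief, disbelief, uncertainty, cumulative/consensus fusion). The prior weight W, the exponential aging half-life, and the helper names are assumptions for illustration, not the paper's parameters or implementation.

```python
# Hedged sketch: frame counts -> Subjective Logic opinion -> aging discount -> consensus.
from dataclasses import dataclass

W = 2.0  # non-informative prior weight (standard Subjective Logic default; assumed here)

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float  # belief + disbelief + uncertainty == 1

def opinion_from_frames(supporting: int, total: int) -> Opinion:
    """Map frame counts in one observation interval to an opinion."""
    r, s = supporting, total - supporting
    denom = r + s + W
    return Opinion(r / denom, s / denom, W / denom)

def discount(op: Opinion, age_s: float, half_life_s: float = 30.0) -> Opinion:
    """Age an opinion: push belief/disbelief mass back into uncertainty."""
    g = 0.5 ** (age_s / half_life_s)
    return Opinion(g * op.belief, g * op.disbelief,
                   1.0 - g * (op.belief + op.disbelief))

def consensus(a: Opinion, b: Opinion) -> Opinion:
    """Cumulative (consensus) fusion of two independent opinions about the same region."""
    k = a.uncertainty + b.uncertainty - a.uncertainty * b.uncertainty
    belief = (a.belief * b.uncertainty + b.belief * a.uncertainty) / k
    uncertainty = (a.uncertainty * b.uncertainty) / k
    return Opinion(belief, 1.0 - belief - uncertainty, uncertainty)

# Two UAVs observing the same spatial cell: 18/24 and 9/30 supporting frames,
# with the second observation 45 s old.
o1 = opinion_from_frames(18, 24)
o2 = discount(opinion_from_frames(9, 30), age_s=45.0)
print(consensus(o1, o2))
```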
KEYWORDS: Data modeling, Data fusion, Probability theory, Information fusion, Systems modeling, Computing systems, Computer architecture, Logic, Mathematical modeling, Prototyping
This paper presents a reasoning system that pools the judgments from a set of inference agents with information
from heterogeneous sources to generate a consensus opinion that reduces uncertainty and improves knowledge
quality. The system, called Collective Agents Interpolation Integral (CAII), addresses a high-level data fusion
problem by combining, in a mathematically sound manner, multiple inference models within a knowledge-intensive,
multi-agent architecture. CAII addresses two major issues. One is the ability of the inference mechanisms
to deal with hybrid data inputs from multiple information sources and map the diverse data sets to a uniform
representation in an objective space of reasoning and integration. The other is the ability of the system
architecture to allow the continuous and discrete outputs of a diverse set of inference agents to interact, cooperate,
and integrate.
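As a hedged illustration of these two issues (not CAII's actual mechanism), the sketch below maps a discrete classification and a continuous score vector onto one shared hypothesis space and combines them with a simple weighted linear opinion pool; the hypothesis names, mapping functions, and pooling rule are assumptions made for the example.

```python
# Illustrative mapping of heterogeneous agent outputs to one representation, then pooling.
import numpy as np

HYPOTHESES = ["vehicle", "person", "clutter"]  # assumed shared objective space

def from_discrete(label: str, confidence: float) -> np.ndarray:
    """Map a discrete classification to a distribution over the hypotheses."""
    p = np.full(len(HYPOTHESES), (1.0 - confidence) / (len(HYPOTHESES) - 1))
    p[HYPOTHESES.index(label)] = confidence
    return p

def from_continuous(scores: dict) -> np.ndarray:
    """Map continuous per-hypothesis scores to a normalized distribution."""
    v = np.array([max(scores.get(h, 0.0), 1e-9) for h in HYPOTHESES])
    return v / v.sum()

def linear_pool(dists, weights) -> np.ndarray:
    """Consensus as a weighted linear opinion pool over agent distributions."""
    w = np.asarray(weights, dtype=float)
    return (w[:, None] * np.vstack(dists)).sum(axis=0) / w.sum()

# One agent reports a discrete label, another a continuous score vector.
d1 = from_discrete("vehicle", confidence=0.8)
d2 = from_continuous({"vehicle": 2.5, "person": 0.7, "clutter": 0.3})
print(linear_pool([d1, d2], weights=[0.6, 0.4]))
```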