KEYWORDS: Sensors, Electric field sensors, Acoustics, Weapons, Firearms, Signal to noise ratio, Detection and tracking algorithms, Signal processing, Environmental sensing, Signal detection
Research and experimental trials have shown that electric-field (E-field) sensors are effective at detecting charged projectiles. E-field sensors can likely complement traditional acoustic sensors and help provide a more robust and effective solution for bullet detection and tracking. The acoustic sensor is by far the most prevalent technology in use today for hostile-fire-defeat systems because of its compact size and low cost, yet acoustic systems face a number of challenges, including multipath, reverberant environments, false positives, and low signal-to-noise ratio. Studies have shown that these systems can benefit from additional sensor modalities such as E-field sensors. However, E-field sensing is a newer technology that remains relatively untested beyond basic experimental trials and has not been deployed in any fielded system. The U.S. Army Research Laboratory (ARL) has conducted live-fire experiments at Aberdeen Proving Ground (APG) to collect data from E-field sensors. Three types of E-field sensors were included in these experiments: (a) an electric potential gradiometer manufactured by Quasar Federal Systems (QFS), (b) electric charge induction, or "D-dot," sensors designed and built by ARL, and (c) a varactor-based E-field sensor prototype designed by the University of North Carolina at Charlotte (UNCC). Sensors were placed in strategic locations near the bullet trajectories, and their data were recorded. We analyzed each E-field sensor type with respect to small-arms bullet detection capability. The most recent experiment, in October 2013, allowed demonstration of improved versions of the varactor and D-dot sensor types. New real-time analysis hardware employing detection algorithms was also tested; the algorithms processed the raw data streams to determine when bullet detections occurred.
Performance among the sensor types and algorithm effectiveness were compared to estimates from acoustics signatures and known ground truth. Results, techniques and configurations that might work best for a given sensor platform are discussed.
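As a hedged illustration of the kind of real-time detection algorithm described above, the sketch below flags candidate bullet events in a raw E-field stream using an adaptive amplitude threshold. It is not the ARL algorithm; the function name, the 5-sigma threshold, and the 10 ms debounce window are assumptions for illustration only.

```python
import numpy as np

def detect_bullet_events(samples, fs, threshold_sigma=5.0, window_s=0.01):
    """Flag candidate bullet passages in a raw E-field stream (illustrative).

    A robust estimate of the noise floor sets an adaptive threshold;
    samples exceeding it by `threshold_sigma` standard deviations are
    grouped into events no closer than `window_s` seconds apart.
    """
    x = np.asarray(samples, dtype=float)
    x = x - np.median(x)                      # remove DC offset
    noise = 1.4826 * np.median(np.abs(x))     # robust sigma estimate (MAD)
    hits = np.flatnonzero(np.abs(x) > threshold_sigma * noise)
    events, last = [], -np.inf
    for i in hits:
        if (i - last) / fs > window_s:        # debounce closely spaced hits
            events.append(i / fs)             # event time in seconds
        last = i
    return events
```

A median-based noise estimate keeps the threshold stable even when a strong transient is present in the analysis window, which a plain standard deviation would not.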
Existing acoustic-based Gunfire Detection Systems (GDS), such as soldier-wearable, vehicle-mounted, and fixed-site devices, provide enemy detection and localization capabilities to the user. However, a solution to the portability-versus-performance tradeoff remains elusive. The Data Fusion Module (DFM), described herein, is a sensor- and platform-agnostic software supplement that addresses this tradeoff by leveraging existing soldier networks to enhance GDS performance across a Tactical Combat Unit (TCU). The DFM software combines all available acoustic GDS information across the TCU to calculate highly accurate solutions more consistently than any individual GDS in the TCU. The networked sensor architecture also provides capabilities that address multiple-shooter and firefight problems in addition to sniper detection and localization. The fusion solution adds zero to negligible Size, Weight, Power, and Cost (SWaP&C). At the end of the first-year effort, the DFM-integrated sensor network showed improvements upwards of 50% in comparison to a single-sensor solution. Further improvements are expected when the networked sensor architecture created in this effort is fully exploited.
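One simple way a fusion module could combine networked GDS outputs is a least-squares intersection of bearing-only reports from the individual sensors. The sketch below is illustrative, not the actual DFM algorithm; the function name and the 2-D bearing-line formulation are assumptions.

```python
import numpy as np

def fuse_bearings(positions, bearings_rad):
    """Least-squares shooter position from several bearing-only sensors.

    Each sensor at `positions[i]` reports a bearing `bearings_rad[i]`
    (radians, measured from the +x axis). The fused estimate minimizes
    the summed squared perpendicular distance to every bearing line.
    """
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, th in zip(np.asarray(positions, dtype=float), bearings_rad):
        n = np.array([-np.sin(th), np.cos(th)])  # normal to the bearing line
        P = np.outer(n, n)                        # projector onto that normal
        A += P
        b += P @ p
    return np.linalg.solve(A, b)                  # fused (x, y) estimate
```

With noisy bearings the same formulation degrades gracefully: each added sensor simply contributes one more term to the normal equations, which is consistent with the abstract's point that accuracy improves as more networked GDS reports become available.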
Limited autonomous behaviors are fast becoming a critical capability in the field of robotics as robotic applications move into more complicated and interactive environments. As additional sensory capabilities are added to robotic platforms, sensor fusion to enhance and facilitate autonomous behavior becomes increasingly important. Using biology as a model, we create the equivalent of a vestibular system to orient the platform within its environment and to enable multi-modal sensor fusion.
In mammals, the vestibular system plays a central role in physiological homeostasis and sensory information integration
(Fuller et al, Neuroscience 129 (2004) 461-471). At the level of the Superior Colliculus in the brain, there is multimodal
sensory integration across visual, auditory, somatosensory, and vestibular inputs (Wallace et al, J Neurophysiol 80
(1998) 1006-1010), with the vestibular component contributing a strong reference-frame gating input. Using a simple model of the deep layers of the Superior Colliculus, an off-the-shelf 3-axis solid-state gyroscope and accelerometer were used as the equivalent representation of the vestibular system. The acceleration and rotation measurements are used to determine the relationship between the local reference frame of a robotic platform (an iRobot Packbot®) and the inertial reference frame (the outside world), with the simulated vestibular input tightly coupled to the acoustic and optical inputs. Field testing of the robotic platform, using acoustics to cue optical sensors coupled through a biomimetic vestibular model for "slew-to-cue" gunfire detection, has shown great promise.
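A minimal sketch of the frame bookkeeping described above: gyroscope rates are integrated into a body-to-world rotation matrix (via Rodrigues' formula), and an acoustic bearing measured in the robot frame is rotated into the world frame for slew-to-cue. This is an illustrative simplification, not the fielded model; accelerometer-based gravity correction and gyro bias handling are omitted, and all names are assumptions.

```python
import numpy as np

def skew(w):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    wx, wy, wz = w
    return np.array([[0.0, -wz, wy],
                     [wz, 0.0, -wx],
                     [-wy, wx, 0.0]])

def integrate_gyro(R, omega_body, dt):
    """Propagate the body-to-world rotation R by one gyro sample.

    Uses the matrix exponential of the small rotation omega*dt
    (Rodrigues' formula), which keeps R orthonormal, unlike a raw
    Euler step on the matrix entries.
    """
    rate = np.linalg.norm(omega_body)
    theta = rate * dt
    if theta < 1e-12:
        return R
    k = skew(np.asarray(omega_body, dtype=float) / rate)
    dR = np.eye(3) + np.sin(theta) * k + (1.0 - np.cos(theta)) * (k @ k)
    return R @ dR

def cue_world_bearing(R, bearing_body):
    """Rotate an acoustic bearing from the robot frame to the world frame."""
    return R @ np.asarray(bearing_body, dtype=float)
```

The world-frame bearing is what a pan-tilt optical sensor would be slewed toward, regardless of how the platform has rotated since the acoustic event.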
KEYWORDS: Digital signal processing, Analog electronics, Nerve, Neurons, Signal processing, Field programmable gate arrays, Acoustics, Mirrors, Computing systems, Electronics
We are developing low-power microcircuitry that implements classification and direction finding systems of very small
size and small acoustic aperture. Our approach was inspired by the fact that small mammals are able to localize sounds
even though their ears may be separated by as little as a centimeter. Gerbils, in particular, are good low-frequency localizers; low-frequency localization is a particularly difficult task, since a wavelength at 500 Hz is on the order of two feet. Given such signals, cross-correlation-based methods of determining direction fail badly in the presence of a small amount of noise, e.g., wind noise
and noise clutter common to almost any realistic environment. Circuits are being developed using both analog and digital techniques, each of which processes signals in fundamentally the same way the mammalian peripheral auditory system processes sound. A filter bank represents the filtering done by the cochlea. The auditory nerve is implemented
using a combination of an envelope detector, an automatic gain stage, and a unique one-bit A/D, which creates what
amounts to a neural impulse. These impulses are used to extract pitch characteristics, which we use to classify sounds
such as vehicles, small and large weaponry from AK-47s to 155mm cannon, including mortar launches and impacts. In
addition to the pitchograms, we also use neural nets for classification.
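The filter-bank/envelope/one-bit-A/D chain can be caricatured in a few lines. The sketch below is a loose, assumed approximation of a single channel, not the actual circuit: a second-order resonator stands in for one cochlear filter, a rectified one-pole smoother for the envelope detector, and a rising-edge adaptive threshold for the spiking one-bit A/D. All names and constants are illustrative.

```python
import numpy as np

def resonator(x, f0, fs, r=0.99):
    """Narrow-band second-order resonator, a crude cochlear channel."""
    w0 = 2.0 * np.pi * f0 / fs
    b0 = 1.0 - r                     # rough gain normalization
    y = np.zeros_like(x)
    for n in range(2, len(x)):
        y[n] = b0 * x[n] + 2.0 * r * np.cos(w0) * y[n - 1] - r * r * y[n - 2]
    return y

def spike_train(x, fs, f0, tau=0.005, rel_thresh=0.5):
    """One channel of the auditory model: filter, envelope, one-bit A/D.

    The envelope is a rectified, low-pass-smoothed channel output; a
    spike (1) is emitted on each rising crossing of an adaptive threshold
    set at `rel_thresh` times a slowly decaying running peak, mimicking
    a neural impulse from the one-bit A/D stage.
    """
    y = resonator(np.asarray(x, dtype=float), f0, fs)
    alpha = np.exp(-1.0 / (tau * fs))
    env = np.zeros_like(y)
    peak = 1e-12
    spikes = np.zeros(len(y), dtype=int)
    above = False
    for n in range(1, len(y)):
        env[n] = alpha * env[n - 1] + (1.0 - alpha) * abs(y[n])  # envelope detector
        peak = max(peak * 0.9999, env[n])                        # slow, AGC-like peak track
        th = rel_thresh * peak
        if env[n] > th and not above:
            spikes[n] = 1                                        # one-bit "neural impulse"
            above = True
        elif env[n] < th:
            above = False
    return spikes
```

Spike timing across many such channels is what a pitch-extraction stage (the pitchogram) would consume downstream.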
Robots are rapidly becoming integral tools on the battlefield and in homeland security, replacing humans in
hazardous conditions. To enhance the effectiveness of robotic assets and their interaction with human operators, smart
sensors are required to give more autonomous function to robotic platforms. Biologically inspired sensors are an
essential part of this development of autonomous behavior and can increase both capability and performance of robotic
systems.
Smart, biologically inspired acoustic sensors have the potential to extend autonomous capabilities of robotic
platforms to include sniper detection, vehicle tracking, personnel detection, and general acoustic monitoring. The key to
enabling these capabilities is biomimetic acoustic processing using a time domain processing method based on the neural
structures of the mammalian auditory system. These biologically inspired algorithms replicate the extremely adaptive
processing of the auditory system yielding high sensitivity over broad dynamic range. The algorithms provide
tremendous robustness in noisy and echoic spaces, properties necessary for autonomous function in real-world acoustic
environments. These biomimetic acoustic algorithms also provide highly accurate localization of both persistent and
transient sounds over a wide frequency range, using baselines on the order of only inches.
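For a sense of the geometry behind small-baseline localization, the far-field relation below converts an inter-microphone arrival-time difference into a bearing. This is textbook acoustics included only for context, not the biomimetic algorithm itself; the function name and the 343 m/s sound speed are assumptions.

```python
import numpy as np

def bearing_from_delay(dt_s, baseline_m, c=343.0):
    """Bearing of a far-field source from the arrival-time difference
    at two microphones.

    For a distant source, the path-length difference is c*dt, so the
    angle off the array broadside satisfies sin(theta) = c*dt/baseline.
    Returns the bearing in degrees.
    """
    s = np.clip(c * dt_s / baseline_m, -1.0, 1.0)  # guard against |s| > 1 from noise
    return float(np.degrees(np.arcsin(s)))
```

At a 6-inch baseline, one degree of bearing corresponds to well under 10 microseconds of delay, which illustrates why the timing precision of the spike-based processing matters.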
A specialized smart sensor has been developed to interface with an iRobot Packbot® platform specifically to
enhance its autonomous behaviors in response to personnel and gunfire. The low power, highly parallel biomimetic
processor, in conjunction with a biomimetic vestibular system (discussed in the companion paper), has shown the
system's autonomous response to gunfire in complicated acoustic environments to be highly effective.
KEYWORDS: Digital signal processing, Analog electronics, Neurons, Transistors, Biomimetics, Field programmable gate arrays, Nerve, Algorithm development, Signal processing, Signal to noise ratio
Biomimetic signal processing that is functionally similar to that performed by the mammalian peripheral auditory system
consists of several stages. The concatenated stages of the system each favor differing types of hardware
implementations. Ideally, the front end would be an implementation of the mammalian cochlea, which is a tapered, nonlinear, traveling-wave amplifier. It is not a good candidate for standard digital implementations. The AM
demodulator can be implemented using digital or analog designs. The Automatic Gain Control (AGC) stage is highly
unusual. It requires filtering and multiplication in a closed-loop configuration, with bias added at each of two
concatenated stages. Its implementation is problematic in DSP, FPGA, full custom digital VLSI, and analog VLSI. The
one-bit A/D (also called the "spiking neuron"), while simple at face value, involves a complicated triggering mechanism; it is amenable to DSP, FPGA, and custom digital implementations but computationally intensive there, and is well suited to an analog VLSI implementation.
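To make the closed-loop AGC description concrete, here is a minimal two-stage sketch: each stage multiplies the signal by a gain taken from its own feedback path, low-pass filters the output magnitude (the loop filter), and adds a bias that bounds the gain during silence. This is an assumed caricature of the architecture, not the actual circuit; the time constant and bias values are illustrative.

```python
import numpy as np

def agc_stage(x, fs, tau=0.01, bias=1e-2):
    """One closed-loop AGC stage.

    The gain is the reciprocal of `level`, a low-pass-filtered copy of
    the stage's own output magnitude, so the loop is closed around the
    multiplier; `bias` keeps `level` away from zero, capping the gain
    when the input is silent.
    """
    x = np.asarray(x, dtype=float)
    alpha = np.exp(-1.0 / (tau * fs))
    level = bias
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = x[n] / level                                        # multiplication stage
        level = alpha * level + (1.0 - alpha) * (bias + abs(y[n]))  # loop filter + bias
    return y

def two_stage_agc(x, fs):
    """Two concatenated AGC stages, as in the described architecture."""
    return agc_stage(agc_stage(x, fs), fs)
```

Each stage compresses amplitude roughly as a square root in steady state, so two concatenated stages map a 100:1 input range to only a few-to-one output range, which is the wide-dynamic-range behavior the text attributes to the auditory model.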
Currently, we have several hardware embodiments of the biomimetic system. The RedOwl application presently occupies about 160 cubic inches. A DSP approach can compute 15 channels for two ears for three A/D categories using Analog Devices TigerSHARC-201 DSP chips within a system size estimated to be on the order of 30 cubic inches. BioMimetic Systems, Inc., a Boston University startup company, is developing an FPGA solution. Within the university, we are also pursuing both a custom digital ASIC route and a current-mode analog ASIC.
This paper describes the flow of scientific and technological achievements beginning with a stationary "small, smart,
biomimetic acoustic processor" designed for DARPA that led to a program aimed at acoustic characterization and
direction finding for multiple mobile platforms. ARL support and collaboration have allowed us to adapt the core technology to multiple platforms, including a Packbot robotic platform, a soldier-worn platform, and a vehicle platform. Each of these has different size and power requirements, but miniaturization is an important component of the program for creating practical systems, which we address further in companion papers. We have configured the system to detect and localize gunfire and have tested system performance with live fire from numerous weapons, such as the AK-47, the Dragunov, and the AR-15. The ARL-sponsored work has led to connections with Natick Labs and the Future Force Warrior program; in addition, the work has many obvious applications to homeland defense, police, and civilian needs.