Computer vision has become crucial to autonomous systems, helping them navigate complex environments. Combining it with geospatial data provides the capability to geolocate the system when GPS is unavailable or untrusted. A test bed was built to characterize the visibility of radio and cellular towers from a ground vehicle across all atmospheric transmission bands. These targets are exemplary features because of their visibility over long distances and their surveyed geolocations. Contrast measurements of the targets were characterized and compared in each spectral window under different environmental conditions. Human-perception experiments were used to build NVIPM models that provided predictable range performance for each band.
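The contrast characterization described above can be illustrated with a minimal sketch. Here we use Weber contrast (target mean relative to background mean) as an assumed example metric; the paper's actual contrast definition and the inputs it feeds to NVIPM may differ.

```python
import numpy as np

def weber_contrast(target_pixels, background_pixels):
    """Weber contrast of a target region against its local background.

    Illustrative assumption: the actual metric used in the study
    (and how it feeds the NVIPM range model) is not specified here.
    """
    mu_t = np.mean(target_pixels)
    mu_b = np.mean(background_pixels)
    return (mu_t - mu_b) / mu_b

# Example: a tower region slightly brighter than the sky behind it
target = np.array([120.0, 125.0, 130.0])
background = np.array([100.0, 100.0, 100.0])
print(weber_contrast(target, background))  # 0.25
```

In practice such a value would be computed per spectral band from co-registered imagery, allowing side-by-side comparison of tower visibility across the atmospheric windows.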
Object detection, a critical task in computer vision, has been revolutionized by deep learning, especially convolutional neural networks (CNNs). These techniques are increasingly deployed in infrared imaging systems for long-range target detection, localization, and identification. Their performance is highly dependent on the training procedure, network architecture, and computing resources. In contrast, human-in-the-loop task performance can be reliably predicted using well-established models. Here we model the performance of a CNN developed for MWIR and LWIR sensors and compare it against human perception models. We focus on tower detection relevant to vision-based geolocation tasks, which present novel high-aspect-ratio, unresolved, low-clutter scenarios.
KEYWORDS: Sensors, Cameras, Global Positioning System, Long wavelength infrared, Panoramic photography, Short wave infrared radiation, Mid-IR, Near infrared, MATLAB, Algorithm development
We built a multispectral data collection system and vehicle testbed for experimentation on vision-based geolocation. The data collection system includes a gimballed mount with VIS, NIR, SWIR, MWIR, and LWIR sensors, allowing us to compare simultaneous imagery of features or targets across all the atmospheric bands. For geolocation experiments, the testbed is equipped with a dual-GPS inertial measurement unit for true location and orientation and a CAN bus interface to pull vehicle speedometer and odometer data. It is also outfitted with a dual-GPU, rugged edge computer used to control the system and collect data; the computer is pre-loaded with the geospatial data (maps, tower positions, elevation data, etc.) necessary for tracking targets of interest or performing real-time geolocation estimates. Sixty percent of the rear seat was replaced with an electronics rack, which also houses a 3 kW inverter providing power to all of the equipment. A weather-proof cable pass-through was installed in the roof of the truck, and a weather-proof enclosure provides wind and rain protection for the roof-mounted equipment. We present multispectral panoramic imagery of flat environments, where cellular towers provide ideal references for geolocation, and mountainous environments, where the landscape and horizon topography provide viable geo-references. We also present an overview of the data collection modes, calibration procedures, and the driving data sets collected to date.
The ability to reliably and accurately ascertain a vehicle’s position is imperative for military operations as well as civilian and commercial navigation systems. Due to the susceptibility of GPS signals to RF spoofing and jamming, alternative means of vehicle self-localization are garnering substantial interest. Vision-based methods are among the most promising in environments with sufficiently distinguishable features such as towers, high-rise structures, and prominent identifiable topographical features. Here, we present a localization approach exploiting multiple spectral bands to identify key prominent scene features and determine vehicle position relative to those features to calculate a global vehicle position and heading. We employ geometric dead-reckoning using visible and LWIR imagery to quantify positional accuracy that is achievable with these bands. We utilize image recognition algorithms to identify features and map these into useful parameters for position extraction, leveraging geospatial data when possible.
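The position-from-landmarks idea above can be sketched with the simplest case: intersecting bearing lines to two surveyed towers. The function name, the two-bearing formulation, and the local east/north frame are assumptions for illustration; the actual approach likely uses more features and a least-squares or dead-reckoning fusion.

```python
import numpy as np

def fix_from_two_bearings(l1, l2, brg1, brg2):
    """Estimate 2-D vehicle position from bearings (radians, clockwise
    from north) to two landmarks at known (east, north) coordinates.

    Illustrative sketch only: real systems would use more landmarks,
    handle near-parallel bearings, and fuse with odometry.
    """
    d1 = np.array([np.sin(brg1), np.cos(brg1)])  # unit vector toward landmark 1
    d2 = np.array([np.sin(brg2), np.cos(brg2)])  # unit vector toward landmark 2
    # Vehicle position p satisfies p + t1*d1 = l1 and p + t2*d2 = l2.
    # Subtracting gives a 2x2 linear system in the ranges t1, t2.
    A = np.column_stack((d1, -d2))
    t = np.linalg.solve(A, np.asarray(l1) - np.asarray(l2))
    return np.asarray(l1) - t[0] * d1

# Vehicle at the origin sees tower A due north (bearing 0) at (0, 1000)
# and tower B due east (bearing pi/2) at (1000, 0)
p = fix_from_two_bearings((0.0, 1000.0), (1000.0, 0.0), 0.0, np.pi / 2)
print(p)  # [0. 0.]
```

With a heading estimate from the IMU or from successive fixes, the same geometry yields both global position and orientation.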
The time-limited search model was developed to evaluate human search performance in military operations as a function of time; it originally used static imagery but was later expanded to accommodate moving-sensor situations. Previously, we introduced an application of this moving-sensor search model to optimize a forward-facing sensor's look-down angle for a given forward vehicle speed. In this work, we build on the optimization model, using coordinate transforms to accommodate sensors that may be pointed in any direction. This allows us to determine the probability of detection for a given target as a function of a more generalized camera pointing direction. While this methodology may be applied to any target of interest, such as road potholes, tanks, or IEDs, here we determine the probability of detection of a Burmese python against a grass background.
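The core of the time-limited search model is the probability that an observer has found the target by time t, commonly written P(t) = P_inf * (1 - exp(-(t - t0)/tau)). The sketch below is a minimal static-imagery version with made-up parameter values; the moving-sensor extension discussed above modifies how the parameters vary with camera pointing and vehicle motion.

```python
import math

def p_detect(t, p_inf, tau, t0=0.0):
    """Time-limited search: probability the target is found by time t.

    P(t) = p_inf * (1 - exp(-(t - t0) / tau))

    p_inf : asymptotic (infinite-time) detection probability
    tau   : search time constant
    t0    : delay before search begins

    Parameter values in the example are illustrative assumptions.
    """
    if t <= t0:
        return 0.0
    return p_inf * (1.0 - math.exp(-(t - t0) / tau))

# Example: asymptotic probability 0.9, time constant 2 s, at t = 4 s
print(round(p_detect(4.0, p_inf=0.9, tau=2.0), 3))  # 0.778
```

For a moving sensor, the time a target spends in the field of view is bounded by geometry and vehicle speed, so maximizing P(t) over pointing direction becomes the optimization described above.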
Compact, high-resolution, wide-angle infrared imaging systems are ideal for security and situational-awareness applications. Coherent fiber bundles (CFBs) provide a platform to relay curved images, formed by optical systems such as a monocentric lens, to flat focal plane arrays, decoupling image flattening from image formation. This enables a hundredfold increase in resolution per unit volume for wide-angle lenses. We present a hybrid chalcogenide/polymer CFB for mid-wave infrared (MWIR) image relay. We design the lens system and the AR coating for the bundle, and finally demonstrate a stack-and-draw process yielding 10 μm pitch CFBs with about 8,000 cores.