Atmospheric fogs create degraded visual environments, making it difficult to recover optical information from our surroundings. We have developed a low size, weight, and power (low-SWaP) technique that characterizes these environments using an f-theta lens to capture the angular scattering profile of a pencil beam passed through fog. These measurements are then compared to data taken in tandem by conventional characterization techniques (optical transmission, bulk scattering coefficient, etc.). We present this angular scattering measurement as a low-SWaP alternative to current degraded visual environment characterization techniques, providing real-time data for use with signal recovery algorithms.
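The defining property of an f-theta lens is that a ray at field angle theta lands at radial position r = f * theta on the focal plane, so detector pixels map linearly to scattering angle. A minimal sketch of that mapping and the angular binning is below; the focal length, pixel pitch, and frame data are illustrative assumptions, not values from the abstract:

```python
import numpy as np

# Illustrative parameters (assumptions, not from the paper). An f-theta lens
# maps a ray at field angle theta to radial position r = f * theta.
f_mm = 100.0           # assumed f-theta focal length [mm]
pixel_pitch_mm = 5e-3  # assumed detector pixel pitch [mm]

def pixel_radius_to_angle(r_pix):
    """Convert radial pixel distance from the beam axis to scattering angle [rad]."""
    return (r_pix * pixel_pitch_mm) / f_mm

def angular_profile(frame, center, n_bins=200):
    """Bin a detector frame into a mean-intensity-vs-angle scattering profile."""
    yy, xx = np.indices(frame.shape)
    r_pix = np.hypot(yy - center[0], xx - center[1])
    theta = pixel_radius_to_angle(r_pix)
    bins = np.linspace(0.0, theta.max(), n_bins + 1)
    total, _ = np.histogram(theta, bins=bins, weights=frame)
    n_px, _ = np.histogram(theta, bins=bins)
    return 0.5 * (bins[:-1] + bins[1:]), total / np.maximum(n_px, 1)

frame = np.random.poisson(50, size=(512, 512)).astype(float)  # stand-in frame
theta, profile = angular_profile(frame, center=(256, 256))
```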
Atmospheric fog is a common degraded visual environment (DVE) that reduces sensing and imaging range and resolution in complex ways not fully captured by traditional metrics. Better physical models are therefore required to describe imaging systems in a fog environment. We have developed a tabletop fog chamber capable of creating repeatable fog-like environments for controlled experiments on optical systems within this common DVE. We present measurements of transmission coefficients and droplet size distributions in a multiple-scattering regime using this chamber.
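For a homogeneous path, the transmission coefficient follows the Beer-Lambert law, T = exp(-mu_ext * L). A minimal sketch of recovering the extinction coefficient from a two-reading transmission measurement is below; the path length and detector readings are illustrative placeholders:

```python
import numpy as np

# Beer-Lambert law for a homogeneous path: T = I / I0 = exp(-mu_ext * L).
# The numbers below are illustrative, not measurements from the chamber.
L_m = 3.0   # assumed chamber path length [m]
I0 = 1.00   # detector reading with no fog [arb. units]
I = 0.41    # detector reading through fog [arb. units]

T = I / I0                    # transmission coefficient
mu_ext = -np.log(T) / L_m     # extinction coefficient [1/m]
tau = mu_ext * L_m            # dimensionless optical depth

print(f"T = {T:.2f}, mu_ext = {mu_ext:.3f} 1/m, tau = {tau:.2f}")
```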
Degraded visual environments like fog pose a major challenge to safety and security because light is scattered by suspended micrometer-scale water droplets. We show that by interpreting the scattered light it is possible to detect, localize, and characterize objects normally hidden in fog. First, a computationally efficient light transport model is presented that accounts for the light reflected and blocked by an opaque object. Then, statistical detection at a specified false alarm rate is demonstrated using the Neyman-Pearson lemma. Finally, object localization and characterization are implemented using maximum likelihood estimation. These capabilities are being tested at the Sandia National Laboratories Fog Chamber Facility.
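Per the Neyman-Pearson lemma, the most powerful test at a given false alarm rate thresholds the likelihood ratio, with the threshold set by the null distribution of the test statistic. A minimal sketch under an assumed Gaussian mean-shift model (the paper's light transport model would supply the actual statistics) is shown here; for this model the likelihood-ratio test reduces to thresholding the measurement itself:

```python
import numpy as np
from scipy import stats

# Assumed Gaussian model: H0 (fog only): x ~ N(mu0, sigma^2);
# H1 (object present): x ~ N(mu1, sigma^2). The LRT reduces to x > eta.
mu0, mu1, sigma = 0.0, 1.0, 1.0
P_fa = 1e-3  # specified false alarm rate

# Choose eta so that P(x > eta | H0) = P_fa.
eta = mu0 + sigma * stats.norm.isf(P_fa)

# Detection probability implied by this separation.
P_d = stats.norm.sf((eta - mu1) / sigma)
print(f"threshold = {eta:.3f}, P_d = {P_d:.3f} at P_fa = {P_fa:g}")
```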
Identification of vegetation species and type is important in many chemical, biological, radiological, nuclear, and explosive sensing applications. For instance, the emergence of non-climax species in an area may be indicative of anthropogenic activity, which can complement prompt signatures for underground nuclear explosion detection and localization. To explore signatures of underground nuclear explosions, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surrounds. These data consist of 274 visible and near-infrared wavebands over 4.3 km² of high desert terrain, along with high spatial resolution (2.5 cm) RGB context imagery. Previous work has shown that a vegetation spectral derivative can be more indicative of species than the measured value of each band. However, applying a spectral derivative amplifies any noise in the spectrum, which reduces the benefit of the derivative analysis. Fitting the spectra with a polynomial can provide the slope information (the derivative) without amplifying noise. In this work, we simultaneously capture slope and curvature information and reduce the dimensionality of remotely sensed hyperspectral imaging data by employing a second-order polynomial fit across spectral bands of interest. We then compare the classification accuracy of a support vector machine classifier fit to the polynomial dimensionality reduction against the same support vector machine fit to an equal number of components from principal component analysis.
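A minimal sketch of the polynomial dimensionality reduction follows, assuming a single global fit across all bands (the paper fits over selected bands of interest); the data, labels, and class count are synthetic stand-ins. Each spectrum collapses to three coefficients, curvature, slope, and offset, which carry the derivative information without per-band noise amplification:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical hyperspectral cube flattened to (n_pixels, n_bands) spectra.
rng = np.random.default_rng(0)
n_pixels, n_bands = 1000, 274
spectra = rng.random((n_pixels, n_bands))
labels = rng.integers(0, 3, n_pixels)  # stand-in species labels

# Fit a second-order polynomial to each spectrum: 274 bands -> 3 features.
x = np.linspace(-1.0, 1.0, n_bands)          # normalized band index
coeffs = np.polyfit(x, spectra.T, deg=2).T   # shape (n_pixels, 3)

# SVM classifier on the polynomial features, as in the comparison described.
clf = SVC(kernel="rbf").fit(coeffs, labels)
```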
This communication reports progress towards the development of computational sensing and imaging methods that utilize highly scattered light to extract information at greater depths in degraded visual environments like fog for improved situational awareness. As light propagates through fog, information is lost to random scattering and absorption by micrometer-sized water droplets. Computational diffuse optical imaging shows promise for interpreting the detected scattered light, enabling greater depth penetration than current methods. Developing this capability requires verification and validation of diffusion models of light propagation in fog. We report on diffusion models that were developed and compared to experimental data captured at the Sandia National Laboratories Fog Chamber Facility. The diffusion approximation to the radiative transfer equation was found to predict light propagation in fog under the appropriate conditions.
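For reference, the steady-state diffusion approximation takes the standard form below (written from the general diffuse-optics literature, not transcribed from the paper), with fluence rate Phi, absorption coefficient mu_a, reduced scattering coefficient mu_s', and source term S; the infinite-medium point-source solution is also shown:

```latex
% Steady-state diffusion approximation to the radiative transfer equation:
\nabla \cdot \bigl( D \,\nabla \Phi(\mathbf{r}) \bigr)
  - \mu_a \,\Phi(\mathbf{r}) + S(\mathbf{r}) = 0,
\qquad D = \frac{1}{3\left(\mu_a + \mu_s'\right)}

% Infinite-medium point-source solution with effective attenuation coefficient:
\Phi(r) = \frac{P}{4 \pi D r}\, e^{-\mu_{\mathrm{eff}}\, r},
\qquad \mu_{\mathrm{eff}} = \sqrt{3\,\mu_a\left(\mu_a + \mu_s'\right)}
```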
Degraded visual environments cause problems for surveillance systems and other sensors due to reductions in contrast, range, and signal. Fog is a particular concern because of how frequently it forms along our coastlines, disrupting border security and surveillance. Sandia has created a Fog Facility for the characterization and testing of optical and other systems. We will present a comparison of our generated fogs to measurements of naturally occurring fogs reported in the literature, along with an overview of Sandia's work using this facility to investigate ways to enhance perception through degraded visual environments.
Several factors should be considered for robust terrain classification. We address the issue of high pixel-wise variability within terrain classes in remote sensing modalities with spatial resolution finer than one meter. Our proposed method segments an image into superpixels, makes terrain classification decisions on the pixels within each superpixel using the probabilistic feature fusion (PFF) classifier, then makes a superpixel-level terrain classification decision by majority vote of the pixels within the superpixel. We show that this method leads to improved terrain classification decisions. We demonstrate our method on optical, hyperspectral, and polarimetric synthetic aperture radar data.
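A minimal sketch of the superpixel majority-vote step is below, using SLIC segmentation as the superpixel generator; the image, per-pixel labels, and segment count are synthetic stand-ins, and the per-pixel decisions would come from the PFF classifier described in the paper:

```python
import numpy as np
from skimage.segmentation import slic

# Stand-in inputs: an RGB image and per-pixel class decisions (e.g. from PFF).
rng = np.random.default_rng(0)
image = rng.random((256, 256, 3))
pixel_labels = rng.integers(0, 4, (256, 256))

# Segment the image into superpixels.
segments = slic(image, n_segments=500, compactness=10.0)

# Majority vote of the pixel-level decisions within each superpixel.
sp_labels = np.zeros_like(pixel_labels)
for sp in np.unique(segments):
    mask = segments == sp
    votes = np.bincount(pixel_labels[mask])
    sp_labels[mask] = votes.argmax()
```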
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers. However, machine learning classification algorithms do not require the same data representation used by humans. We investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and another architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and trade-offs of these systems built for compressed classification of the Modified National Institute of Standards and Technology dataset. Both architectures achieve classification accuracies within 3% of the optimized sensing matrix for compression ranging from 98.85% to 99.87%. In the presence of noise, the performance of the systems with 98.85% compression fell between that of F/2 and F/4 imaging systems.
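The computational core of compressed classification is multiplying each vectorized scene by a short, wide sensing matrix and classifying the resulting few measurements; optically, each row of that matrix is what one prism/ND-filter pair realizes. A minimal sketch is below, substituting a random Gaussian matrix and the small sklearn digits set for the paper's optimized matrix and MNIST, and at far milder compression than the paper's 98.85-99.87%:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in data: 8x8 digit images flattened to 64-element vectors.
X, y = load_digits(return_X_y=True)
rng = np.random.default_rng(0)

m = 8                                  # measurements per image
A = rng.normal(size=(m, X.shape[1]))   # sensing matrix (random stand-in)
Z = X @ A.T                            # compressive measurements

# Classify directly on the compressed measurements, never reconstructing.
Xtr, Xte, ytr, yte = train_test_split(Z, y, random_state=0)
clf = LogisticRegression(max_iter=2000).fit(Xtr, ytr)
print(f"accuracy with {m}/{X.shape[1]} measurements: {clf.score(Xte, yte):.3f}")
```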
We investigate the feasibility of additively manufacturing optical components to accomplish task-specific classification in a computational imaging device. We report on the design, fabrication, and characterization of a non-traditional optical element that physically realizes an extremely compressed, optimized sensing matrix. The compression is achieved by designing an optical element that samples only the regions of object space most relevant to classification, as determined by machine learning algorithms. The design process for the proposed optical element converts the optimal sensing matrix to a refractive surface composed of a minimized set of non-repeating, unique prisms. The optical elements are 3D printed using a Nanoscribe, which employs two-photon polymerization for high-precision printing. We describe the design of several computational imaging prototype elements and characterize the as-fabricated components, including their surface topography, surface roughness, and prism facet angles.
The detection, location, and identification of suspected underground nuclear explosions (UNEs) are global security priorities that rely on integrated analysis of multiple data modalities to reduce uncertainty in event analysis. Vegetation disturbances may provide complementary signatures that confirm or build on the observables produced by prompt sensing techniques such as seismic or radionuclide monitoring networks. For instance, the emergence of non-native species in an area may be indicative of anthropogenic activity, or changes in vegetation health may reflect changes in site conditions resulting from an underground explosion. Previously, we collected high spatial resolution (10 cm) hyperspectral data from an unmanned aerial system at a legacy underground nuclear explosion test site and its surrounds. These data consist of visible and near-infrared wavebands over 4.3 km² of high desert terrain, along with high spatial resolution (2.5 cm) RGB context imagery. In this work, we employ various spectral detection and classification algorithms to identify and map vegetation species in an area of interest containing the legacy test site. We employ a frequentist framework for fusing multiple spectral detections across reference spectra captured at different times and sampled from multiple locations. The spatial distribution of vegetation species is compared to the location of the underground nuclear explosion. We find a difference in species abundance within a 130 m radius of the center of the test site.
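One common frequentist device for fusing detections against multiple reference spectra is to combine their per-detection p-values with Fisher's method; the sketch below shows that combination, with the caveat that the paper's specific framework may differ and the p-values shown are invented placeholders:

```python
import numpy as np
from scipy import stats

def fisher_fusion(p_values):
    """Fuse independent p-values via Fisher's method:
    -2 * sum(log p_i) ~ chi^2 with 2k degrees of freedom under the null."""
    p = np.asarray(p_values, dtype=float)
    statistic = -2.0 * np.log(p).sum()
    return stats.chi2.sf(statistic, df=2 * p.size)

# e.g. one pixel's detections against 4 reference spectra captured at
# different times (placeholder values):
p_fused = fisher_fusion([0.04, 0.20, 0.01, 0.08])
print(f"fused p-value: {p_fused:.4f}")
```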
This paper attempts to quantify thermal infrared (both longwave and midwave), shortwave infrared, and visible-light sensor performance under different test-chamber fogs. We find that the performance of LWIR imaging is impacted significantly less by light-to-moderate fog than that of the other two infrared sensors and the visible imager. The paper recommends additional fog chamber tests that will be useful for developing an imaging simulation capability that accurately models fog across these wavebands.
Fog is a commonly occurring degraded visual environment that disrupts air traffic, ground traffic, and security imaging systems. For many applications of interest, spatial resolution is required to identify elements of the scene. However, studying the effects of fog on resolution degradation is difficult because the composition of naturally occurring fogs is variable, and data collection is reliant on changing weather conditions. For our study, we used the Sandia National Laboratories fog facility to generate repeatable, characterized fog conditions. Sandia's well-characterized fog generation allowed us to relate the resolution degradation of active and passive long-wave infrared (LWIR) imagers to the properties of the fog. Additionally, the fogs we generated were denser than naturally occurring fogs, which allowed long-range imaging to be emulated over the shorter optical path lengths obtainable in a laboratory environment (see the note below).
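The emulation claim rests on optical-depth equivalence: attenuation depends on the product of the extinction coefficient and path length, so a denser fog over a short laboratory path reproduces the optical depth, and hence the transmission, of a lighter fog over a long outdoor path. In symbols (standard Beer-Lambert reasoning, not transcribed from the paper):

```latex
\tau = \sigma_{\mathrm{ext}}\, L
\quad\Longrightarrow\quad
\sigma_{\mathrm{dense}}\, L_{\mathrm{lab}} = \sigma_{\mathrm{light}}\, L_{\mathrm{field}}
\;\;\text{gives the same transmission}\;\; T = e^{-\tau}
```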
In this presentation, we experimentally investigate the resolution degradation of LWIR wavelengths for realistic fog droplet sizes. Transmission of LWIR wavelengths has been studied extensively in the literature; to date, however, there are few experimental results quantifying the resolution degradation of LWIR imagery in fog. We present experimental results on resolution degradation for both passive and active LWIR systems. The degradation of passive imaging was measured using a 37 °C blackbody with slant-edge resolution targets. The active imaging resolution degradation was measured using a polarized CO2 laser reflecting off a set of bar targets. We found that the relationship between meteorological optical range and resolution degradation was more complicated than can be described purely by attenuation.
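For context, meteorological optical range (MOR) is tied to the fog's extinction coefficient through the Koschmieder relation at the standard 5% contrast threshold; this standard definition is given below for reference and is not a result from the presentation:

```latex
% MOR: path length over which transmission falls to the 5% contrast threshold.
e^{-\sigma_{\mathrm{ext}} \cdot \mathrm{MOR}} = 0.05
\quad\Longrightarrow\quad
\mathrm{MOR} = \frac{\ln 20}{\sigma_{\mathrm{ext}}}
             \approx \frac{3.0}{\sigma_{\mathrm{ext}}}
```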
Heavy fogs and other highly scattering environments pose a challenge for many commercial and national security sensing systems. Current autonomous systems rely on a range of optical sensors for guidance and remote sensing that can be degraded by highly scattering environments. In our previous and ongoing simulation work, we have shown that polarized light can increase signal or range through a scattering environment such as fog. Specifically, we have shown that circularly polarized light maintains its polarization signal through a larger number of scattering events, and thus over longer ranges, than linearly polarized light. In this work we present design and testing results for active polarization imagers at short-wave infrared and visible wavelengths. We explore multiple polarimetric configurations for the imager, focusing on linear and circular polarization states. Testing of the imager was performed at the Sandia Fog Facility, a 180 ft. by 10 ft. chamber that can create fog-like conditions for optical testing. This facility offers a repeatable fog scattering environment ideally suited to testing the imager's performance in fog conditions. We show that circular polarization imagers penetrate fog better than linear polarization imagers.
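A minimal sketch of how an active polarization imager's raw channel intensities reduce to Stokes parameters and degrees of linear/circular polarization is below; the particular four-analyzer measurement set (0 deg, 90 deg, 45 deg, right circular) is an assumption, not the paper's configuration:

```python
import numpy as np

def stokes(I0, I90, I45, IR):
    """Stokes parameters from four assumed analyzer measurements:
    linear 0/90/45 degrees and right circular (sign convention dependent)."""
    S0 = I0 + I90
    S1 = I0 - I90
    S2 = 2.0 * I45 - S0
    S3 = 2.0 * IR - S0
    return S0, S1, S2, S3

def degrees_of_polarization(S0, S1, S2, S3):
    dolp = np.hypot(S1, S2) / S0   # degree of linear polarization
    docp = np.abs(S3) / S0         # degree of circular polarization
    return dolp, docp

# Example: fully right-circular light (I0 = I90 = I45 = 0.5, IR = 1.0)
# yields dolp = 0 and docp = 1.
dolp, docp = degrees_of_polarization(*stokes(0.5, 0.5, 0.5, 1.0))
```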
Many optical systems are used for specific tasks such as classification. Of these systems, the majority are designed to maximize image quality for human observers; however, machine learning classification algorithms do not require the same data representation used by humans. In this work we investigate compressive optical systems optimized for a specific machine sensing task. Two compressive optical architectures are examined: an array of prisms and neutral density filters, where each prism and neutral density filter pair realizes one datum from an optimized compressive sensing matrix, and another architecture using conventional optics to image the aperture onto the detector, a prism array to divide the aperture, and a pixelated attenuation mask in the intermediate image plane. We discuss the design, simulation, and tradeoffs of these compressive imaging systems built for compressed classification of the MNIST data set. To evaluate the tradeoffs of the two architectures, we present radiometric and raytrace models for each system. Additionally, we investigate the impact of system aberrations on the classification accuracy of each system. We compare the performance of these systems over a range of compression levels. Classification performance, radiometric throughput, and optical design manufacturability are discussed.
The scattering of light in fog is a complex problem that affects imaging in many ways. Typically, imaging device performance in fog is attributed solely to reduced visibility, measured as light extinction from scattering events. We present a quantitative analysis of resolution degradation in the long-wave infrared regime. Our analysis is based on calculating the modulation transfer function from the edge response of a slant-edge blackbody target in known fog conditions. We show that higher spatial frequencies attenuate more than lower spatial frequencies as fog thickness increases. These results demonstrate that image blurring, in addition to extinction, contributes to the degraded performance of imaging devices in fog environments.
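The MTF-from-edge computation differentiates the edge spread function (ESF) to get the line spread function (LSF), then takes the magnitude of its Fourier transform. A simplified 1-D sketch is below; a full slant-edge analysis (e.g. ISO 12233) first projects pixels along the detected edge to build a super-resolved ESF, a step omitted here, and the edge profile shown is synthetic:

```python
import numpy as np
from scipy.special import erf

def mtf_from_esf(esf, sample_pitch=1.0):
    """ESF -> LSF (derivative) -> |FFT| -> MTF, normalized to unity at DC."""
    lsf = np.gradient(esf, sample_pitch)  # differentiate the edge response
    lsf = lsf * np.hanning(lsf.size)      # taper to suppress edge noise
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch)
    return freqs, mtf / mtf[0]

# Synthetic Gaussian-blurred edge standing in for a measured blackbody edge.
x = np.linspace(-5.0, 5.0, 256)
esf = 0.5 * (1.0 + erf(x / np.sqrt(2.0)))
freqs, mtf = mtf_from_esf(esf, sample_pitch=x[1] - x[0])
```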
Compact snapshot imaging polarimeters using polarization gratings have been demonstrated in the literature to provide Stokes parameter estimates for spatially varying scenes. However, the demonstrated system does not employ aggressive modulation frequencies that would take full advantage of the bandwidth available to the focal plane array. We describe such a snapshot imaging Stokes polarimeter and demonstrate it through simulation results. The simulation studies the challenges of using a maximum-bandwidth configuration for a snapshot polarization-grating-based polarimeter, such as the fringe contrast attenuation that results from higher modulation frequencies. Similar simulation results are generated and compared for a microgrid polarimeter. Microgrid polarimeters, another type of spatially modulated polarimeter, superimpose pixelated polarizers onto a focal plane array; the most common design uses a 2x2 superpixel of polarizers, which maximally uses the available bandwidth of the focal plane array.
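A minimal sketch of demodulating a 2x2 microgrid superpixel into the linear Stokes parameters is below; the assumed polarizer layout is a common convention but is instrument specific, and the raw frame is a synthetic stand-in:

```python
import numpy as np

def demodulate_microgrid(raw):
    """Demodulate a 2x2 microgrid frame into linear Stokes parameters.
    Assumed superpixel layout: [[0 deg, 45 deg], [135 deg, 90 deg]]."""
    I000 = raw[0::2, 0::2]
    I045 = raw[0::2, 1::2]
    I135 = raw[1::2, 0::2]
    I090 = raw[1::2, 1::2]
    S0 = 0.5 * (I000 + I045 + I090 + I135)
    S1 = I000 - I090
    S2 = I045 - I135
    return S0, S1, S2

raw = np.random.default_rng(0).random((512, 512))  # stand-in raw frame
S0, S1, S2 = demodulate_microgrid(raw)
```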
Ground-based, low-cost, uncooled infrared imagers are specially calibrated and deployed for long-term measurements of spatial and temporal cloud statistics. Measurements of cloud optical depth are shown for thin clouds and validated with a dual-polarization cloud lidar. Good agreement is achieved for thin clouds having a 550-nm optical depth of 3 or less.