This PDF file contains the front matter associated with SPIE Proceedings Volume 11404, including the Title Page, Copyright information, and Table of Contents.
The emergence of machine learning into scientific fields has created opportunities for novel and powerful image processing techniques. Algorithms that can perform complex tasks without a “man-in-the-loop” or explicit instructions are invaluable artificial intelligence tools. These algorithms typically require a large set of training data on which to base statistical predictions. In the case of electro-optical infrared (EO/IR) remote sensing, algorithm designers often seek a substantial library of images comprising many weather conditions, times of day, sensor resolutions, etc. These images may be synthetic (predicted) or measured, but should encompass a large variety of targets imaged from a variety of vantage points against numerous backgrounds. Acquiring such a large set of measured imagery with sufficient variation can be difficult, requiring numerous field campaigns. Alternatively, accurate prediction of target signatures in cluttered outdoor scenes may be a viable option. In this work, sensor imagery is generated using CoTherm, a co-simulation tool which operates MuSES (an EO/IR simulation code) in an automated fashion to create a large library of synthetic images. The relevant MuSES inputs – which might include environmental factors, global location, date and time, vehicle engine state, human clothing and activity level, or sensor waveband – can be manipulated by a CoTherm workflow process. The output of this process is a large library of MuSES-generated EO/IR sensor radiance images suitable for algorithm development. If desired, synthetic target pixels can be inserted into measured background images for added realism.
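As a rough illustration of that final step (a minimal sketch with hypothetical array names, not the CoTherm/MuSES interface), synthetic target radiance pixels can be composited into a measured background image using a target mask:

import numpy as np

def insert_target(background, target_radiance, target_mask, row, col):
    """Composite simulated target radiance pixels into a measured background.

    background      : 2-D array of measured background radiance
    target_radiance : 2-D array of simulated target radiance (same units)
    target_mask     : 2-D boolean array, True where target pixels are valid
    row, col        : upper-left corner of the insertion region
    """
    scene = background.copy()
    h, w = target_radiance.shape
    region = scene[row:row + h, col:col + w]
    # Replace background radiance wherever the mask marks target pixels
    # (writes through the view into the copied scene).
    region[target_mask] = target_radiance[target_mask]
    return scene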
A large variety of image quality metrics has been proposed within the last decades. The majority of these metrics have been investigated only for single image degradations such as noise, blur, and compression on limited sets of domain-specific images. For assessing imager performance, however, a task-specific evaluation of captured images with user-defined content seems, in general, more appropriate than using such metrics. This paper presents an approach to image quality assessment of camera data by comparing the classification rates of models individually trained to solve simple classification tasks on images containing single geometric primitives. Examples of the considered tasks are triangle orientation discrimination or determining the number of line pairs in bar targets. In order to make the models more robust against image degradations typically occurring in real cameras, data augmentation is applied to pristine imagery of geometric primitives in the training phase. The pristine imagery is impaired by a variety of simulated image degradations, e.g. Gaussian noise, salt-and-pepper noise for defective pixels, Gaussian and motion blur, and perspective image distortion. The trained models are then applied to real camera images, and classification rates are calculated for geometric primitives of different sizes, contrasts, and center positions. For task-related performance ranking, these classification rates can be compared for multiple cameras or camera settings. An advantage of this approach is that the amount of training data is practically inexhaustible, since it consists of artificial imagery with applied image degradations; overfitting can therefore be counteracted simply by increasing the number of considered realizations of the image degradations and hence the variability of the training data.
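A degradation pipeline of the kind described above could look roughly like the following NumPy/SciPy sketch; the noise level, defect fraction, and blur width are illustrative assumptions, not the values used in the paper:

import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)

def degrade(img, noise_sigma=0.02, sp_fraction=0.001, blur_sigma=1.0):
    """Apply simple degradations to a pristine primitive image scaled to [0, 1]."""
    out = img.astype(float)
    # Gaussian blur standing in for optical / motion blur.
    out = gaussian_filter(out, sigma=blur_sigma)
    # Additive Gaussian sensor noise.
    out = out + rng.normal(0.0, noise_sigma, out.shape)
    # Salt-and-pepper noise modelling defective pixels.
    defects = rng.random(out.shape) < sp_fraction
    out[defects] = rng.choice([0.0, 1.0], size=defects.sum())
    return np.clip(out, 0.0, 1.0)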
Laser speckle patterns typically occur when a laser beam with a narrow spectral linewidth is reflected by small-scale rough surfaces. These intensity patterns are of great interest for active imaging techniques such as gated viewing, optical coherence tomography, or any other measurement technique involving laser illumination. In addition to turbulence effects, surface roughness elevation plays an important role in this process. This paper presents the 2D simulation of isotropic small-scale rough surfaces with the corresponding objective speckle patterns, caused only by the reflection of laser light by those surfaces. In addition, laser speckles generated from sea surfaces, whose structures are anisotropic due to the effect of wind, are also shown. The numerical procedure for the simulation of the (material/sea) surface roughness is based on the Fast Fourier Transform (FFT). Our method can simulate surfaces with a given power spectral density or auto-covariance function (ACF); the most common are the Gaussian and exponential ACFs. Here, the root-mean-square (rms) surface height and the correlation length are the main roughness descriptors. A surface realization using a fractal power law for the spectral density is also shown. For the simulation of the sea surface roughness, the main input parameters for the wave power spectrum are wind speed, wind direction, and fetch. The simulation of the speckle patterns comprises the free-space propagation of a Gaussian-shaped laser beam in the forward direction, the subsequent reflection at the rough surface, which introduces fluctuations in the wave phase, and the backward propagation of the reflected laser beam. The method is similar to that used for laser beam propagation in a turbulent atmosphere with 2D fields of phase fluctuations (phase screens), except that here only a single 2D phase screen, which defines the reflective medium, is considered.
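A minimal NumPy sketch of the FFT-based surface synthesis for an isotropic Gaussian ACF is given below; the grid size, sampling interval, rms height, and correlation length are illustrative assumptions, not the parameters used in the paper:

import numpy as np

def rough_surface(n=512, dx=1e-6, h_rms=1e-6, corr_len=20e-6, seed=0):
    """Generate an isotropic rough surface by spectral filtering of white noise.

    Target auto-covariance (Gaussian): C(r) = h_rms**2 * exp(-r**2 / corr_len**2).
    """
    rng = np.random.default_rng(seed)
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)
    acf = h_rms**2 * np.exp(-(X**2 + Y**2) / corr_len**2)
    # Power spectral density = Fourier transform of the ACF (Wiener-Khinchin).
    psd = np.abs(np.fft.fft2(np.fft.ifftshift(acf)))
    # Filter white Gaussian noise with sqrt(PSD) and transform back to space.
    noise = rng.normal(size=(n, n))
    surface = np.real(np.fft.ifft2(np.sqrt(psd) * np.fft.fft2(noise)))
    # Rescale to the requested rms surface height.
    surface *= h_rms / surface.std()
    return surface

The same spectral-filtering approach carries over to an exponential ACF, a fractal power-law spectrum, or a wind-driven sea-wave spectrum by substituting the corresponding PSD.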
Neural network based classifiers have been shown to suffer from image perturbations in the form of 2-dimensional transformations. These transformations lack physical constraints, making them less of a practical concern and more of a theoretical interest. This paper seeks to produce 3-dimensional materials that mimic these 2-dimensional image transformations by using artificial neural networks to regress material parameters. The neural networks are trained on simulation data from full-wave simulations and physics-based ray tracing simulations. Two neural network models are developed to regress the material parameters of a common transformation optics solution and of a Gaussian blur, respectively. The model trained for the transformation optics solution was able to find a unique material solution whose simulated waveform generally matches an analytical solution. The model trained for the Gaussian blur was unable to find an adequate material solution for the image transformation, possibly due to the constraints placed on the regression by the ray tracing simulation. Finally, a framework is proposed to combine the ray tracing and full-wave simulations to produce more accurate data, enabling a better regression of material parameters for image transformations.
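As general context only, the regression step could resemble the following PyTorch sketch; the network architecture, the waveform length, and the number of material parameters are assumptions and do not reflect the models used in the paper:

import torch
import torch.nn as nn

# Hypothetical dimensions: each training sample is a simulated waveform sampled
# at 256 points, and the target is a small set of material parameters.
N_SAMPLES, N_PARAMS = 256, 4

model = nn.Sequential(
    nn.Linear(N_SAMPLES, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_PARAMS),          # regressed material parameters
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(waveforms, params):
    """One gradient step on a batch of (simulated waveform, material parameter) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(waveforms), params)
    loss.backward()
    optimizer.step()
    return loss.item()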
Contrast signal-to-noise ratio (CSNR) is an important metric for a thermal camera. To name two of its uses, it guides a system architect in designing the camera for a desired performance, and it predicts the performance of the camera for a given scenario. Many authors have developed and used formulas for CSNR; Holst presented them systematically in his textbook. The formulas for CSNR do not always agree with one another because they are derived under different assumptions and approximations. This paper reviews some of them and presents improved formulas for both the point source and the extended source. They are compared to others in the literature, delineating the improvement. The formulas are applied to one scenario as an example.
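For context, a commonly used baseline definition of CSNR (a generic textbook form, not one of the improved formulas derived in this paper) is

\[ \mathrm{CSNR} \;=\; \frac{\Delta S}{\sigma_{n}} \;\approx\; \frac{\Delta T}{\mathrm{NETD}}, \]

where \Delta S is the difference between the target and background signals at the detector, \sigma_{n} is the temporal noise, and the second form applies to an extended source with apparent temperature difference \Delta T imaged by a camera with noise-equivalent temperature difference NETD.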
Hyperspectral cameras capture images where every pixel contains spectral information of the corresponding small area of the depicted scene. Spatial misregistration, i.e. differences in spatial sampling between different spectral channels, is one of the key quality parameters of these cameras, because it may have a large impact on the accuracy of the captured spectra. Spatial misregistration unifies various factors, such as differences in the position of the optical point spread function (PSF) in different spectral channels, differences in PSF size, and differences in PSF shape. Ideally, there should be no difference in spatial sampling across the spectral channels, but in any real camera all these factors are present to some degree. This work shows the magnitude of the spectral errors caused by these spatial misregistration factors, of different strengths and in various combinations, when acquiring hyperspectral images of real scenes. The spectral errors are calculated in the Virtual Camera software, where high-resolution airborne images of real-world scenes and several PSFs of different hyperspectral cameras are used as the input; the misregistration factors themselves are simulated. In addition, two different methods for quantifying spatial misregistration in the lab are tested and compared, using the correlation with the errors in the real-world scenes as the criterion. The results are used to suggest the camera characterization approach that best predicts spatial misregistration errors and allows reliable comparison of different hyperspectral cameras.
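The simulation idea can be sketched as follows (a minimal illustration with hypothetical function and variable names, not the Virtual Camera software interface):

import numpy as np
from scipy.ndimage import convolve, shift

def band_spectrum(scene_bands, psf, band_shifts, pixel):
    """Simulate one camera pixel's spectrum from a high-resolution scene cube.

    scene_bands : list of 2-D high-resolution radiance images, one per band
    psf         : 2-D reference point spread function
    band_shifts : per-band (dy, dx) PSF displacements in pixels (position error)
    pixel       : (row, col) of the sampled camera pixel in high-res coordinates
    """
    r, c = pixel
    spectrum = []
    for band, (dy, dx) in zip(scene_bands, band_shifts):
        shifted_psf = shift(psf, (dy, dx), order=1)        # mimic PSF position error
        blurred = convolve(band, shifted_psf / shifted_psf.sum())
        spectrum.append(blurred[r, c])
    return np.array(spectrum)

# The spectral error is then the difference between the spectrum obtained with
# band-dependent shifts and the reference spectrum computed with zero shifts.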
Detectors used in imaging systems always generate optical reflections, as the light is never completely absorbed. Estimating or measuring these detector reflections makes it possible to better manage the induced parasitic photonic signals (ghosts and scattering) in imaging devices. We describe different methods to assess these detector reflections. Among them, a powerful test based on the etalon effect in the focal plane array is detailed; it measures the reflection at different wavelengths, even near the cut-off of the detector sensitivity. The physical effects in PV junctions that can explain the observed optical reflections are discussed.
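As background on the etalon-based test (stated here as an assumption about the standard Fabry-Perot relation involved, not the authors' exact analysis): for a detector layer of thickness d and refractive index n at near-normal incidence, the spectral period of the etalon fringes is approximately

\[ \Delta\lambda \;\approx\; \frac{\lambda^{2}}{2\,n\,d}, \]

and the modulation depth of the fringes grows with the reflectivity of the layer interfaces, which is what allows the residual detector reflection to be inferred as a function of wavelength.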
With the advancement of satellite, aircraft, and drone imaging, powerful infrared sensors have been developed, so that the heat radiated by the human body can be seen from satellites. Sensing the infrared radiation of living creatures with recently developed, highly sensitive sensors has made it easy to track the movement of human groups. By installing these sensors in imaging cameras and tuning them to the 8-14 micrometer band, it is even possible to obtain the exact position of human targets from satellites.
Based on density functional theory (DFT) calculations performed with the SOD (Site Occupancy Disorder) program and the Wien2k code, we determined a dopant percentage that is well suited for absorbing infrared radiation. Using this information, we synthesized Ni-Al co-doped ZnO (NiAl:ZnO) from a metal nitrate precursor and nitric acid by a modified sol-gel combustion method.