This PDF file contains the front matter associated with SPIE Proceedings Volume 12533, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
There is growing interest in applying machine learning algorithms to target acquisition tasks. Current algorithm studies suggest pixels on target (POT) of 50-70, 150, 250, and 350 are adequate for detection, classification, recognition, and identification of tank-sized targets, respectively. Using simple analyses, we compare POT to Night Vision Integrated Performance Model (NV-IPM) range probability predictions for a typical LWIR sensor.
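The POT-to-range conversion behind such a comparison can be sketched with simple geometry. The tank dimensions, IFOV, and the specific POT thresholds below are illustrative assumptions, not values taken from the study:

```python
import math

def pixels_on_target(target_area_m2, range_m, ifov_rad):
    """Approximate POT: target presented area divided by the ground-projected
    area of one pixel (range * IFOV on a side)."""
    gsd = range_m * ifov_rad  # ground sample distance at the target, m/pixel
    return target_area_m2 / gsd ** 2

def range_for_pot(target_area_m2, pot, ifov_rad):
    """Invert the relation: range at which POT falls to a given threshold."""
    return math.sqrt(target_area_m2 / pot) / ifov_rad

# Tank-sized target (~2.3 m x 2.3 m presented area) seen by a 50 urad IFOV sensor
area, ifov = 2.3 * 2.3, 50e-6
for task, pot in [("detect", 60), ("classify", 150), ("recognize", 250), ("identify", 350)]:
    print(f"{task:9s}: {range_for_pot(area, pot, ifov):5.0f} m")
```

Ranges produced this way can then be laid against the NV-IPM probability-versus-range curves for the same sensor.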
The US Naval Research Laboratory (NRL) has recently developed an efficient modeling and simulation (M&S) capability to support naval surface warfare applications against a variety of EOIR sensing threats in the context of a tactical decision aid architecture. Starting with ship/target signature, background sea clutter, and atmospheric transmission inputs obtained from high-fidelity models such as ShipIR/NTCS and MODTRAN, combined with an Army CCDC RTID sensor performance metric, NRL used a novel methodology based on machine learning (ML) neural networks (NNs) to reduce large amounts of target/environment/sensor parameter data into an efficient network lookup table to predict target detectability. The model is currently valid for a few types of naval targets, in open ocean backgrounds as well as limited littoral scenarios, for the VNIR (0.4-1 μm) and IR (3-5 and 8-12 μm) spectral regions. By using ML and NNs, the computational runtimes are short and efficient. This paper will discuss the methodology and show preliminary results produced in integrated tactical decision aid software.
Target detection is a key component of any constructive simulation: it is the first step in an engagement, and detection predictions can have a significant effect on simulation results. This paper explores the utility of advances in machine learning to inform sensor design by analyzing the outcomes of the time and detection models in constructive simulations. A simple scenario will be designed and executed to create datasets for analysis. Traditional metrics such as probability of detection and time to detect will be evaluated by the algorithms to determine the optimal sensor design(s) to achieve the best possible performance across the scenario. A summary of the results and recommendations for machine learning algorithm design for this type of data analysis will then be presented.
Object detection, a critical task in computer vision, has been revolutionized by deep learning technologies, especially convolutional neural networks (CNNs). These techniques are increasingly deployed in infrared imaging systems for long-range target detection, localization, and identification. Their performance is highly dependent on the training procedure, network architecture, and computing resources. In contrast, human-in-the-loop task performance can be reliably predicted using well-established models. Here we model the performance of a CNN developed for MWIR and LWIR sensors and compare it against human perception models. We focus on tower detection relevant to vision-based geolocation tasks, which presents novel high-aspect-ratio, unresolved, and low-clutter scenarios.
Specifications for microbolometer defective detector pixel outages, cluster sizes, and row/column outages are common in many electro-optical imaging programs. These specifications for bad pixels and clusters often do not take into account the user's ability to perceive the lack of information from areas of a focal plane with outages that are replaced using substitution algorithms. This is because defective pixels are typically specified as a sensor parameter, without taking into account a camera's system-level descriptors: modulation transfer function (MTF), outage substitution strategy, post-processing MTF, display performance, and the observer's psychophysical performance. These parameters combine to determine the total system MTF, which can be used to determine the minimum resolution at which a replaced pixel or cluster can be observed. This study analyzes different defective pixel specifications and their visibility from the system-level descriptors and proposes specifications that are better aligned with camera performance.
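The cascade described above multiplies in the frequency domain. In the sketch below, the Gaussian component shapes and cutoff frequencies are placeholders rather than measured camera data; in this picture, the frequency at which the product falls below a visibility threshold would bound the smallest observable replaced cluster:

```python
import math

def gaussian_mtf(f, f_c):
    """Generic Gaussian roll-off, standing in for any one component MTF."""
    return math.exp(-(f / f_c) ** 2)

def system_mtf(f, cutoffs):
    """Total system MTF as the product of the component MTFs in the chain
    (optics, detector, substitution, post-processing, display, observer)."""
    mtf = 1.0
    for f_c in cutoffs:
        mtf *= gaussian_mtf(f, f_c)
    return mtf

# Hypothetical cutoff frequencies (cycles/mrad) for four chain components
chain = [12.0, 18.0, 25.0, 30.0]
for f in (2.0, 6.0, 10.0):
    print(f"f = {f:4.1f} cy/mrad -> system MTF = {system_mtf(f, chain):.3f}")
```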
The Night Vision Integrated Performance Model (NVIPM) is a versatile tool for conducting trade studies of EOIR imaging system performance. Although its primary application is the calculation of human performance metrics, the linear systems model and standard imaging chain provide a convenient method of modeling and validating laboratory measurements. In this correspondence, we discuss how to model signal-to-noise measurements for both emissive and reflective band cameras within NVIPM. We provide convenient components that can be used to model the calibrated detectors necessary in most laboratory setups. Additionally, we present an update to the measured system component to output in digital numbers (DN), matching the default units available to the experimentalist. Using this measured system component and the methods developed for modeling the experiment with reference detectors, we show how NVIPM is an ideal tool for demonstrating agreement between model and laboratory measurement.
Daytime low light conditions such as overcast, dawn, and dusk pose a challenge for object discrimination in the reflective bands, where the majority of illumination comes from reflected solar light. In reduced illumination conditions, sensor signal-to-noise ratio can suffer, inhibiting range performance for recognizing and identifying objects of interest. This performance reduction is more apparent in the longer wavelengths where there is less solar light. Range performance models show a strong dependence on cloud type, thickness, and time of day across all wavebands. Through an experimental and theoretical analysis of a passive sensitivity and resolution matched testbed, we compare Vis (0.4-0.7μm), NIR (0.7-1μm), SWIR (1-1.7μm), and eSWIR (2-2.5μm) to assess the limiting cases in which reduced illumination inhibits range performance.
Single-photon avalanche diodes (SPADs) have shown great promise for use in lidar and low-light applications. Although staring arrays were initially developed for medical applications, recent lidar sensor demands have fueled the development of large-count staring sensors with quantum efficiencies extending into the NIR/SWIR and with exotic readout circuits. The same technology also enables low-light systems with sensitivity below conventional CMOS. As the name implies, SPAD detectors are sensitive to single photons, behave as stochastic devices, and require special treatment for signal interpretation. In this paper, we describe a signal and noise model for both active and passive SPAD-based imaging systems that includes the generation of readout events based on the SPAD detector stochastic model. The model presented here allows the evaluation of SPAD-based systems under specific illumination conditions and enables the evaluation of system sensitivity to SPAD and sensor parameters.
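The stochastic character of a passive SPAD pixel can be illustrated with Poisson event generation. The photon rate, PDE, dark rate, and frame time below are invented for illustration, and dead time and afterpulsing are deliberately omitted:

```python
import math
import random

def spad_counts(photon_rate, pde, dark_rate, t_int, n_frames, seed=0):
    """Per-frame readout events for a single passive SPAD pixel: detections
    are Poisson with mean pde*photon_rate*t_int plus dark counts with mean
    dark_rate*t_int. Dead time and afterpulsing are omitted from this sketch."""
    rng = random.Random(seed)
    lam = pde * photon_rate * t_int + dark_rate * t_int

    def poisson(lam):
        # Knuth's method; adequate for the small per-frame means used here
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    return [poisson(lam) for _ in range(n_frames)]

# Invented operating point: 1e6 photons/s, 25% PDE, 1 kcps dark rate, 100 us frames
counts = spad_counts(1e6, 0.25, 1e3, 1e-4, 2000)
mean = sum(counts) / len(counts)
print(f"mean counts/frame = {mean:.2f} (expectation 25.10)")
```

Because the counts are Poisson, the frame-to-frame standard deviation approaches the square root of the mean, which is one aspect of the special treatment the abstract refers to.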
In a GPS-denied environment, distinct structures such as cell towers and transmission towers are useful as aids to vision-based navigation. Cell towers are surveyed such that their locations are well known, and imagery of these towers can be compared to imagery databases to assist in navigation. In this research, imagery of cell towers was taken in the VIS, SWIR, and LWIR bands with both clear sky and portions of the ground in the background. The contrast of the cell towers in the two reflective bands was determined against the sky and the ground in terms of equivalent reflectivity. The contrast of the cell towers was also determined in the LWIR in terms of equivalent blackbody temperature. The analysis of contrast is provided, the results are discussed, and recommendations on band use are given for use in 3D map image comparisons.
Numerous improvements have been made to various sub-models in the NATO-standard and USN-accredited naval ship infrared signature model (ShipIR) since its last SPIE-published validation using ShipIR (v3.2). These include upgrades to MODTRAN5 and MODTRAN6 for the sun, sky, and atmosphere models, and various fixes and improvements to the sea model: corrections to the Fresnel sea reflectance formula (v3.3a), empirical second-order hiding (v3.4), and a recent fix to the sea surface roughness slope distribution (v4.2). This paper will revisit the previous experimental results used to validate ShipIR (v3.2), first comparing them against the steady-state version of the thermal solver in ShipIR (v4.2) and then complementing these with the transient thermal solver introduced in ShipIR (v4.0).
Numerous studies have been conducted over the past decade to adequately sample the large amount of measured marine climate data for input to the design of a new naval surface combatant. Some of these involve the direct use of actual measured data (Vaitekunas and Kim, 2013), while others have reduced the complexity of the problem by focusing on the highly correlated data (e.g., air-sea temperature difference) while assuming the low-to-medium correlated data are simply uncorrelated (Cho, 2017). This paper will compare the two methods for a large data set off the Korean peninsula, spanning 5 years and 17 buoy locations. A follow-on analysis will compare the sensitivity of the IR signature and IR susceptibility of a candidate ship (an unclassified ShipIR model of a DDG class) to the variation in size, number of locations, and time span of the marine data being sampled.
Image processing (including histogram equalization, local area processing, and edge sharpening) is a key component of practical electro-optical imaging systems. Despite this, the range performance impact of such processing remains difficult to quantify, short of running a full human perception experiment. The primary difficulty is that current analytic range performance models—best exemplified by the Targeting Task Performance (TTP) model—can only account for linear and shift-invariant (LSI) image effects. We present our efforts towards developing a quantitative, image-based range performance metric that does not require LSI assumptions. Our proposed metric is based on a Triangle Orientation Discrimination (TOD) target set and observer task, with automatic scoring accomplished through a simple template correlator. The approach is compatible with both synthetic and real imagery.
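A toy version of the template-correlator scoring might look like the following. The triangle geometry, template size, and zero-mean correlation rule are our own illustrative choices, not the authors' exact TOD implementation:

```python
import numpy as np

def triangle(orient, size=15):
    """Binary triangle template pointing up, down, left, or right."""
    t = np.zeros((size, size))
    for r in range(size):
        half = r // 2
        t[r, size // 2 - half : size // 2 + half + 1] = 1.0  # apex at top
    if orient == "up":
        return t
    if orient == "down":
        return t[::-1]
    if orient == "left":
        return t.T
    if orient == "right":
        return t.T[:, ::-1]

def score_orientation(img):
    """Zero-mean correlation against the four templates; return best match."""
    best, best_s = None, -np.inf
    for o in ("up", "down", "left", "right"):
        tpl = triangle(o) - triangle(o).mean()
        s = float((img * tpl).sum())
        if s > best_s:
            best, best_s = o, s
    return best

# A noisy "down" triangle is still scored correctly
rng = np.random.default_rng(1)
img = triangle("down") + rng.normal(0, 0.3, (15, 15))
print(score_orientation(img))  # -> down
```

Scoring many such chips at varying contrast and size yields the probability-correct curves the metric is built on.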
A priori estimation of the expected achievable quality of an uncrewed aerial vehicle (UAV) based imaging system can help validate the choice of components for the system's implementation. For uncrewed airborne imaging systems, coupling the sensor to the UAV platform is relatively simple. Quantifying the expected quality of collected data can, on the other hand, be less clear and often requires trial and error. The central problem for these platforms is blur. The blur produced by the various rotational modalities of the aircraft can range from overwhelming to trivial but in most cases can be mitigated. This leaves the combination of the aircraft's linear motion, its altitude, and the imaging device's instantaneous field of view (IFOV) and integration time as the determining factors for the blur produced in the image. In addition, there are significant differences in attainable speeds between multi-rotor and fixed-wing UAVs. In this paper we develop mathematical models for predicting blur based on these factors. We then compare these models with field data obtained from cameras mounted on fixed-wing and multi-rotor UAVs. Conclusions regarding the camera characteristics best suited for both types of UAV, as well as the best image acquisition parameters such as altitude and speed, are discussed.
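The linear-motion contribution can be written directly: pixels of smear equal the distance flown during the integration time divided by the ground sample distance. The speeds, altitude, and IFOV below are arbitrary illustrative numbers, not the paper's flight data:

```python
def motion_blur_pixels(speed_mps, t_int_s, altitude_m, ifov_rad):
    """Blur from linear motion, in pixels: distance travelled during the
    integration time divided by the ground sample distance (nadir view)."""
    gsd = altitude_m * ifov_rad  # ground footprint of one pixel, m
    return speed_mps * t_int_s / gsd

# Assumed numbers: 120 m AGL, 0.5 mrad IFOV, 1 ms integration time
for name, v in [("fixed-wing", 20.0), ("multi-rotor", 5.0)]:
    print(f"{name}: {motion_blur_pixels(v, 1e-3, 120.0, 0.5e-3):.2f} px of smear")
```

The relation makes the trade explicit: halving integration time or doubling altitude (at fixed IFOV) halves the smear in pixels.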
We propose a new unit of irradiance which is geared towards the recent class of low-light silicon CMOS imaging detectors based on high rho, deeply depleted bulk material with significantly increased NIR responsivity; devices intended as potential substitutes for image intensifier tubes in night vision devices. This unit is called silux, a portmanteau of silicon and lux. The authors previously created a unit of irradiance called swux, a portmanteau of SWIR (shortwave infrared) and lux. Swux, which is currently in use in industry, is spectrally weighted by the response of lattice-matched indium gallium arsenide. A silux meter will act like a single giant CMOS imaging sensor (CIS) camera pixel. A standard silux unit and optometers calibrated in silux units will enable engineers and technicians to measure ambient lighting conditions in an unambiguous manner, enabling the comparison of performance for different cameras under different lighting conditions, since the silux unit, combined with a knowledge of the specifications of the optics and the sensor, can be used to directly predict the number of photoelectrons/second/pixel. The silux spectral response function Silux(λ) ranges from 350nm to 1100nm. The authors present the implementation of a silux meter based on a novel silux shaping filter design used in combination with available high-sensitivity Si-diode irradiance sensors to yield the silux spectral response function, thus enabling easy deployment of a standard silux measurement setup in every lab. First measurements from two implemented silux meters will be presented.
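Any such spectrally weighted unit reduces to an integral of spectral irradiance against a response function. The sketch below uses a crude triangular placeholder for Silux(λ), since the actual published response curve is not reproduced here:

```python
def weighted_irradiance(wavelengths_nm, spectral_irradiance, response):
    """Trapezoidal integral of E(lambda) * S(lambda) d(lambda): the general
    form behind lux-, swux-, and silux-style units."""
    total = 0.0
    for i in range(len(wavelengths_nm) - 1):
        dl = wavelengths_nm[i + 1] - wavelengths_nm[i]
        f0 = spectral_irradiance[i] * response[i]
        f1 = spectral_irradiance[i + 1] * response[i + 1]
        total += 0.5 * (f0 + f1) * dl
    return total

# Spectrally flat source against a triangular placeholder response (350-1100 nm)
wl = [350.0, 700.0, 1100.0]
print(weighted_irradiance(wl, [1.0, 1.0, 1.0], [0.0, 1.0, 0.0]))  # -> 375.0
```

With a tabulated Silux(λ) in place of the placeholder, the same integral is what a silux meter approximates in hardware.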
Electro-optical (EO) system performance prediction requires physically validated system models, atmosphere/clutter models, and target signature models. These models generally include radiometric equations and measured data. The core of such a performance prediction simulation is the EO model, whose subsystems are the optical system and detector system models. One approach to EO modeling is the parametric approach, in which the radiometric equations are solved using optics and detector model parameters. The other approach is to use measured system data to simulate the system; the measurements include system temporal/spatial noise, intensity response, and spatial response. In this paper, these two methods are compared via measurements collected with a generic electro-optical camera. The results show that the measured-data methodology is reliable for system simulation, whereas the parametric model approach can be reliable provided that the parameters are correctly defined within their physical boundaries.
Electro-optics system engineers often require the 2D optical point spread function (PSF) to predict a sensor's system-level performance. The 2D PSF is used to compute metrics like detection range and signal-to-noise ratio (SNR) at the target, or to develop a matched filter algorithm to improve detection performance. Lens designers generate the 2D optical PSF using optical design tools like CODE V or Zemax. If the systems engineer needs the PSF data at different field angles, the lens designer has to generate those results again, resulting in a cumbersome interaction between the two disciplines. The problem is exacerbated when going from narrow field of view (NFOV) optics to wide field of view (WFOV) optics, where there can be significant differences in the PSF between the on-axis and off-axis cases. This is particularly relevant for WFOV threat warning and situational awareness infrared sensors, where the on-axis Airy-disk model of the PSF is no longer valid at the large off-axis angles experienced in those sensors. This paper will present the Zernike Math Model (ZMM) approach to generate the 2D optics PSF outside the lens design platform and move the analysis to a more common scientific/engineering tool that systems engineers use. Once an optical design is completed by the lens designer, its performance can be represented by a unique set of Zernike polynomial coefficients. The ZMM approach only requires the lens designer to provide this set of coefficients once; the system engineer can then use these data to model the optical PSF performance on- and off-axis, using standard engineering analysis tools such as MATLAB or Mathcad. Once the optical PSF is known, it can be transformed into modulation transfer function (MTF) form and used to model sensor performance.
Our approach simplifies the interaction between systems engineering and optical engineering disciplines in helping translate optical performance into system level sensor performance modeling.
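The round trip the paper describes, from Zernike coefficients to pupil to PSF to MTF, can be sketched in a general-purpose tool as follows (Python/NumPy here in place of MATLAB or Mathcad; the two-term Zernike basis and the coefficient value are illustrative only):

```python
import numpy as np

def psf_from_zernike(coeffs, n=128, pad=4):
    """PSF of a circular pupil whose wavefront error (in waves) is built
    from a couple of low-order Zernike terms."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0
    # A small hand-rolled Zernike basis (piston and tilt omitted)
    basis = {
        "defocus": np.sqrt(3) * (2 * rho**2 - 1),
        "astig_0": np.sqrt(6) * rho**2 * np.cos(2 * theta),
    }
    w = sum(coeffs.get(k, 0.0) * z for k, z in basis.items())
    pupil = mask * np.exp(2j * np.pi * w)
    big = np.zeros((pad * n, pad * n), complex)  # zero-pad for finer sampling
    big[:n, :n] = pupil
    psf = np.abs(np.fft.fftshift(np.fft.fft2(big))) ** 2
    return psf / psf.sum()

def mtf_from_psf(psf):
    """MTF is the normalized magnitude of the Fourier transform of the PSF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    return np.abs(otf) / np.abs(otf[0, 0])

psf = psf_from_zernike({"defocus": 0.25})
mtf = mtf_from_psf(psf)
print(mtf[0, 1])  # contrast at the first sampled spatial frequency
```

Swapping in the coefficient set the lens designer delivers for each field angle would reproduce the on- and off-axis PSFs without reopening the lens design tool.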
Small Unmanned Aerial Systems (sUAS) provide a versatile platform for covering large areas quickly. By adding sensors to these drones, imagery of large areas can be taken for a variety of applications. Traditionally, a fixed staring system or a gimballed sensor is used to take this imagery. Both options require a compromise between field of view (FOV), resolution, scanning speed, and flight path to properly perform the task at hand. With more than one type of sensing, additional information can be collected about imaged environments. If more than one sensor is integrated onto the drone, a wide FOV can still be covered without a scanning gimbal and with higher resolution than a traditional wide-FOV system. Presented is a multi-camera, multi-wavelength design approach based on a constraining ground sample distance (GSD) for a wide area coverage (WAC) system. A figure of merit (FoM) is created to quantify and compare the performance of the WAC systems in the visible (0.4-0.7 μm), short wave infrared (1.0-1.7 μm), and longwave infrared (8-14 μm) in both good and bad visibility conditions. The performance of three optimized and fabricated WAC systems is compared and tested. The testing results of the flown fabricated systems show that the design approach described delivers the expected results.
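The GSD constraint that drives such a design can be sketched with pinhole geometry. The pixel pitch, array width, focal length, altitude, and overlap fraction below are assumed values, not those of the fabricated systems:

```python
import math

def gsd(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance for a nadir-pointing camera."""
    return altitude_m * pixel_pitch_m / focal_length_m

def cameras_for_coverage(total_fov_deg, focal_length_m, sensor_width_m, overlap=0.1):
    """Number of fixed cameras needed to tile a required total field of view,
    given each camera's FOV and a fractional overlap between neighbors."""
    cam_fov = 2 * math.degrees(math.atan(sensor_width_m / (2 * focal_length_m)))
    return math.ceil(total_fov_deg / (cam_fov * (1 - overlap)))

# Assumed LWIR example: 12 um pitch, 640-pixel-wide array, 50 mm lens, 120 m altitude
print(f"GSD: {gsd(120, 12e-6, 0.05) * 100:.1f} cm")
print("cameras for 90 deg coverage:", cameras_for_coverage(90, 0.05, 640 * 12e-6))
```

Fixing the GSD first and letting the camera count float is one way to trade gimbal complexity for aperture count, as the abstract suggests.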
Sensitivity of a camera is most often measured by recording video segments while viewing a constant (in space and time) scene. This video, commonly referred to as a noise cube, provides information about how much the signals vary from the average. In this work, we describe the systematic decomposition of noise cubes into components. First, the average of a noise cube (when combined with other cube measurements) is used to determine the camera's Signal Transfer Function (SiTF). Removing the average results in a cube that exhibits variations in both spatial and temporal directions. These variations also occur at different scales (spatial/temporal frequencies); therefore, we propose applying a 3-dimensional filter to separate fast and slow variation. Slowly varying temporal variation can indicate an artifact in the measurement, the camera signal, or the camera's response to measurement. Slowly varying spatial variation can be considered non-uniformity, and conventional metrics applied. Fast-varying spatial/temporal noise is combined and evaluated through the conventional 3D noise model (providing seven independent noise measurements). In support of the reproducible research effort, the functions associated with this work can be found on the Mathworks file exchange.
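The seven-component split can be sketched as a nested-means (ANOVA-style) calculation. This is a simplified stand-in for the full filtered procedure described above, applied here to synthetic white noise:

```python
import numpy as np

def three_d_noise(cube):
    """Seven-component 3D noise decomposition of a (time, row, col) cube.
    Each component is reported as a standard deviation in the cube's units."""
    S = cube.mean()
    t = cube.mean(axis=(1, 2)) - S                      # temporal
    v = cube.mean(axis=(0, 2)) - S                      # row fixed pattern
    h = cube.mean(axis=(0, 1)) - S                      # column fixed pattern
    tv = cube.mean(axis=2) - S - t[:, None] - v[None, :]
    th = cube.mean(axis=1) - S - t[:, None] - h[None, :]
    vh = cube.mean(axis=0) - S - v[:, None] - h[None, :]
    tvh = (cube - S - t[:, None, None] - v[None, :, None] - h[None, None, :]
           - tv[:, :, None] - th[:, None, :] - vh[None, :, :])
    comps = {"t": t, "v": v, "h": h, "tv": tv, "th": th, "vh": vh, "tvh": tvh}
    return {k: float(np.std(x)) for k, x in comps.items()}

# Pure white spatio-temporal noise (sigma = 2) lands in the tvh term
rng = np.random.default_rng(0)
cube = 100 + rng.normal(0, 2, (200, 64, 64))
n = three_d_noise(cube)
print({k: round(s, 2) for k, s in n.items()})
```

A cube with real fixed-pattern structure would instead show energy in the v, h, or vh terms.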
The new generation of sub-10 µm pixel pitch bolometers is currently arriving on the market for uncooled LWIR detectors. Pixel pitches are now starting to become smaller than the wavelength of the light being sensed. The effect on the requirements for new lenses is still an open question, which in turn depends on the efficiency achieved by this new generation of micro-bolometers. This study looks into the possible improvements expected on the lens side: whether they are needed to compensate for the lower performance of the detectors, and at what cost.
Model-based performance assessment is an important tool in the design and performance comparison of electro-optical and infrared imagers. It alleviates the need for expensive field measurements and can be used for parameter trade studies. TRM4 serves this purpose and features a validated approach to considering aliasing in the range performance assessment of focal plane arrays. TRM4.v3 was released in October 2021 and came with a set of new model features. One of the new capabilities is the performance assessment of imagers used for aerial imaging by computing the National Imagery Interpretability Rating Scale (NIIRS) as the performance measure. The NIIRS values are calculated in TRM4 using the latest version of the General Image Quality Equation. In this paper, the result of a recent validation effort for the NIIRS calculation is reported. Current TRM4 development focuses on performance modeling of subpixel target scenarios and color cameras. For subpixel targets, the calculation of the signal-to-noise ratio (SNR) for the additional target signal is presented, considering different target locations with respect to the detector matrix. Best-case and worst-case detection ranges are derived from specified threshold SNRs of detection algorithms. First results are shown using the TRM4.v3 sky background scenario feature to model ground-to-air imaging of flying targets. A first step toward modeling color cameras is the extension of the AMOP (Average Modulation at Optimum Phase) calculation, which is used in TRM4 to describe the spatial signal transfer characteristics along the imaging chain. It is shown how sampling by sensors with a Bayer filter and demosaicing can be included in the AMOP calculation procedure.
Many modern electro-optical systems incorporate multiple electro-optical sensors, each having unique wavebands, alignment, and distortion. Traditional laboratory testing requires multiple measurement setups for metrics like inter-channel sensor alignment, near/far focus performance, color accuracy, etc. In this study, a calibrated scene is developed for objective measurements of multiple electro-optical cameras from a single mounting position. This scene uses multiple targets (of varying size and shape), multiple flat fields (blackbodies and Spectralon panels), and temporal sources. Some targets work well in both the emissive and reflective bands, allowing relative distortion to be measured accurately. Specific attention was given to testing in the presence of scene-based algorithms such as auto-gain/level/exposure, where bright and dark objects are used to drive dynamic range. This approach allows various measurements to be taken simultaneously and efficiently.
Of all sensor performance parameters, the conversion gain is arguably the most fundamental, as it describes the conversion of photoelectrons at the sensor input into digital numbers at the output. Due in part to the emergence of deep sub-electron read noise image sensors in recent years, the literature has seen a resurgence of papers detailing methods for estimating conversion gain in both the sub-electron and multi-electron read noise regimes. Each of the proposed methods works from identical noise models but nevertheless yields a distinct procedure for estimating conversion gain. Here, an overview of the proposed methods is provided along with an investigation into their assumptions, uncertainty, and measurement requirements. A sensitivity analysis is conducted using synthetic data for a variety of different sensor configurations. Specifically, the dependence of the conversion-gain estimate uncertainty on the magnitude of read noise and quanta exposure is explored. Guidance on the trade-offs between the different methods is provided so that experimenters understand which method is optimal for their application. In support of the reproducible research effort, the MATLAB functions associated with this work can be found on the Mathworks file exchange.
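As one concrete member of the family of methods discussed, the classic mean-variance (photon transfer) estimator recovers the gain from the slope of variance against mean. All numbers below are synthetic, and the sketch is in Python rather than the authors' MATLAB:

```python
import random

def mean_variance_gain(means_e, read_noise_e, K_dn_per_e, n_pix=20000, seed=0):
    """Synthesize flat-field statistics and recover the conversion gain K
    (DN per electron) from the slope of variance vs. mean: since
    var = K*mean + K^2*read_noise^2, the slope of the line is K."""
    rng = random.Random(seed)
    pts = []
    for mu in means_e:
        # Gaussian approximation to shot noise is fine at these signal levels
        samples = [K_dn_per_e * (mu + rng.gauss(0, mu ** 0.5) + rng.gauss(0, read_noise_e))
                   for _ in range(n_pix)]
        m = sum(samples) / n_pix
        var = sum((s - m) ** 2 for s in samples) / (n_pix - 1)
        pts.append((m, var))
    # Least-squares slope of variance vs. mean
    mx = sum(p[0] for p in pts) / len(pts)
    my = sum(p[1] for p in pts) / len(pts)
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

K_hat = mean_variance_gain([200, 500, 1000, 2000, 4000], read_noise_e=2.0, K_dn_per_e=0.5)
print(f"estimated K = {K_hat:.3f} DN/e-")
```

In the deep sub-electron read noise regime this simple slope estimator breaks down, which is precisely where the newer methods surveyed in the paper come in.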
The last decades have brought significant improvements in materials, microfabrication, manufacturing processes, microelectronic fabrication, optical design tools, and microprocessing power. These advances have allowed the development of novel types and designs of electro-optical (EO) military systems having, among others, the following added capabilities: wide field of view, extended spectral response, multifunction devices, image fusion, and embedded image processing. Meanwhile, the international standards that regulate the testing and evaluation of EO systems, developed in the 1990s, have not been updated to include those new capabilities that are important on the battlefield. As a result, those standards are often no longer suitable for characterizing current state-of-the-art EO systems and for supporting major military EO system acquisition projects. In this paper, we present an overview of some novel testing capabilities developed over the last decade at DRDC Valcartier Research Centre that aim at comparing, in a controlled environment, the performance and limitations of EO military systems under different representative operational conditions. Those novel testing capabilities do not aim at replacing standard testing procedures, but rather at complementing them. Methodologies developed to test thermal imagers, wide-field-of-view night vision goggles, image intensifier tubes, and lasers are described.
In an ideal world, each camera pixel would exhibit the same behavior and response when stimulated with an incoming signal. In the real world, however, variations between pixels' responses (gain) and dark-current/extraneous-signal levels (offset) require a non-uniformity correction (NUC). The residual pixel-to-pixel variation following a NUC is the fixed pattern noise of the camera. For thermal cameras, the ability to NUC is critical, as the pixels' gain and offset typically change with temperature. Moreover, the offset typically drifts in time, even when the camera is at equilibrium. These additional dependencies on time and temperature make the "fixed" pattern noise not fixed, and make measurement agreement between laboratories much more difficult. In this work, we describe a modification of the standard thermal camera noise measurement procedure and analysis (at some specified equilibrium temperature) that removes the time dependence of the fixed pattern noise measurement. Additionally, we describe a temporal measurement to characterize the time-dependent nature of "fixed" pattern noise. We show that this behavior is stationary and independent of the direction of time relative to when the NUC was defined. The temporal behavior is well described by a combination of power-law and linear time dependence. With this, new metrics can be considered to evaluate how frequently to conduct a NUC depending on operational requirements.
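A power-law-plus-linear description lends itself to a simple fitting recipe. The sketch below fits synthetic drift data with a grid search over the power-law exponent combined with linear least squares for the two amplitudes; the model parameters, time axis, and noise level are invented for illustration and are not the paper's measured values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic residual-FPN drift after a NUC: power-law plus linear term
# (all parameters are assumptions, not measurements)
t = np.linspace(1.0, 120.0, 60)               # minutes since NUC
true_a, true_p, true_b = 0.8, 0.35, 0.004
sigma = true_a * t**true_p + true_b * t + rng.normal(0.0, 0.01, t.size)

# Fit sigma(t) = a*t^p + b*t: grid search over the exponent p, with
# linear least squares for (a, b) at each candidate exponent
best = None
for p in np.linspace(0.1, 0.9, 81):
    X = np.column_stack([t**p, t])
    coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
    rss = float(np.sum((X @ coef - sigma) ** 2))
    if best is None or rss < best[0]:
        best = (rss, p, coef)

rss_fit, p_fit, (a_fit, b_fit) = best
```

The grid-plus-least-squares split keeps the fit linear in the amplitudes, which avoids the convergence issues a fully nonlinear fit can have when the power-law and linear terms are correlated.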
Modern thermal imaging systems are widely used because of their broad military and commercial application range. The performance of the first generations of thermal imagers was limited by resolution and thermal sensitivity. Brightness and contrast adjustments were also critical to image quality. From a military user perspective, the amount of detail and the interpretation of a scene depend, among other factors, on the experience of the user and on the time available to complete those adjustments. Modern imagers now feature embedded digital processing that can automatically adjust the device parameters in order to optimize the image quality. With the combined improvements in microprocessor power and microfabrication processes, digital processing enhanced the thermal imagers' performance until they eventually became limited by their ability to react to different operational scenarios. That brings the need to test the reaction of digital processing in such operational scenarios. Meanwhile, there have been no significant modifications to the testing methodologies and metrics used for the assessment of thermal imagers. In this paper, we present DRDC Valcartier Research Centre's efforts to develop a test bench to measure the efficiency of the digital processing embedded in thermal imagers. The purpose of the testing methodology is to provide reliable, repeatable, and user-independent metrics. Outputs quantitatively highlight the impact of digital processing for various operational situations and allow the performance of devices to be compared.
Scene generators and high-resolution displays carry the potential for creating adaptive scenes for camera characterization. We demonstrate a camera modulation transfer function (MTF) measurement technique using a commercial computer display. Once the proper conditions for MTF measurements are satisfied, other measurements like the instantaneous field of view (IFOV) and the distortion of a visible camera can be characterized using stimuli from the same display. We developed the measurement techniques using a high-fidelity imaging simulation tool that provides control of the scene and camera parameters. The high-fidelity simulation allowed replication of the laboratory setup and fast iteration of parameters which drastically reduced the method development time. Virtual prototyping of measurements in this way allows for sensitivity analysis to be conducted over measurement parameters. In this work we describe the method development for measurements with a commercial computer display.
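As a concrete example of a displayed-stimulus measurement, the MTF can be estimated from an edge profile rendered on the display: differentiate the edge spread function (ESF) to obtain the line spread function, then take the normalized FFT magnitude. The sketch below works on a synthetic Gaussian-blurred edge; the blur width and sampling are assumptions, and real slanted-edge measurements additionally need oversampling and windowing.

```python
import numpy as np

sigma_px = 2.0                         # assumed Gaussian blur of the edge, pixels
x = np.arange(-64, 65)
lsf_true = np.exp(-x**2 / (2 * sigma_px**2))
esf = np.cumsum(lsf_true)              # synthetic edge spread function

def mtf_from_edge(esf):
    """Estimate MTF from a 1-D edge profile: differentiate the ESF to
    get the line spread function, then normalize the FFT magnitude."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

mtf = mtf_from_edge(esf)
freqs = np.fft.rfftfreq(esf.size - 1)  # cycles per pixel
```

For a Gaussian blur the result can be checked against the analytic MTF, exp(-2 pi^2 sigma^2 f^2), which the estimate matches closely at low spatial frequencies.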
Measurements of SWIR, MWIR, and LWIR cameras are compared to spectral modeling. Measurements of response, noise, dark current, modulation transfer function (MTF), point spread function (PSF), and well capacity are compared to theoretical values. Cameras with dynamically changeable spectral cold filters and cold apertures are used across the 1 μm to 12 μm spectral range to investigate the ability of our straightforward models to predict performance. The effects of atmospheric conditions and target ranges are modeled and compared to experiment.
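One building block of such first-principles spectral models is the in-band blackbody radiance, obtained by integrating Planck's law over each camera's spectral band. A minimal sketch, with the band edges and scene temperature chosen for illustration:

```python
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def band_radiance(t_kelvin, lam_lo_um, lam_hi_um, n=4000):
    """In-band blackbody radiance (W m^-2 sr^-1): Planck's spectral
    radiance integrated over the band with a midpoint rule."""
    edges = np.linspace(lam_lo_um, lam_hi_um, n + 1) * 1e-6  # metres
    lam = 0.5 * (edges[:-1] + edges[1:])
    dlam = np.diff(edges)
    spectral = (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * t_kelvin))
    return float(np.sum(spectral * dlam))

L_mwir = band_radiance(300.0, 3.0, 5.0)    # 3-5 um band at 300 K
L_lwir = band_radiance(300.0, 8.0, 12.0)   # 8-12 um band at 300 K
```

At 300 K the 8-12 um band carries roughly twenty times the radiance of the 3-5 um band, which is why ambient-temperature scenes favor LWIR sensitivity.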
Frame rate is one of the most important parameters for evaluating the performance of commercial longwave infrared focal plane arrays (LWIR FPAs). However, in the literature, few works investigate the effects of frame rate on the behavior of the main figures of merit (FOM). In this work, the FOM of a LWIR FPA are analyzed while varying the frame rate from 25 fps to 60 fps. In order to verify the influence of frame rate on the FOM, the FPA raw data are acquired while the array is exposed to a wide-area blackbody at temperatures of 25°C and 60°C. It is observed that, under certain conditions, changes in frame rate can improve the noise equivalent temperature difference (NETD) and the signal transfer function (SiTF) by 50% and 60%, respectively. Another point that deserves attention is the unexpected emergence of a minimum in the detector operability versus frame rate curve at 45 fps. As expected, the output voltage pixel response is also affected when the frame rate is changed. In particular, for the 60°C temperature, variations greater than 60% in this voltage are observed.
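The NETD and SiTF figures of merit can be estimated from frame stacks recorded against two blackbody temperatures, as in the 25°C/60°C measurement described above. The sketch below does this on synthetic data; the responsivity and temporal-noise values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
resp, noise_dn = 40.0, 2.0   # hypothetical responsivity (DN/K) and noise (DN)

def stack(temp_k, n_frames=32, shape=(64, 64)):
    """Synthetic frame stack viewing a uniform blackbody at temp_k."""
    return resp * temp_k + rng.normal(0.0, noise_dn, (n_frames,) + shape)

def netd_and_sitf(frames_lo, frames_hi, t_lo, t_hi):
    """SiTF as mean signal difference per kelvin; NETD as the mean
    per-pixel temporal noise divided by the SiTF."""
    sitf = (frames_hi.mean() - frames_lo.mean()) / (t_hi - t_lo)
    temporal_noise = frames_lo.std(axis=0).mean()
    return sitf, temporal_noise / sitf

sitf, netd = netd_and_sitf(stack(25.0), stack(60.0), 25.0, 60.0)
```

With the assumed 40 DN/K responsivity and 2 DN temporal noise, the recovered NETD is about 0.05 K, showing how a frame-rate-induced change in either quantity propagates directly into the figure of merit.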
Accurate radiometric calibration of IR sources can be challenging, but it is required for the advanced sensors in use today. Santa Barbara Infrared has developed a new test facility to provide spectro-radiometric calibration of extended-area sources. The station comprises a Bruker Invenio Fourier transform infrared (FTIR) spectrometer, a NIST-traceable, high-emissivity DB-04 blackbody reference, and an automated stage for switching between the reference source and the unit under test. The system uses a series of differential measurements to perform the radiometric calibration. The first output of the calibration is a spectral emissivity that can be used to calculate output radiance based on the temperature measured in the well of the blackbody source. The second output is a derived gradient term allowing calculation of the temperature of the surface of the source based on the temperature of the thermometric measurement well and the temperature of the ambient environment. The additional gradient term allows for improved radiometric accuracy when operating at source and environment temperatures different from those at which the source was calibrated.
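The role of the spectral-emissivity output can be illustrated with a simplified gray-surface radiative model: the measured radiance is the emitted term plus reflected ambient radiance, and a differential comparison against Planck curves inverts for emissivity. The spectral shape and temperatures below are invented; the actual procedure relies on FTIR measurements against the NIST-traceable reference.

```python
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

def planck(lam_um, t_kelvin):
    """Planck spectral radiance (W m^-3 sr^-1) at lam_um microns."""
    lam = lam_um * 1e-6
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * t_kelvin))

lam = np.linspace(3.0, 12.0, 200)             # microns
t_src, t_amb = 330.0, 295.0                   # source / ambient temperatures (K)
eps_true = 0.96 - 0.02 * (lam - 3.0) / 9.0    # hypothetical spectral emissivity

# Forward model: emitted term plus reflected ambient radiance
L_meas = eps_true * planck(lam, t_src) + (1.0 - eps_true) * planck(lam, t_amb)

# Inversion: solve the two-term model for the spectral emissivity
eps_est = (L_meas - planck(lam, t_amb)) / (planck(lam, t_src) - planck(lam, t_amb))
```

The inversion is exact for this idealized model; real calibrations must additionally account for instrument response and path effects, which is what the differential measurement scheme addresses.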
The design, development, and performance evaluation of the imaging infrared seeker system of the FOK missile are presented in this paper. FOK is a guided missile iteratively developed since 2017 by the Students' Space Association at Warsaw University of Technology, designed as a testbed for the development of control algorithms. Throughout the years of development the design has evolved, and it currently assumes the incorporation of an electro-optical seeker, enabling the missile to actively search for and track ground-based targets. An infrared detection module was chosen as the core sensing component of the system. The module consists of an uncooled 640x512 microbolometer detector array integrated with dedicated optics, providing a high-resolution image in the 8–12 μm spectral range. Since the seeker will be mounted at the front of the missile, an IR-transparent nose cone dome was required as part of the optical system. Two dome variants were designed, made of silicon and germanium, respectively. Optical parameters of the imaging system, composed of the dome, objective, and microbolometer array, were calculated and confirmed using ray-tracing software. The designed domes were optimized to minimize image quality degradation. Key parameters considered during the calculations included optical aberrations, the modulation transfer function (MTF), and the field of view (FOV). Software-in-the-loop tests using in-house developed simulation software will be performed together with hardware-in-the-loop tests based on capturing real images with the tested system installed under a UAV. The goal of these tests is to check the performance of the developed guidance algorithms in an operational-like environment.
This study describes a sensitivity analysis of the absorption spectra of NIR/SWIR-absorbing dyes with respect to inverse analysis of diffuse-reflectance spectra. Absorption spectra of NIR/SWIR-absorbing dyes obtained by inverse spectral analysis provide information for estimating their dielectric response functions. Sufficient sensitivity of the absorption spectra with respect to the inverse analysis implies that the estimated dielectric response functions can be used to construct approximate effective-medium models capable of estimating reflectance from dye formulations on substrates, e.g., fabrics. The specific concept considered here is the application of inverse spectral analysis to diffuse reflectance, measured with field spectrometers, for estimation of the dielectric response.
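A standard, simpler relation in the same spirit as this inverse analysis is the Kubelka-Munk remission function, which maps the diffuse reflectance of an optically thick layer to an absorption-to-scattering ratio and is exactly invertible. The sketch below shows the forward and inverse maps; the study's actual inversion targets dielectric response functions rather than K/S, so this is only an analogy.

```python
import numpy as np

def kubelka_munk(r_inf):
    """Kubelka-Munk remission function F(R) = (1 - R)^2 / (2R),
    proportional to the absorption-to-scattering ratio K/S for an
    optically thick diffuse reflector."""
    r = np.asarray(r_inf, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

def kubelka_munk_inverse(ks):
    """Invert F(R) for reflectance: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S),
    the root of the quadratic that lies in (0, 1]."""
    ks = np.asarray(ks, dtype=float)
    return 1.0 + ks - np.sqrt(ks * (ks + 2.0))

r = np.array([0.9, 0.5, 0.1])      # example diffuse reflectances
ks = kubelka_munk(r)
r_back = kubelka_munk_inverse(ks)  # round-trips to the input
```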
A microlens array is an array of many micro-sized lenslets of the same shape arranged in a regular pattern, typically square or hexagonal. The array's behavior is determined by the shape of its lenslets, and such arrays are used for various laser beam-shaping purposes; because of the regular lenslet pattern, they provide uniform laser power density. Powell lenses, also known as laser line generator lenses, create straight, uniform laser lines by fanning out a collimated input beam in one dimension in a controlled way. In this study, the background on microlens arrays and Powell lenses necessary for optical design is first introduced. Optical designs using microlens arrays and Powell lenses are then examined, and experimental results are compared with theoretical results.
The detection performance of an IR camera was analyzed using GRD analysis and NVTherm for a target size of 3 m × 6 m. The optical system design specifications were set with an F-number of 1.6, a focal length range of 76.2 mm to 24.456 mm, and a field of view of 9.1° × 6.1° to 29.9° × 22.0°, using a method that converts recognition range performance into GRD. We measured the image resolution performance of the camera produced based on the analysis results in a laboratory environment. We installed a 4-bar target (I') with a spatial frequency corresponding to GRD (I) m at target distance A km on an optical collimator, and acquired imagery of the bar target by setting the temperature difference corresponding to the target distance A km for distance simulation. The GRD resolution was defined as the resolution for which the number of clearly resolved images among the 50 acquired images was 80% or higher (40 images or more). The measurement results confirmed that the GRD target corresponding to A km was well resolved. The detection range performance derived from the GRD analysis was thus experimentally demonstrated.
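The conversion between a GRD at range and an equivalent collimator bar-target geometry can be sketched as below. The convention used (one line pair per GRD) and the numbers (1 m GRD, 4 km range, 2 m collimator focal length) are illustrative assumptions, not values from the study, and conventions vary between half-cycle and full-cycle definitions.

```python
def grd_to_spatial_freq(grd_m, range_km):
    """Angular spatial frequency (cycles/mrad) equivalent to a ground
    resolved distance: one line pair of extent grd_m subtends
    grd_m / range_km milliradians (1 mrad at R km spans R metres)."""
    cycle_mrad = grd_m / range_km
    return 1.0 / cycle_mrad

def bar_target_bar_width(freq_cyc_mrad, collimator_focal_m):
    """Physical bar width (metres) of a 4-bar target at the collimator
    focal plane: one cycle is two bar widths, and 1 mrad projects to
    1e-3 * focal_length metres."""
    cycle_m = (1.0 / freq_cyc_mrad) * 1e-3 * collimator_focal_m
    return cycle_m / 2.0

f = grd_to_spatial_freq(1.0, 4.0)   # 1 m GRD at 4 km -> 4 cycles/mrad
w = bar_target_bar_width(f, 2.0)    # bar width for a 2 m focal length
```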