NOAA plans to build a Geostationary Lightning Mapper (GLM) whose objective is to provide continuous, full-disk
lightning measurements for storm warning and science applications. Because telemetry bandwidth is limited, much of the
detection processing will be done autonomously.
Since the contractor is responsible for the autonomously generated output, which consists of detection reports rather than images, we
took a design approach that does not stop at a signal-to-noise calculation but instead simultaneously considers the
effects of hardware configurations and algorithm choices. Key requirements for GLM are the probability of detection
(PD) and probability of false alarm (PFA). Our approach allows us to provide a system with the best PD and PFA
performance at the best value. We have accomplished this by developing an analytical model that can find "knees in the
curve" among our hardware configuration choices and an algorithm prototype that provides realistic end-to-end
performance estimates. These tools allow us to develop an optimal system because we have a good handle on realistic performance
prior to launch.
Our tools rely on descriptions of lightning phenomena embodied in probability densities we developed for the amplitude,
temporal, and spatial distributions of lightning optical pulses. The "analytic model" uses tabulated integration formulae
and conventional numerical integration to implement an analytical solution for the PD estimate. The average PD is
computed quickly, making the analytic model the choice for rapid evaluation of sensor design parameter effects.
The "algorithm prototype" uses simulation, consisting of data cubes of time-elapsed imagery containing lightning
pulses and structured backgrounds, together with prototyped detection and false-alarm mitigation algorithms, to estimate PD and
PFA. This approach provides realistic performance estimates by accounting for scene spatial structure and apparent motion.
We discuss the design and function of these tools, show results indicating how PD and PFA performance varies
with changes in sensor and algorithm parameters, and describe how we use these tools to improve our instrument design
capabilities.
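The analytic model's core calculation, numerically integrating the per-pulse detection probability over a pulse-amplitude density, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the log-normal amplitude density, the single-sample Gaussian-noise threshold test, and all parameter values are assumptions for the sake of a runnable example.

```python
import math

def detect_prob(amplitude, threshold, noise_sigma):
    # Probability that a pulse of the given amplitude exceeds the detection
    # threshold in additive Gaussian noise (single-sample model, assumed).
    return 0.5 * math.erfc((threshold - amplitude) / (noise_sigma * math.sqrt(2.0)))

def lognormal_pdf(a, mu, sigma):
    # Assumed amplitude density for lightning optical pulses (illustrative).
    if a <= 0.0:
        return 0.0
    return (math.exp(-(math.log(a) - mu) ** 2 / (2.0 * sigma ** 2))
            / (a * sigma * math.sqrt(2.0 * math.pi)))

def average_pd(threshold, noise_sigma, mu, sigma, a_max=100.0, n=2000):
    # Average PD = integral of detect_prob(a) * p(a) da, evaluated with the
    # trapezoidal rule over [0, a_max].
    da = a_max / n
    total = 0.0
    for i in range(n + 1):
        a = i * da
        w = 0.5 if i in (0, n) else 1.0
        total += w * detect_prob(a, threshold, noise_sigma) * lognormal_pdf(a, mu, sigma)
    return total * da
```

Because the integrand is smooth and one-dimensional, this kind of average-PD evaluation is fast, which is what makes a sweep over sensor design parameters (threshold, noise level) practical.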
One of the key requirements of real-time processing systems for remote sensors is the ability to accurately and automatically geo-locate events. This capability often relies on the ability to find control points to feed into a registration-based geo-location algorithm. Clouds can make the choice of control points difficult. If each pixel in a given image can be identified as cloudy or clear, the geo-location algorithm can limit control point selection to clear pixels, thereby improving registration accuracy. Most cloud masking algorithms rely on a large number of spectral bands for good results (e.g., MODIS), whereas our sensor has only three simultaneous bands available. This paper discusses a promising new approach to generating cloud masks in real time with a limited number of spectral bands. The effort investigated statistical methods as well as spatial and texture-based approaches, and evaluated performance on real remote sensing data. Although the spatial and texture-based approaches did not perform well, owing to the sensor's limited spatial resolution and the large variation in spectral response of both surface features and clouds, the statistical classification approach applied to only two bands performed very well. Images from three daytime remote sensing collects were analyzed to determine the features that best separate pixels into cloudy and clear classes. A Bayes classifier was then applied to feature vectors computed for each pixel to generate a binary cloud mask. Initial results are excellent and show very good accuracy over a variety of terrain types, including mountains, desert, and coastline.
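The per-pixel Bayes classification step can be sketched as below. This is a minimal naive-Bayes illustration, not the paper's classifier: the per-class Gaussian band statistics, the independence assumption across bands, and all numeric values are placeholders; in practice the class statistics would be trained from labeled imagery such as the three collects described above.

```python
import math

def gaussian(x, mean, var):
    # Univariate normal density.
    return math.exp(-(x - mean) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

def classify_pixel(pixel, class_stats, priors):
    # pixel: tuple of band values, e.g. (band1, band2).
    # class_stats: per class, a list of (mean, variance) pairs, one per band.
    # Bands are treated as conditionally independent (naive Bayes assumption).
    best_cls, best_score = None, -1.0
    for cls, stats in class_stats.items():
        score = priors[cls]
        for value, (mean, var) in zip(pixel, stats):
            score *= gaussian(value, mean, var)
        if score > best_score:
            best_cls, best_score = cls, score
    return best_cls

def cloud_mask(image, class_stats, priors):
    # Binary mask: True where the classifier labels the pixel "cloud".
    return [[classify_pixel(px, class_stats, priors) == "cloud" for px in row]
            for row in image]
```

With two bands, each pixel yields a two-element feature vector; the classifier picks whichever class ("cloud" or "clear") gives the higher posterior score, producing the binary mask directly.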
An integrated optics, controls, and structures modeling tool has been developed to analyze the performance of complex electro-optical (EO) sensing systems. Hosted within an object-oriented graphical environment (Khoros) developed by the University of New Mexico, complex systems such as active ground-based telescopes, airborne spectrometers, and space-based sparse array telescopes can be simulated and rapidly evaluated. The TAOS model integrates data products from existing codes such as MATLAB, CODE V, NASTRAN, and others to allow multi-disciplinary parametric analysis of system performance. Because the model includes accurate physical optics and radiometric representations, almost any function of an optical system can be quickly generated and studied. In addition, degrading effects of dynamic structures, use of compensating control systems, and effects of the observing environment (wind load, boundary layer, and seeing) can also be included. Use of this simulation tool on NASA programs such as the Space Telescope Imaging Spectrometer has reduced design schedules by a factor of three. Other typical analysis applications include the study of atmospheric compensated imaging systems using combined adaptive optics/post-processing techniques, simulation of hyper-spectral imagers, and methods for achieving coherent phasing of telescope arrays. This paper also provides a progress report on TAOS modeling of the European Southern Observatory (ESO) Very Large Telescope (VLT).
Images obtained from ground-based telescopes are distorted by the effects of atmospheric turbulence. This disturbance can be compensated for by employing adaptive optics (predetection compensation), image reconstruction techniques (postdetection compensation), or a combination of both (hybrid compensation). This study derives analytic expressions for the residual mean squared error of each technique. These mean squared error expressions are then used to parametrically evaluate the performance of the compensated imaging techniques under varying conditions. Parameters of interest include actuator spacing, coherence length, and wavefront sensor noise variance. It is shown that hybrid imaging allows for the design of lower cost systems (fewer actuators) that still provide good correction. The adaptive optics system modeled includes a continuous faceplate deformable mirror and a Shack-Hartmann wavefront sensor. The linear image reconstruction technique modeled is deconvolution via inverse filtering. The hybrid system employs the adaptive optics for first-order correction and the image reconstruction for higher-order correction. This approach is not limited to correction of atmospheric-turbulence-degraded images. It can be applied to other disturbances, such as space platform jitter, as long as the corresponding structure function can be estimated.
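The parametric tradeoff described above, fewer actuators versus residual error, can be illustrated with a crude error budget. This sketch is not the paper's derived expressions: the 5/3-power fitting-error law and the constant kappa ≈ 0.28 are commonly quoted values for continuous-faceplate deformable mirrors, and the additive noise term is a simplifying assumption.

```python
def fitting_error_variance(d, r0, kappa=0.28):
    # Residual phase variance (rad^2) after a continuous-faceplate deformable
    # mirror with actuator spacing d corrects turbulence of coherence length
    # r0. kappa ~ 0.28 is a commonly quoted fitting constant for this mirror
    # type; treat both the constant and the 5/3 law as assumptions here.
    return kappa * (d / r0) ** (5.0 / 3.0)

def total_residual_variance(d, r0, sensor_noise_var, kappa=0.28):
    # Simplified budget: mirror fitting error plus wavefront sensor
    # measurement noise propagated to the corrected wavefront (assumed
    # additive and independent).
    return fitting_error_variance(d, r0, kappa) + sensor_noise_var
```

Doubling the actuator spacing (roughly quartering the actuator count over an aperture) raises the fitting-error variance by a factor of 2^(5/3) ≈ 3.2; the hybrid approach is attractive precisely because postdetection reconstruction can absorb part of that increase.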