The U.S. Army CECOM Center for Night Vision and Electro-Optics (C2NVEO) has established a facility for Computer-generation of Realistic Environments with Atmospheres for Thermal Imagery with Optics and Noise (CREATION). Its application to producing imagery for two visual test series is discussed. Panoramic views of synthetic generic landscapes, with different degrees of clutter and inserted tanks, were produced for a search experiment. Close-up thermograms of vehicles were processed to simulate the impact of different thermal detector organizations and then used to analyze sampling effects. The methods of generating synthetic imagery chosen for these two tasks are compared with readily available software, and the reasons for the particular choices are given.
The 1976-vintage LASERX computer code has been augmented to produce realistic electro-optical images of targets. Capabilities lacking in LASERX but recently incorporated into its VALUE successor include:
•Shadows cast onto the ground
•Shadows cast onto parts of the target
•See-through transparencies (e.g., canopies)
•Apparent images due to both atmospheric scattering and turbulence
•Surfaces characterized by multiple bi-directional reflectance functions
VALUE not only provides realistic target modeling through its precise and comprehensive representation of all target attributes, but is also very user friendly. Specifically, runs are set up through screen-prompted menus in a sequence of queries that is logical to the user. VALUE also incorporates the Optical Encounter (OPEC) software developed by Tricor Systems, Inc., Elgin, IL.
For the past several years the U.S. Army Tank-Automotive Command (TACOM) has sponsored the development of computer models to simulate the thermal signature of concept vehicles. The entire simulation process involves the prediction of target and background temperatures, creation of the 3-D graphics database, and the addition of atmospheric and sensor effects to produce a simulated infrared or thermal image. This paper describes the model integration necessary to create thermal signature animations for concept vehicles; each model is described in some detail, along with its interface to the other thermal models.
A method for simulating infrared images based on Solids Modeling has been developed at ERIM. The method predicts temperatures of complicated objects that can be described easily using Solids Modeling. The key development is a thermal conduction model that is created automatically from the Solids Model; this thermal model is required to predict the temperatures of the object. The creation of the thermal conduction model is rooted in the concept of automatically converting the geometry from one kind of representation to another. Such automatic conversion of the geometry representation maintains the ease of use in creating complicated geometries expected from today's Solids Modelers while allowing temperature prediction to be accomplished practically. The method also allows multispectral image simulation to be accomplished with a single geometry representation. The key technique used to create the thermal model is discussed, along with how the current method fits into the context of integrated, multispectral image simulation.
The process of developing a physical description of a target for thermal models is a time-consuming and tedious task. The problem is one of data collection, data manipulation, and data storage. Information on targets can come from many sources and therefore may be in any form (2-D drawings, 3-D wireframe or solid-model representations, etc.). TACOM has developed a preprocessor that decreases the time involved in creating a faceted target representation. This program allows the user to create the graphics for the vehicle and to assign material properties to the graphics. The vehicle description file is then generated automatically by the preprocessor. Because all the information is contained in one database, the modeling process is more accurate and data can be traced easily. A bridge to convert other graphics packages (such as BRL-CAD) to a faceted representation is being developed; when it is finished, this preprocessor will be used to manipulate the converted data.
The Department of Defense has a requirement to investigate technologies for the detection of air and ground vehicles in a clutter environment. The use of autonomous systems using infrared, visible, and millimeter wave detectors has the potential to meet DOD's needs. In general, however, the hardware technology (large detector arrays with high sensitivity) has outpaced the development of processing techniques and software. In a complex background scene the "problem" is as much one of clutter rejection as it is target detection. The work described in this paper has investigated a new and innovative methodology for background clutter characterization, target detection, and target identification. The approach uses multivariate statistical analysis to evaluate a set of image metrics applied to infrared cloud imagery and terrain clutter scenes. The techniques are applied to two distinct problems: the characterization of atmospheric water vapor cloud scenes for the Navy's Infrared Search and Track (IRST) applications to support the Infrared Modeling Measurement and Analysis Program (IRAMMP), and the detection of ground vehicles for the Army's Autonomous Homing Munitions (AHM) problems. This work was sponsored under two separate Small Business Innovative Research (SBIR) programs by the Naval Surface Warfare Center (NSWC), White Oak, MD, and the Army Materiel Systems Analysis Activity at Aberdeen Proving Ground, MD. The software described in this paper will be available from the respective contract technical representatives.
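The abstract does not specify the image metrics or the multivariate statistics used. Purely to illustrate the general shape of such an approach, the Python sketch below computes a few made-up block-wise clutter metrics (local standard deviation, mean, edge strength) and projects them onto principal components; the metric choices and the use of PCA are assumptions for illustration, not the paper's method.

import numpy as np

def block_metrics(img, block=32):
    """Simple clutter metrics (std, mean, edge strength) per image block.
    These particular metrics are illustrative, not those of the paper."""
    gy, gx = np.gradient(img.astype(float))
    edges = np.hypot(gx, gy)
    rows = []
    for i in range(0, img.shape[0] - block + 1, block):
        for j in range(0, img.shape[1] - block + 1, block):
            patch = img[i:i + block, j:j + block]
            epatch = edges[i:i + block, j:j + block]
            rows.append([patch.std(), patch.mean(), epatch.mean()])
    return np.array(rows)

def principal_components(metrics):
    """Project metric vectors onto their principal axes (one possible
    multivariate analysis; the paper's exact statistics may differ)."""
    centered = metrics - metrics.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(centered, rowvar=False))
    order = np.argsort(vals)[::-1]
    return centered @ vecs[:, order], vals[order]

# toy example on a synthetic clutter scene
scene = np.random.rand(256, 256) ** 2
scores, variances = principal_components(block_metrics(scene))
print(scores.shape, variances)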
The development and evaluation of algorithms to detect targets against cloud backgrounds requires a comprehensive understanding of clutter properties such as radiance distributions, textures, and edge effects. A number of measurement programs are collecting data for this purpose; however, they are constrained by the vast amounts of data required and by the limited resources available for obtaining it. This paper describes and shows results from a first-principles infrared cloud scene radiance model. The work is sponsored by the Naval Surface Warfare Center (NSWC) through a Small Business Innovative Research (SBIR) program to support IRAMMP (Infrared Analysis Modeling and Measurement Program, formerly BMAP) as part of the Navy's Infrared Search and Track effort. The model is designed to handle arbitrary viewing geometries, atmospheric conditions, and sensor parameters. The output is a two-dimensional (n x m pixel) scene radiance map which can be used by system designers, data takers, and analysts.
In this paper we show that the mathematical theory known as image algebra not only incorporates the mathematics underlying artificial neural networks, but also provides for novel methods of neural computing. These methods are not covered by current neural network models but are an intrinsic part of the image algebra. In this sense, image algebra provides a mathematical framework for a more general theory of artificial neural networks and a language for computing with neural networks.
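As a cartoon of the connection being claimed (and not the paper's formalism), note that the affine part of a neural network layer is a special case of a generalized image-template product: an "image" is a real-valued function on a finite point set, a "template" assigns a weight to each (output point, input point) pair, and the product reduces to a matrix-vector operation followed by a pointwise nonlinearity. A minimal Python sketch with invented sizes and weights:

import numpy as np

def image_template_product(image, template):
    # template[y, x] = weight that output point y places on input point x
    return template @ image

def neural_layer(image, template, bias, f=np.tanh):
    # pointwise nonlinearity applied after the image-template product
    return f(image_template_product(image, template) + bias)

a = np.random.rand(16)            # input "image" on 16 points
W = np.random.randn(8, 16) * 0.1  # template: 8 output points, 16 weights each
b = np.zeros(8)
print(neural_layer(a, W, b))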
Inversion formulas for tridiagonal Toeplitz matrices are used to give exact inversion formulas for rank one convolution operators. These formulas are used to analyze the existence and behavior of the inverse in both the diagonally dominant and non-diagonally dominant cases.
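The exact inversion formulas are not reproduced here. As a numerical illustration of the behavior the abstract refers to, the sketch below builds small tridiagonal Toeplitz matrices and contrasts the off-diagonal decay and conditioning of the inverse in a diagonally dominant case and a borderline case; the specific entries are arbitrary.

import numpy as np

def tridiag_toeplitz(n, sub, diag, sup):
    """n x n Toeplitz matrix with constant sub-, main, and super-diagonals."""
    return (np.diag(np.full(n - 1, sub), -1)
            + np.diag(np.full(n, diag))
            + np.diag(np.full(n - 1, sup), 1))

n = 12
dominant = tridiag_toeplitz(n, -1.0, 4.0, -1.0)    # |diag| > |sub| + |sup|
borderline = tridiag_toeplitz(n, -1.0, 2.0, -1.0)  # not strictly dominant

for name, T in [("dominant", dominant), ("borderline", borderline)]:
    inv = np.linalg.inv(T)
    # corner entry of the inverse decays fast when diagonally dominant,
    # slowly otherwise; conditioning degrades accordingly
    print(name, abs(inv[0, -1]), np.linalg.cond(T))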
A binary image is represented by a polynomial in two variables over GF(2), and several algebraic operators are developed in this environment to process images, for example, to find contours and to shrink, magnify, and approximate images. A data structure for storing images as polynomials is obtained. The image processing techniques developed can also be used to process gray-level images and color pictures.
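One way to picture the representation (an illustrative guess at the data structure, not the paper's algorithms): store the polynomial as the set of exponent pairs (i, j) whose coefficient is 1, so that addition over GF(2) is a symmetric difference and multiplication is carry-free polynomial multiplication. A minimal Python sketch:

def poly_from_image(img):
    """Binary image -> polynomial over GF(2), stored as the set of (i, j)
    exponent pairs with coefficient 1 (pixel (i, j) contributes x^i * y^j)."""
    return {(i, j) for i, row in enumerate(img) for j, v in enumerate(row) if v}

def poly_add(p, q):
    """Addition over GF(2): coefficients combine mod 2 (symmetric difference)."""
    return p ^ q

def poly_mul(p, q):
    """Polynomial multiplication with coefficients reduced mod 2."""
    out = set()
    for (a, b) in p:
        for (c, d) in q:
            out ^= {(a + c, b + d)}
    return out

img = [[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]]
p = poly_from_image(img)
# multiplying by (1 + x) is a shift-and-XOR along one axis, the kind of
# primitive from which operators such as contour extraction can be built
print(sorted(poly_mul(p, {(0, 0), (1, 0)})))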
It has been well established that the Air Force Armament Technical Laboratory (AFATL) image algebra is capable of expressing all linear transformations [7]. The embedding of the linear algebra in the image algebra makes this possible. In this paper we show a relation of the image algebra to another algebraic system called the minimax algebra. This system is used extensively in economics and operations research, but until now has not been investigated for applications to image processing. The relationship is exploited to develop new optimization methods for a class of non-linear image processing transforms. In particular, a general decomposition technique for templates in this non-linear domain is presented. Template decomposition techniques are an important tool in mapping algorithms efficiently to both sequential and massively parallel architectures.
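For readers unfamiliar with the minimax setting, gray-scale dilation is the (max, +) analogue of linear convolution, and decomposing a template means finding small templates whose (max, +) product reproduces the large one, so that one large dilation can be replaced by a sequence of small ones. The Python sketch below only verifies that composition property numerically for two arbitrary 3x3 templates; it is not the decomposition technique presented in the paper.

import numpy as np

def dilate(f, t):
    """Gray-scale dilation of f by template t: a (max, +) convolution."""
    t = t[::-1, ::-1]                  # convolution (not correlation) orientation
    th, tw = t.shape
    ph, pw = th // 2, tw // 2
    padded = np.pad(f, ((ph, ph), (pw, pw)), constant_values=-np.inf)
    out = np.full(f.shape, -np.inf)
    for di in range(th):
        for dj in range(tw):
            out = np.maximum(out, padded[di:di + f.shape[0],
                                         dj:dj + f.shape[1]] + t[di, dj])
    return out

rng = np.random.default_rng(0)
f = rng.random((32, 32))
t1, t2 = rng.random((3, 3)), rng.random((3, 3))
# the (max, +) "product" of the two 3x3 templates is a 5x5 template ...
big = dilate(np.pad(t1, 1, constant_values=-np.inf), t2)
# ... and dilating by it equals dilating by t1 and then by t2
print(np.allclose(dilate(f, big), dilate(dilate(f, t1), t2)))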
Development and testing of image processing algorithms for real-time aerospace pattern recognition applications can be extremely time consuming and labor intensive. There is a need to close the gap between high-level software environments and efficient implementations. Image algebra is an algebraic structure designed for image processing that can be used as a basis for a high-level algorithm development environment. Systematic methods for mapping algorithms represented by image algebra statements to specific architectures are being studied. In this paper we discuss template decomposition, a problem encountered in mapping image algebra statements to combinations of parallel and pipeline architectures. In particular, we show that the gray scale morphological template decomposition problem can be viewed as a linear problem, even though morphological transformations are nonlinear. We show how methods for solving linear programming problems and, in particular, the transportation problem can be applied to template decomposition.
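The reduction from gray-scale template decomposition to a transportation problem is not reproduced here. Purely as a reminder of what a transportation problem looks like and how it is handled by standard linear programming routines, the sketch below solves a toy instance with made-up supplies, demands, and costs using scipy.optimize.linprog.

import numpy as np
from scipy.optimize import linprog

# A toy transportation problem: ship from 2 sources to 3 sinks at minimum
# cost, subject to supply and demand constraints.  The numbers are made up.
supply = np.array([30.0, 20.0])
demand = np.array([10.0, 25.0, 15.0])
cost = np.array([[8.0, 6.0, 10.0],
                 [9.0, 12.0, 13.0]])

m, n = cost.shape
c = cost.ravel()                       # decision variables x[i, j], row-major

A_eq, b_eq = [], []
for i in range(m):                     # each source ships exactly its supply
    row = np.zeros(m * n)
    row[i * n:(i + 1) * n] = 1.0
    A_eq.append(row); b_eq.append(supply[i])
for j in range(n):                     # each sink receives exactly its demand
    row = np.zeros(m * n)
    row[j::n] = 1.0
    A_eq.append(row); b_eq.append(demand[j])

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, None))
print(res.x.reshape(m, n), res.fun)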
Due to the incomplete and partial nature of information in the computer vision, multisensor fusion and spatial reasoning problem domains, the image algebra is extended to a stochastic image algebra and a stochastic world model. These stochastic models provide a systematic approach to the parallel or neural network implementation of algorithms in computer vision, multisensor fusion and spatial reasoning.
A current critical problem in Automatic Target Recognition (ATR) technology is the inability to effectively evaluate the performance of ATR systems and their component algorithms. Beyond the problem of evaluating system performance, it is often impossible to determine why a system is performing poorly under certain circumstances. This is largely due to the relatively unsophisticated tools and methods currently employed to extract and analyze the vast quantities of data processed by such systems. Testing, evaluation, and refinement of complex ATR systems and their component algorithms constitute by far the longest stage in the development life cycle. Adequate tools, techniques, standards, and ground truthing are critically needed for effective diagnostics and evaluation of next-generation ATR systems. In this paper, we present a thesis on the critical issues of ATR evaluation and potential solutions.
This paper describes a Bayesian multitarget identification algorithm for a multisensor airborne surveillance system. The identification algorithm is part of the joint multitarget tracking and identification algorithm derived for the airborne surveillance system. We show that adding identity to the position and velocity state for each target improves the capability to associate sensor reports with target tracks. This paper also formulates a generalized model for the sensor observables used for target identification; the generalized model is used to develop a recursive identification algorithm and to evaluate the amount of information provided by each of the sensor observables for target identification. Results obtained from a prototype of the decision aid demonstrate the effectiveness of the identification algorithm in identifying targets in a multitarget surveillance scenario.
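The sensor models and observables are not given in the abstract. As a minimal illustration of the recursive structure only (a posterior over target classes updated multiplicatively by each report's likelihood), the sketch below uses an invented three-class confusion matrix as the sensor likelihood; the classes and numbers are hypothetical.

import numpy as np

# Hypothetical target classes and an invented sensor confusion matrix:
# P(declared class | true class).  Values are illustrative only.
classes = ["fighter", "bomber", "transport"]
confusion = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.75, 0.15],
                      [0.05, 0.20, 0.75]])

def update(prior, declared_idx):
    """One recursive Bayes step: posterior is proportional to likelihood * prior."""
    likelihood = confusion[:, declared_idx]   # P(report | each true class)
    posterior = likelihood * prior
    return posterior / posterior.sum()

belief = np.full(len(classes), 1.0 / len(classes))   # uniform prior
for report in [0, 0, 1, 0]:                          # sequence of sensor reports
    belief = update(belief, report)
print(dict(zip(classes, belief.round(3))))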
Image segmentation is an essential step in every practical image processing system. Current image segmentation algorithms suffer from a well-known problem: they perform poorly on images different from the ones that were used in their initial development and training stages. In this paper we discuss a system concept for automatic design of segmentation algorithms based on image and object metrics and on knowledge of image processing primitives. The proposed system concept makes use of planning techniques from Artificial Intelligence. This paper provides the essential elements of next-generation Automatic Segmentation Design (ASD) systems that will not be scenario dependent. Applications of this concept include robotic vision, automatic target recognition, and image understanding systems.
A method that forms beliefs and priorities for objects detected at long range in infrared imagery is described. Features such as motion, context, and crude shape are used to form positive and negative evidence of the priority and class of the detections. The features that are used are described, along with the method for calculating them. Two methods that determine the detection priority and class belief are also described, and the results of using this belief system on several cinematic sequences are shown.
The synthetic estimation filter (SEF) was previously designed for implementation in an optical correlator. The intent was to find a simple method that would reduce the number of reference images necessary to track six degrees of freedom of an object, while avoiding the more complicated computations necessary in other methods of similar intent. Initial laboratory results are shown for estimating in-plane rotation with a minimal number of filters.
While performing the photo interpretation task using very high resolution images, the resolution of the image is often reduced to make its processing feasible. However, in low resolution images it becomes quite difficult to segment and locate targets of interest, such as aircraft, which are relatively small. Further, in recognizing aircraft it is generally assumed that the aircraft have already been located, and the emphasis is placed on model matching for recognizing isolated aircraft. However, locating potential areas in the images where aircraft may be found is non-trivial, since it requires an accurate labeling of an image. We have developed a Knowledge-Based Photo Interpretation (KEPI) system that analyzes high resolution images. This system locates aircraft by first finding large structures in low resolution images and focusing attention on areas such as tarmacs, runways, and parking areas that have a high probability of containing aircraft. Higher resolution images of the regions that are the focus of attention are used in subsequent analysis. The system makes extensive use of contextual knowledge, such as spatial and locational information about airport scenes. We show results using high resolution TV data.
The conventional Z transform for images is used in creating a spatial parallel algorithm for convolution. Z transforms and corresponding spatial algorithms are also given for wraparound convolution, Mellin convolution as well as for the grey value dilation and erosion operations in mathematical morphology.
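As a reminder of the correspondence the paper builds on: multiplying the Z-transform polynomials of two sequences yields the Z-transform of their convolution, and replacing (add, multiply) by (max, add) in the same product gives gray-value dilation. The sketch below checks both identities numerically on arbitrary short sequences; it is not the paper's parallel algorithm.

import numpy as np

f = np.array([1.0, 3.0, 2.0, 4.0])
g = np.array([2.0, 1.0, 3.0])

# Convolution corresponds to multiplication of Z-transform polynomials:
# the coefficients of F(z) * G(z) are exactly conv(f, g).
print(np.allclose(np.polymul(f, g), np.convolve(f, g)))

# The (max, +) analogue of the same product gives gray-value dilation:
# h[k] = max over i + j = k of f[i] + g[j].
def max_plus_product(f, g):
    h = np.full(len(f) + len(g) - 1, -np.inf)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = max(h[i + j], fi + gj)
    return h

print(max_plus_product(f, g))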
Geometric image transformations are of interest to pattern recognition algorithms for their use in simplifying some aspects of the pattern recognition process. Examples include reducing sensitivity to rotation, scale, and perspective of the object being recognized. The NASA/Texas Instruments Programmable Remapper can perform a wide variety of geometric transforms at full video rate. We propose an architecture that extends its abilities and alleviates many of the first version's shortcomings. We discuss the need for the improvements in the context of the initial Programmable Remapper and the benefits and limitations we have seen it deliver. We discuss the implementation and capabilities of the proposed architecture.
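As background on why such remapping helps pattern recognition (this is the classic use of geometric remapping, not necessarily the specific transforms implemented by the Programmable Remapper): resampling an image onto a log-polar grid turns rotation and scaling about the center into simple translations. A rough nearest-neighbour Python sketch:

import numpy as np

def log_polar_remap(img, n_r=64, n_theta=64):
    """Resample an image onto a log-polar grid (nearest neighbour).  On this
    grid, rotation and scaling about the centre become translations."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_r, n_theta))
    for i in range(n_r):
        r = np.exp(np.log(r_max) * (i + 1) / n_r)   # logarithmic radial spacing
        for j in range(n_theta):
            a = 2 * np.pi * j / n_theta
            y = int(round(cy + r * np.sin(a)))
            x = int(round(cx + r * np.cos(a)))
            if 0 <= y < h and 0 <= x < w:
                out[i, j] = img[y, x]
    return out

img = np.zeros((128, 128)); img[40:88, 60:68] = 1.0   # a bar through the centre
print(log_polar_remap(img).shape)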
Real-time electronic and optical pattern recognition systems use correlation as a means for discriminating objects of interest from unwanted objects and residual clutter in an input scene. However, correlation is not a good technique for certain gray-level input images, particularly when the background has a high average value, such as is encountered in automatic target recognition environments. A better algorithm for these types of environments involves computing the difference-squared error between a reference template and the input image. This algorithm has been used for many years for recognizing gray-level images; however, because of the large number of bits of precision required to perform these computations, the algorithm is difficult to implement in real time using an electronic embedded computer where small size and low power are at a premium. This paper describes a method for implementing the difference-squared algorithm on an acousto-optic time-integrating correlator. This implementation can accommodate the high dynamic range requirement which is inherent in gray-scale recognition problems. The acousto-optic correlator architecture is a natural fit for this implementation because of its capability to perform two-dimensional processing utilizing relatively mature one-dimensional input devices. Furthermore, since this architecture uses an electronically stored reference, rotational and scale variations can be accommodated by rapidly searching through a library of templates, as first described by Psaltis [2]. The ability to implement the difference-squared algorithm in an acousto-optic correlator architecture has the potential for solving many practical target recognition problems where real-time discrimination of gray-level objects using a compact, low-power processor is required.
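The algebraic point can be stated briefly: the difference-squared error expands as sum((f - t)^2) = sum(f^2) - 2*sum(f*t) + sum(t^2), so a correlator that supplies the cross term and a local-energy term is sufficient. The sketch below is only a numerical check of that identity, not the acousto-optic implementation.

import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(1)
scene = rng.random((64, 64))
template = rng.random((8, 8))
th, tw = template.shape

# Direct difference-squared error map over all template placements
direct = np.array([[np.sum((scene[i:i+th, j:j+tw] - template) ** 2)
                    for j in range(scene.shape[1] - tw + 1)]
                   for i in range(scene.shape[0] - th + 1)])

# Same map built from correlation outputs, which is what a correlator
# architecture can supply: sum(f^2) - 2*corr(f, t) + sum(t^2)
local_energy = correlate2d(scene ** 2, np.ones_like(template), mode="valid")
cross = correlate2d(scene, template, mode="valid")
via_correlation = local_energy - 2.0 * cross + np.sum(template ** 2)

print(np.allclose(direct, via_correlation))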
A hybrid computer-controlled optical correlator has been developed, assembled, and evaluated for use as a rotation invariant multiple target recognition system. The system consists of an optical correlator with magneto-optic spatial light modulators (MOSLMs) at the input and filter planes and a vidicon at the correlation plane. A COMPAQ 386 (IBM compatible) personal computer with a frame grabber board is used to acquire, binarize and load binary amplitude-only video images to the input MOSLM, to write sequential stored Hartley binary phase-only filters to the filter MOSLM, and to sample and statistically analyze correlation plane data in order to locate and recognize objects of interest in the input scene. The sequential correlations and output data samples are obtained at near video rates (15 per second, limited by the peak detection algorithm and the asynchronous video interface), allowing multiple targets at any in-plane rotation in a given input image to be located and classified in less than 5 seconds. The results of a computer simulation and experiments indicate that this system can correctly identify objects within its target class more than 95% of the time, even in the presence of severe clutter and high noise densities (noise toggles 10% of all binary pixels).
A hybrid optical/digital system for object tracking in a sequence of images is described. The backbone of the system is a real-time optical joint transform correlator using a liquid crystal television. The massive parallelism, high processing speed, and adaptive properties of this optical system ensure a high correlation between objects in two sequential frames. The relative position of the object can then be determined from the location of the correlation peak. System performance is evaluated and experimental demonstrations are presented. The system also has the potential to perform real-time multi-object tracking.
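For readers unfamiliar with joint transform correlation, the digital analogue of the optical pipeline is: place the reference and the current frame side by side, Fourier transform, record the joint power spectrum, Fourier transform again, and read the object's inter-frame shift from the off-axis cross-correlation peak. The numpy sketch below mimics that pipeline with simplified on-axis suppression; it illustrates the principle, not the liquid-crystal hardware described in the paper.

import numpy as np

def jtc_shift(reference, frame):
    """Digital analogue of joint transform correlation: FFT of the joint
    input, joint power spectrum, second FFT, then locate the off-axis
    cross-correlation peak.  The decoded shift carries the usual JTC
    plus/minus sign ambiguity."""
    h, w = reference.shape
    joint = np.zeros((h, 2 * w))
    joint[:, :w] = reference              # reference in one half of the input plane
    joint[:, w:] = frame                  # current frame in the other half
    jps = np.abs(np.fft.fft2(joint)) ** 2 # joint power spectrum
    corr = np.abs(np.fft.fft2(jps))       # correlation plane
    corr[:, :w // 2] = 0                  # crude suppression of the on-axis term
    corr[:, -w // 2:] = 0
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy = ((peak[0] + h // 2) % h) - h // 2
    dx = peak[1] - w
    return int(dy), int(dx)

ref = np.zeros((64, 64)); ref[20:30, 20:30] = 1.0
frm = np.roll(ref, (3, 5), axis=(0, 1))   # the "object" moved by (3, 5)
print(jtc_shift(ref, frm))                # -> (3, 5) or (-3, -5)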
This paper presents a novel architecture for two VLSI ICs, an 8-bit and a 12-bit version, which execute real-time 3x3 kernel image convolutions in less than 10 ms per 512x512 pixel frame (at a 30 MHz external clock rate). The ICs are capable of performing "on-the-fly" convolutions of images without any need for external input image buffers. Both symmetric and asymmetric coefficient kernels are supported, with coefficient precision up to 12 bits. Nine on-chip multiplier-accumulators maintain double-precision accuracy for maximum precision of the results and minimum roundoff noise. In addition, an on-chip ALU can be switched into the pixel datapath to perform simultaneous pixel-point operations on the incoming data. Thus, operations such as thresholding, inversion, shifts, and double-frame arithmetic can be performed on the pixels with no extra speed penalty. Flexible internal datapaths of the processors allow several devices to be cascaded if larger image arrays need to be processed. Moreover, larger convolution kernels, such as 6x6, can easily be supported with no speed penalty by employing two or more convolvers. On-chip delay buffers can be programmed to any desired raster line width up to 1024 pixels. The delay buffers may also be bypassed when direct "sum-of-products" operation of the multipliers is required, such as when external frame buffer address sequencing is desired. These features make the convolvers suitable for applications such as affine and bilinear interpolation, one-dimensional convolution (FIR filtering), and matrix operations. Several examples of applications illustrating stand-alone and cascade-mode operation of the ICs are discussed.
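The datapath of such a convolver is essentially a streaming 3x3 sum-of-products fed by two line-delay buffers, which is why no external input frame buffer is needed. The behavioural Python sketch below mimics that structure (one pixel in per "clock", two line buffers, a 3x3 window register file); it illustrates the idea rather than the ICs' actual architecture.

from collections import deque

def stream_convolve_3x3(pixel_stream, width, height, kernel):
    """Behavioural model of a streaming 3x3 convolver: two line-delay buffers
    plus a 3x3 window register file, consuming one pixel per clock.
    Border positions are simply not produced (valid region only)."""
    line1 = deque([0] * width)           # delay of one raster line
    line2 = deque([0] * width)           # delay of a second raster line
    window = [[0] * 3 for _ in range(3)]
    out = []
    for idx, p in enumerate(pixel_stream):
        oldest = line2.popleft()         # pixel from two lines above
        middle = line1.popleft()         # pixel from one line above
        line1.append(p)
        line2.append(middle)
        # shift the 3x3 window register file one column to the left
        for r in range(3):
            window[r][0], window[r][1] = window[r][1], window[r][2]
        window[0][2], window[1][2], window[2][2] = oldest, middle, p
        row, col = divmod(idx, width)
        if row >= 2 and col >= 2:        # window is full of valid pixels
            acc = sum(kernel[r][c] * window[r][c]
                      for r in range(3) for c in range(3))
            out.append(acc)
    return out

# tiny smoke test: 3x3 summing kernel over a 4x5 ramp image
img = [[r * 5 + c for c in range(5)] for r in range(4)]
stream = [p for row in img for p in row]
k = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
print(stream_convolve_3x3(stream, 5, 4, k))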
A procedure is given for optimizing phase-only filters whose phase is restricted to a continuous but limited range. The signal-to-noise ratio (SNR) of this optimized "windowed" phase-only filter (WPOF) is compared with that of an optimized binary phase-only filter (BPOF) using a computer simulation. These results show that in certain cases the gain in SNR using the WPOF is significant.
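The optimization procedure itself is not reproduced here. The sketch below merely constructs the objects being compared: a classical phase-only filter, one common binary phase-only filter, and a naive "windowed" stand-in whose phase is simply clipped to a limited range (the paper's WPOF is optimized, not clipped), and evaluates a simple correlation-plane quality metric as a stand-in for the paper's SNR.

import numpy as np

rng = np.random.default_rng(2)
target = np.zeros((64, 64)); target[24:40, 28:36] = 1.0
scene = target + 0.2 * rng.standard_normal((64, 64))

F = np.fft.fft2(target)
phase = np.angle(F)

pof = np.exp(-1j * phase)                        # classical phase-only filter
bpof = np.where(np.cos(phase) >= 0, 1.0, -1.0)   # one common binarization (0 / pi)
# naive "windowed" stand-in: realizable phase restricted to [-pi/2, +pi/2]
# (the paper's WPOF is an optimized filter, not this simple clipping)
wpof = np.exp(-1j * np.clip(phase, -np.pi / 2, np.pi / 2))

def peak_to_sidelobe(filt):
    """Crude correlation-plane quality metric, a stand-in for the paper's SNR."""
    plane = np.abs(np.fft.ifft2(np.fft.fft2(scene) * filt)) ** 2
    return plane.max() / np.median(plane)

for name, filt in [("POF", pof), ("BPOF", bpof), ("WPOF (clipped)", wpof)]:
    print(name, round(peak_to_sidelobe(filt), 1))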