Preprocessing that enables accurate matching of two images taken by sensors located at different points in space is presented. Separation between the sensors results in perspective changes that appear as geometric distortion. Two methods for removing the distortion are described. One method applies to sensors that measure range. The second method applies to sensors that do not measure range. Descriptions of the applicable sensor data formats, rationale for the preprocessing approaches and transformations used to implement the approaches are included. Examples comparing images before and after preprocessing are shown.
The Rockwell Pattern Matcher (RPM) is a feature-based image matcher which has been demonstrated on passive IR and active laser images. Edge features are extracted and used to match electro-optical images. Matches have been made to reference images at the same wavelength as well as to reference imagery at the optical wavelength. In addition, a technique using three-dimensional edge references has been utilized to automatically compensate for the effects of geometric distortions due to different perspectives. Recent advances in computing technology make it possible to perform the required digital processing for a variety of applications.
The application of Lockheed's phase correlation image matching technique to missile guidance has been systematically investigated during the past several years. An effective approach to the scene distortion problem has been developed and verified for a variety of sensor types by computer simulation. The method involves the computation of a full bandwidth Fourier phase difference matrix for the reference and sensed scenes to be matched, followed by the application of the inverse Fourier transform to the phase matrix modified by a series of bandwidth-reducing filters to produce a set of trial correlation functions. The "best" matchpoint is then selected using parameters derived from each correlation function. A novel method for onboard reference map storage has been developed using quantized Fourier phase angles.
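A minimal sketch of the processing chain described above, assuming NumPy and two equally sized grayscale arrays. The Gaussian low-pass filters and their bandwidths are illustrative stand-ins for the paper's bandwidth-reducing filters, not the authors' actual filter set.

```python
import numpy as np

def phase_correlation_surfaces(reference, sensed, bandwidths=(1.0, 0.5, 0.25)):
    """Return one trial correlation surface per bandwidth-reducing filter."""
    R = np.fft.fft2(reference)
    S = np.fft.fft2(sensed)
    # Full-bandwidth Fourier phase difference matrix (unit-magnitude spectrum).
    cross = R * np.conj(S)
    phase = cross / (np.abs(cross) + 1e-12)

    # Radial frequency grid, normalized so the folding frequency maps to 1.
    fy = np.fft.fftfreq(reference.shape[0])[:, None]
    fx = np.fft.fftfreq(reference.shape[1])[None, :]
    radius = np.hypot(fy, fx) / 0.5

    surfaces = []
    for bw in bandwidths:
        lowpass = np.exp(-(radius / bw) ** 2)     # progressively narrower filters
        corr = np.real(np.fft.ifft2(phase * lowpass))
        surfaces.append(np.fft.fftshift(corr))    # candidate matchpoint = argmax of each surface
    return surfaces
```

The "best" matchpoint would then be chosen from these trial surfaces using quality parameters such as peak sharpness, as the abstract indicates.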
Correlation is a common and powerful method for updating inertial guidance systems. Performance of correlation methods degrades in the presence of geometric distortion between the images being correlated, or when the image structure is strongly asymmetric. The Multiple Subarea Correlation (MSC) technique has been developed to reduce performance losses due to these effects. The MSC technique consists of selecting a set of subareas from the reference image, and correlating each reference subarea against the sensed image, producing a correlation function for each subarea. There must be at least three subareas; typically six subareas are selected. The correlation functions are processed to determine a consistent set of local maxima which are in gross agreement as to the relative displacement of the two images. Then, using this set of local maxima and the known subarea locations, a least-squared-error estimate of an affine transformation between the two images is computed. The transformation is applied to the update point in the reference image to find the corresponding point in the sensed image. The technique allows selection of subareas with the most favorable content for correlation. Optimum subarea dimensions exist and depend upon the amount of distortion expected. The variance of the update point position is shown to be inversely proportional to the number of subareas.
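A minimal sketch of the final MSC step, assuming the consistent set of local maxima has already been found: given the known subarea centers in the reference image and their matched positions in the sensed image, fit an affine transform by least squares and map the update point through it.

```python
import numpy as np

def fit_affine(ref_pts, sensed_pts):
    """Least-squared-error affine transform mapping ref_pts -> sensed_pts.

    ref_pts, sensed_pts: (N, 2) arrays with N >= 3 (typically ~6 subareas).
    Returns A (2x2) and t (2,) such that sensed ~= ref @ A.T + t.
    """
    ref_pts = np.asarray(ref_pts, float)
    sensed_pts = np.asarray(sensed_pts, float)
    ones = np.ones((ref_pts.shape[0], 1))
    design = np.hstack([ref_pts, ones])          # [x, y, 1] row per subarea
    params, *_ = np.linalg.lstsq(design, sensed_pts, rcond=None)
    A, t = params[:2].T, params[2]
    return A, t

def map_update_point(ref_update_point, A, t):
    """Locate the reference update point in the sensed image."""
    return np.asarray(ref_update_point, float) @ A.T + t
```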
The Three-Dimensional Surface Shell Correlator uses range-only data from an active ranging sensor and a 3-D model of the target area stored in memory to guide a missile during its terminal phase. For each trial position the algorithm simulates a sensed range image using the 3-D model and a fast hidden-surface suppression routine, and compares it to the actual sensed range image. The resulting correlation surface is searched selectively rather than exhaustively to find the best estimate of vehicle position. A selective search is described which is highly efficient because it uses a certain inherent "ridge" structure in the correlation surface to guide it.
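A hedged sketch of the inner loop described above: for one trial vehicle position, a range image is simulated from a 3-D point model with a simple z-buffer (nearest surface wins, a crude form of hidden-surface suppression) and compared to the sensed range image. The orthographic projection and the sum-of-squared-differences comparison are illustrative simplifications, not the paper's routine.

```python
import numpy as np

def simulate_range_image(model_points, trial_position, shape):
    """model_points: (N, 3) array of x, y, z in scene coordinates (z = range)."""
    pts = model_points - np.asarray(trial_position, float)
    img = np.full(shape, np.inf)
    cols = np.clip(pts[:, 0].astype(int), 0, shape[1] - 1)
    rows = np.clip(pts[:, 1].astype(int), 0, shape[0] - 1)
    for r, c, z in zip(rows, cols, pts[:, 2]):
        img[r, c] = min(img[r, c], z)          # keep the nearest (visible) surface
    return img

def trial_score(sensed_range, model_points, trial_position):
    """Compare the simulated range image to the sensed one for a trial position."""
    simulated = simulate_range_image(model_points, trial_position, sensed_range.shape)
    valid = np.isfinite(simulated) & np.isfinite(sensed_range)
    return np.mean((simulated[valid] - sensed_range[valid]) ** 2)
```

The correlation surface formed by these scores over trial positions would then be searched selectively along its ridge structure rather than exhaustively.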
The essential problem of scene-matching is that of the differences that may exist between the two pictures that are to be matched. These are classified as differences in intensity, structure, and geometry. Preprocessing operations to extract invariants, such as edges or higher level features, are required to deal with differences in intensity and structure. However, even when the pictures have been processed in this way to eliminate or minimize intensity and structural differences, the pictures may still exhibit substantial differences (of scale, rotation, aspect, etc.) if they were taken from two different camera positions and directions. In this paper it is shown how such geometrical differences can be dealt with by means of a frame-warping technique called Address Modification. The required frame warp (for a coplanar scene) is completely defined in all cases by eight parameters. These eight parameters can be derived by a serial processing operation performed on the two pictures. The parameters can then be applied to one of the pictures to bring it into exact area registration with the other over the full picture frame.
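A minimal sketch of an eight-parameter frame warp of the kind the paper calls Address Modification: for a coplanar scene, the mapping between the two picture frames is a projective transform with eight free parameters. The parameter values and the nearest-neighbour resampling below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def warp_address(u, v, p):
    """Map sensed-frame addresses (u, v) to reference-frame addresses.

    p = (a, b, c, d, e, f, g, h): the eight warp parameters.
        x = (a*u + b*v + c) / (g*u + h*v + 1)
        y = (d*u + e*v + f) / (g*u + h*v + 1)
    """
    a, b, c, d, e, f, g, h = p
    denom = g * u + h * v + 1.0
    return (a * u + b * v + c) / denom, (d * u + e * v + f) / denom

def warp_image(image, p):
    """Resample one picture onto the other's frame by modified addressing."""
    rows, cols = image.shape
    v, u = np.mgrid[0:rows, 0:cols].astype(float)
    x, y = warp_address(u, v, p)
    xi = np.clip(np.rint(x).astype(int), 0, cols - 1)
    yi = np.clip(np.rint(y).astype(int), 0, rows - 1)
    return image[yi, xi]
```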
A feature-based scene-matching technique is described that was developed for autonomous airborne guidance systems. Reference and sensed models of scenes containing man-made structures are matched using line and vertex features derived from the scenes. Feature weighting, based on model feature content and matchpoint location, is used in performing transformations between the reference and sensed models that lead to high-accuracy matchpoint location in the sensed scene.
Two methods are described in this paper for measuring internal machine part clearances by digital processing of industrial radiographs. The first technique requires mathematical modeling of the expected optical density of a radiograph as a function of machine part motion. Part separations are estimated on the basis of individual image scan lines. A final part separation estimate is produced by fitting a polynomial to the individual estimates and correcting for imaging and processing degradations which are simulated using a mathematical model. The second method involves an application of image registration where radiographs are correlated in a piecewise fashion to allow inference of relative motion of machine parts in a time varying series of images. Each image is divided into segments, which are dominated by a small number of features. Segments from one image are cross-correlated with subsequent images to identify machine part motion in image space. Since the magnitude of a correlation peak is a function of the similarity between an image segment and a subsequent image, it can be used to infer the presence of relative motion of features within each image segment, thus identifying feature boundaries. Correlation peak magnitude is also used in assessing the confidence that a particular motion has occurred between images. The rigid feature motion of machine parts requires image registration by discontinuous parts, in contrast to the continuous image deformations one encounters in projective perspective transformations characteristic of remote sensing applications.
The objective of THASSID (Terminal Homing Applications of Solid-State Imaging Devices) is to develop an advanced optoelectronic solid-state seeker with true fire-and-forget capability. The system comprises an image intensified CCD camera mounted on a gimbaled, momentum-stabilized platform and a digital tracker of advanced design. The camera employs a 490 x 327 picture element CCD array encapsulated in a focused electron tube. This paper describes the composite tracker containing three major subsystems: adaptive gate centroid tracker, correlation tracker, and moving target tracker. System operation is controlled by a Texas Instruments SBP 9900 microprocessor with substantial portions of the tracking algorithms allocated to the microprocessor. Microprocessor support of the composite tracker includes adaptive image quantization, correlation feature selection, feature update, feature replacement, scene stabilization computations, control of electronic de-zoom, centroid gate position, size, and thresholding, and moving-target-tracker search algorithms. The microprocessor also establishes the operating mode of the correlation tracker and its role in relation to the other tracking subsystems. During the terminal impact phase of the mission the correlation tracker is the primary tracking subsystem. A system incorporating these concepts was delivered to the U. S. Army Missile Research and Development Command for testing and evaluation.
Two-dimensional cross-correlation techniques are applied to the problem of image registration under the assumption of small geometric distortion. Optimum filter functions for arbitrary window functions are derived for two performance measures of interest: peak-to-sidelobe ratio and mean-square registration error; the latter is examined in terms of the contribution due to distortion and the contribution due to noise. A generalized Lagrange multiplier approach is used to derive approximate solutions to both the random image and deterministic image cases.
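A minimal sketch of one of the two performance measures discussed above: the peak-to-sidelobe ratio of a correlation surface, with the sidelobe statistics taken outside a small exclusion zone around the main peak. The exclusion radius and the exact PSR definition are illustrative choices rather than the paper's.

```python
import numpy as np

def peak_to_sidelobe_ratio(corr, exclude_radius=5):
    """PSR = (peak - sidelobe mean) / sidelobe standard deviation."""
    corr = np.asarray(corr, float)
    peak_idx = np.unravel_index(np.argmax(corr), corr.shape)
    peak = corr[peak_idx]

    rows, cols = np.indices(corr.shape)
    sidelobe_mask = np.hypot(rows - peak_idx[0], cols - peak_idx[1]) > exclude_radius
    sidelobes = corr[sidelobe_mask]
    return (peak - sidelobes.mean()) / (sidelobes.std() + 1e-12)
```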
Two-dimensional cross-correlation techniques are applied to the problem of image registration under the assumption of small geometric distortion. Optimum window functions are derived for two performance measures of interest: peak-to-sidelobe ratio and mean-square error. The latter is examined in terms of the contribution due to distortion and the contribution due to noise. The case of Gaussian autocorrelation functions is examined in detail. Results for applying the theoretically derived window functions to real data are presented, showing significant improvement in correlator performance.
The basic scene matching problem consists of matching a region of one image with the corresponding region of another view of the same scene. In the general case, the images are produced by completely different sensors at different viewing geometries. Prior to the scene matching, geometric and intensity transformations are performed on the images to bring the matching elements and their intensities into one-to-one correspondence. Objects of interest, represented by subimages of one scene, are located in the other using scene matching techniques with edges and invariant moments as measurement features. Operating characteristics of the two matching methods are then presented in terms of the probability of a match as a function of the probability of false fix.
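A minimal sketch of the "invariant moments" feature type mentioned above: the first two Hu moment invariants of an image patch, computed from normalized central moments. These are standard translation-, scale-, and rotation-invariant descriptors; whether the paper used these exact invariants is an assumption.

```python
import numpy as np

def hu_invariants_12(patch):
    """Return the first two Hu moment invariants of a grayscale patch."""
    patch = np.asarray(patch, float)
    rows, cols = np.indices(patch.shape)
    m00 = patch.sum()
    xbar = (cols * patch).sum() / m00
    ybar = (rows * patch).sum() / m00

    def mu(p, q):                     # central moment of order (p, q)
        return (((cols - xbar) ** p) * ((rows - ybar) ** q) * patch).sum()

    def eta(p, q):                    # scale-normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4.0 * eta(1, 1) ** 2
    return phi1, phi2
```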
Synthetic aperture radar (SAR) images obtained in real time on a moving vehicle can provide a means for obtaining fix data for the vehicle navigation system. Reference features are located in the SAR images through the use of map matching techniques, with each match providing a measurement of range and range rate to a known reference point. Three matches or fixes made in different directions can provide data for a complete position and velocity determination. A map matching technique has been developed for use with SAR images that utilizes a reference template that encodes only the shape (and not the difficult-to-predict image intensity levels) of the selected reference feature. Through an adaptive and localized normalization of the sensed image pixel amplitudes, a matching metric is computed that is a strong function of the degree of shape match of the sensed image and the reference template but is only weakly dependent on the image intensity and contrast. This results in the reference feature being acquired and located with high probability even in the presence of competing features with possibly higher contrast. The map matching algorithm is described and results of theoretical analysis of its performance characteristics are presented with specific attention given to the effects of scene scintillation or speckle in the sensed imagery. The algorithm has been used on a large data base of SAR imagery with good success. Several examples are included to indicate typical performance for both urban and rural environments.
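A hedged sketch of a shape-only matching metric in the spirit described above: the template encodes only the feature's shape (a binary mask), and each candidate window of the sensed image is normalized by its own local mean and spread so the score depends on shape agreement rather than on absolute intensity or contrast. This is an illustrative stand-in, not the paper's exact metric.

```python
import numpy as np

def shape_match_map(sensed, shape_template):
    """Slide a binary shape template over a sensed image; return a score map."""
    sensed = np.asarray(sensed, float)
    tmpl = np.asarray(shape_template, float)
    tmpl = tmpl - tmpl.mean()                       # zero-mean shape code
    th, tw = tmpl.shape
    out_h = sensed.shape[0] - th + 1
    out_w = sensed.shape[1] - tw + 1
    scores = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = sensed[i:i + th, j:j + tw]
            window = window - window.mean()         # local amplitude normalization
            denom = np.linalg.norm(window) * np.linalg.norm(tmpl)
            scores[i, j] = (window * tmpl).sum() / (denom + 1e-12)
    return scores                                   # peak locates the reference feature
```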
A computer code has been developed to predict the detector output from a broad class of optical trackers with realistic input scenes. The approach is based on Fourier transform techniques with some new manipulations being introduced in frequency space. The code has been validated by use of a laboratory tracker with complex extended targets and point sources.
Multispectral image data can be fused into a single image suitable for processing by the use of pattern recognition. Illustrative images using maximum probability discrimination of multivariate normal classes are shown.
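A minimal sketch of the fusion rule named above, assuming each pixel of the multispectral image is a vector of band values and each class is modeled as a multivariate normal distribution: every pixel is assigned to the class of maximum probability (log-likelihood plus log prior), yielding a single label image suitable for further processing. The class statistics and priors are assumed inputs.

```python
import numpy as np

def classify_multispectral(cube, means, covariances, priors):
    """cube: (rows, cols, bands); means[k]: (bands,); covariances[k]: (bands, bands)."""
    rows, cols, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    scores = np.empty((pixels.shape[0], len(means)))
    for k, (mu, cov, prior) in enumerate(zip(means, covariances, priors)):
        diff = pixels - mu
        inv_cov = np.linalg.inv(cov)
        mahal = np.einsum('ij,jk,ik->i', diff, inv_cov, diff)   # squared Mahalanobis distance
        _, logdet = np.linalg.slogdet(cov)
        scores[:, k] = -0.5 * (mahal + logdet) + np.log(prior)
    return scores.argmax(axis=1).reshape(rows, cols)            # fused single-band label image
```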
The parametric Wiener filter is often used to deblur images that are relatively noise-free. If noise is more severe, the restored image may be obscured by a granular pattern that results when the noise is subjected to the deblurring filter. This effect may be reduced by using a larger noise parameter, but this leads to a restoration that is less sharp. We describe how the noise parameter may be varied from pixel to pixel, so that it is larger only where noise is greater. Pixels with low signal-to-noise ratios are identified by a thresholding process and by comparison with nearest neighbors. The effects of the estimated Wiener spectra on the restored image are discussed.
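A hedged sketch of the idea above, assuming the blur transfer function H is known: two parametric Wiener restorations are computed, one sharp (small noise parameter) and one smooth (large noise parameter), and the noise parameter is effectively varied from pixel to pixel by switching to the smoother restoration wherever a simple nearest-neighbor comparison flags low local signal-to-noise. The thresholding rule here is illustrative, not the authors' exact test.

```python
import numpy as np

def parametric_wiener(blurred, H, gamma):
    """Frequency-domain parametric Wiener deblurring with noise parameter gamma."""
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + gamma)
    return np.real(np.fft.ifft2(W * G))

def adaptive_wiener(blurred, H, gamma_sharp=1e-3, gamma_smooth=1e-1, thresh=2.0):
    sharp = parametric_wiener(blurred, H, gamma_sharp)
    smooth = parametric_wiener(blurred, H, gamma_smooth)

    # Flag pixels whose sharp restoration deviates strongly from the mean of
    # their four nearest neighbors (a crude local signal-to-noise test).
    neighbors = (np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0) +
                 np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 4.0
    noisy = np.abs(sharp - neighbors) > thresh * np.std(sharp - neighbors)
    return np.where(noisy, smooth, sharp)
```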
Digital data recorded in-flight on wideband magnetic tapes during a recently concluded flight test of a synthetic aperture radar (SAR) are available to qualified members of the technical and scientific community. These data were gathered during a 72-flight USAF-sponsored program making use of real-time digital signal processing in a forward-looking advanced multimode radar (FLAMR). For the purpose of making the data available to qualified users, a FLAMR SAR data bank is maintained for the U. S. Air Force Avionics Laboratory by Applied Research Laboratories, The University of Texas at Austin. The data bank, its contents, and its facilities are briefly discussed. The digital data formats and features are identified and illustrations are presented. Examples of the four resolutions of imagery available, together with aerial photographs to approximately the same scale for several of the scenes, are presented.
Improved object data has been extracted from multiple frames of poor quality imagery. The individual images had been degraded by propagation through atmospheric turbulence or water waves, and then mixed with instrumental noise. Using knowledge of the image formation and degradation process, Gregory's technique of "moments of good seeing" used in optical astronomy was implemented with digital processing. Digital implementation allowed the technique to be applied locally to isoplanatic regions for those cases with nonuniform and nonstationary random media. The statistically correct location of the object was assumed to be determined by the long term average of the image. After proper registration, a Kalman filter, adapted for correlated disturbances, was used to sharpen the mean-square spread of gray levels for each pixel with respect to multiple frames. An enhancement factor of √n in signal-to-noise ratio was obtained by uniformly averaging n correlated disturbances. The best that one could do for purely random noise was a uniformly weighted average, and a recursively weighted average can be no worse than a uniformly weighted one. Laboratory simulation of underwater objects and field data with atmospheric degradation have successfully illustrated the proposed method of digital multiple-frame image restoration, i.e., local implementation of "moments of good seeing" and Kalman filtering for correlated disturbances.
This paper describes a grading procedure that evaluates the performance of edge operators and thus provides feedback for improvement of the operators. The quantitative grade permits the optimization of magnitude threshold levels as well as thinning and linking criteria. Operator performance is shown to improve by a factor of two on aerial imagery using this technique.
Synthetic Aperture Radar (SAR) is a radar system that processes the return signal to achieve the effect of having a larger aperture than the one provided by the physical dimensions of its antenna. The processing consists of a weighted summation of regularly spaced samples from the signal history, hence of logic for the arithmetic and storage for the signal history. LSI and VLSI technology offer some beautiful ways to implement this computation in chips in which the storage and logic functions are commingled. The SAR problem discussed in this paper is based on actual requirements set forth by NASA for a spaceborne application. The requirements for high resolution and high quality necessitate a data sampling rate of 7.5 MHz. For each data value 1,025 4-bit complex multiply+add operations are needed, which is equivalent to a 7.7 GHz complex multiply+add operation rate. Since this rate is much too high for general purpose systems, a special-purpose device was sought. This paper discusses two architectures based on parallel operation of 1,025 identical cells, each of which is capable of performing arithmetic, storage, and several control operations. The operation rate in each device is only 7.5 MHz, which is quite manageable, especially with the help of a substantial degree of pipelining. A computational-mathematical analysis is used as a primary tool for evaluating the design and some of its tradeoffs. Two different approaches are discussed and compared; both are based on having 1,025 identical cells working in parallel, but differ in their dual approaches to the flow of data. The mathematics require a relative motion of the data with respect to some (relatively) constant sets of coefficients. In one approach the coefficients are held stationary in space and the data flows past them; in the other, the data is held and the coefficients flow past. The paper discusses the architecture, both approaches, some of the control issues, and most important, some aspects of the methodology of the design.
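A minimal sketch of the arithmetic each output sample requires, as stated above: a weighted summation (complex multiply-and-add) over 1,025 regularly spaced samples of the signal history. A real device would distribute this across 1,025 parallel cells with data or coefficients flowing between them; here the same computation is written serially for illustration.

```python
import numpy as np

NUM_TAPS = 1025   # one coefficient per cell

def sar_azimuth_compress(signal_history, coefficients):
    """Complex weighted summation over a sliding window of the signal history.

    signal_history: 1-D complex array of raw samples.
    coefficients:   1-D complex array of length NUM_TAPS (the reference function).
    """
    assert len(coefficients) == NUM_TAPS
    n_out = len(signal_history) - NUM_TAPS + 1
    output = np.empty(n_out, dtype=complex)
    for i in range(n_out):
        # 1,025 complex multiply+add operations per output value.
        output[i] = np.dot(coefficients, signal_history[i:i + NUM_TAPS])
    return output
```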
A critical problem for the cruise missile is the development of image processing techniques applicable to target acquisition for an autonomous terminal homing system which depends upon an on-board comparison of a sensed scene with a stored replica of a predesignated target area. Extensive efforts are currently in progress to develop algorithms based upon area correlation and feature matching techniques for accurate registration of sensed and reference imagery. Image intensity matching depends upon several unpredictable factors such as time of day or year, weather, changes in scale, viewpoint, and perspective, spectral and sensor characteristics, etc. In contrast, one of the most invariant properties of a scene is its geometric form. A sensed height distribution of a target scene can be determined passively from dynamic imagery by exploitation of the concept of motion stereo: over a sequence of frames, the scene is continuously viewed from changing observation points as the vehicle moves. The accuracy to which elevations can be determined is based upon sensor and vehicle parameters, geometry, image characteristics, and the nature of the processing algorithms. In principle, all that is required for depth determination is a single pair of frames. In practice, however, real imagery is corrupted by noise, sensor jitter, imperfect knowledge of vehicle trajectory, etc., and it is necessary to infer elevation from a successively refined statistical best estimate obtained from averaging results over several pairs of frames. Preliminary results of motion stereo processing to obtain object depth from image coordinate trajectories (obtained by correlation tracking) will be described, and efforts to extend the techniques to computationally efficient methods of area processing will be discussed.
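A hedged sketch of the basic motion-stereo relation underlying the approach above, assuming a simple laterally translating camera with known focal length: the depth of a tracked point follows from its image-coordinate disparity between a pair of frames, and estimates from several frame pairs are averaged to reduce the effect of noise and jitter. The pinhole, pure-translation geometry assumed here is an illustrative simplification of the full problem.

```python
import numpy as np

def depth_from_pairs(track_x, baselines, focal_length):
    """Average depth estimate for one tracked point over several frame pairs.

    track_x:      image x-coordinates (pixels) of the point in frames 0..N.
    baselines:    lateral camera displacement between frame 0 and frame k (k >= 1).
    focal_length: focal length in pixels.
    """
    track_x = np.asarray(track_x, float)
    estimates = []
    for k, baseline in enumerate(baselines, start=1):
        disparity = track_x[k] - track_x[0]
        if abs(disparity) > 1e-6:
            estimates.append(focal_length * baseline / disparity)
    return float(np.mean(estimates))
```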
A digital analysis of the effect of terrain-target interactions on the signal-to-clutter (S/C) ratio of the output of a matched filter is made. Three terrain types (woodland, village, and roadside) were included in the investigation. A photographic image of an M-60 tank used as the target was digitized (512 x 512 pixels) and a matched filter constructed digitally, using a two-dimensional FFT algorithm. The matched filter was then used to obtain the correlation between the target and the various terrain types to obtain the signal-to-clutter ratio. Various high frequency versions of the matched filter were systematically investigated for possible optimization of the S/C ratio in an attempt to optimize the design which was eventually realized in analog optical form. The computational aspects of the problem, from the point of view of doing digital image analysis on a minicomputer with limited memory, are also discussed.
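A minimal sketch of the procedure described above, assuming NumPy: a matched filter is constructed digitally from the target image with a two-dimensional FFT, correlated with a terrain image, and the signal-to-clutter ratio taken as the correlation peak over the RMS response elsewhere. The exact S/C definition here is an illustrative choice, not necessarily the paper's.

```python
import numpy as np

def matched_filter_sc_ratio(target, terrain):
    """Correlate a target template with a terrain scene; return the S/C ratio."""
    rows, cols = terrain.shape
    T = np.fft.fft2(target - target.mean(), s=(rows, cols))
    S = np.fft.fft2(terrain - terrain.mean())
    corr = np.real(np.fft.ifft2(S * np.conj(T)))   # matched-filter (correlation) output

    peak = corr.max()
    clutter_rms = np.sqrt(np.mean(corr[corr < peak] ** 2))
    return peak / clutter_rms
```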
A technique capable of improving the navigation system accuracy of operational systems using low-altitude correlation update information is desired. The technique makes use of the unique signature of the land mass passing beneath the vehicle to establish its position history and update its inertial navigation system. The generic types of sensors available to provide external mapping data for low-altitude correlation, the correlation performance measures developed for system analyses, and the simulation studies performed to validate these performance measures are described. This paper emphasizes the ability to predict, and the importance of developing analytical techniques for predicting, the behavior of correlation update or image matching systems, both in terms of fix accuracy and probability of correct correlation or acquisition. This type of analysis, validated first by simulation and later by flight test results, provides a solid basis on which to develop the system and bring it into operational utility with a high degree of confidence.
Change detection has many objectives: a primary one in the context of defense applications is the detection of newly introduced objects in a scene which has been previously analyzed to establish its contents. This cognition of deliberately introduced objects is of topical interest and significance, particularly in view of the application potential to the problem of verification of adherence to terms of the impending Strategic Arms Limitation Treaty (SALT) agreements. This problem of cognition is construed here as one of learning in partially exposed environments. The available methods for such learning are reviewed to determine their potential for cognition of new entrants to the scene. A relative assessment of these alternative approaches is presented as an aid to the selection of the most appropriate tool in the context of a specific application.
A method for characterizing scene content from aerial images is presented. The method is demonstrated for a building complex scene for which a three dimensional data base and corresponding aerial images were available. Intuitively, the complexity of the building scene as viewed in a projected image is proportional to the number of vertices visible in the view: the greater the number of vertices, the greater the complexity of the scene. To automate this approach, one must automatically locate vertices from aerial images of the scene and determine relations among the vertices. Objective measures of scene content should not only basically agree with the intuitive measures, but also possess certain desirable mathematical properties. Two such measures, structural entropy and structural content, which were previously developed, are applied to the building scene, and experimental results which illustrate the variation of these measures with range, azimuth and elevation are provided. One application of the scene content measures is the prediction of overall scene content characteristics in support of performance prediction for map matching systems. To illustrate this application, an error analysis is presented of the mean square error in the transformation computed between a three dimensional scene and the corresponding two dimensional projected images, given a number of corresponding vertices. The analysis illustrates that the best possible performance depends heavily upon the vertex location accuracy.
This paper notes the impact of data structures on digital image processors and gives an initial discussion of applications of probabilistic information theory to image relational databases.
Digitized images have been two-dimensionally transformed to the Haar sequency domain. High-sequency boosting was performed and the inverse Haar two-dimensional transform applied. The resulting image was then raster-scanned with a continuously adaptive lattice filter. This procedure was applied to a simple image of a photographic step tablet and a complex scene. All of the lines of the step tablet were well defined over the whole dynamic range. Useful definition of lines in the complex scene was obtained.
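A minimal sketch of the sequency-domain step described above: a one-level two-dimensional Haar decomposition, boosting of the high-sequency (detail) coefficients, and the inverse transform. The single decomposition level and the boost factor are illustrative choices; the adaptive lattice filtering stage that follows is not shown.

```python
import numpy as np

def haar_boost(image, boost=2.0):
    """Boost high-sequency Haar coefficients of an image with even dimensions."""
    a = np.asarray(image, float)
    a00, a01 = a[0::2, 0::2], a[0::2, 1::2]
    a10, a11 = a[1::2, 0::2], a[1::2, 1::2]

    ll = (a00 + a01 + a10 + a11) / 4.0          # low-sequency average
    lh = (a00 - a01 + a10 - a11) / 4.0          # horizontal detail
    hl = (a00 + a01 - a10 - a11) / 4.0          # vertical detail
    hh = (a00 - a01 - a10 + a11) / 4.0          # diagonal detail

    lh, hl, hh = boost * lh, boost * hl, boost * hh   # high-sequency boosting

    out = np.empty_like(a)
    out[0::2, 0::2] = ll + lh + hl + hh         # inverse one-level Haar transform
    out[0::2, 1::2] = ll - lh + hl - hh
    out[1::2, 0::2] = ll + lh - hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out
```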
This paper reports development of an automatic procedure for measuring and representing three-dimensional geometry of man-made sites from stereo reconnaissance photography. The ability to derive such information from remotely sensed data has application within both the military and intelligence communities. Studies of typical reconnaissance have shown that precise models can be extracted, but current methods are laborious, and rely heavily on operator insight. This work addresses development of an automatic and digital method. Attention is restricted to sites composed of planar surfaces (e.g. buildings). For an automatic stereo system, these sites present the severe problems of abrupt change in slopes and image occlusion. These features thoroughly disrupt application of traditional automatic stereo techniques, which have been developed to model rolling natural terrain. The system reported here represents an entirely new design, based on correlation but specifically structured to the problem features of planar surface sites. A demonstration is achieved, on reconnaissance imagery, of automatic stereo measurement of a building complex. This work is expected to form a basis for advanced work in both automated and computer aided stereo technique.
Shadows can add significant edge and surface detail to imagery and thus substantially increase the performance of automated correlation guidance systems. A shadow-generation algorithm was implemented to increase the accuracy of synthetic imagery used to simulate visible, near-infrared, and far-infrared sensors. Initially, a data base was established in which all surfaces were represented by a list of vertices and material codes and arranged according to a scheme of a priori masking priority. Each surface was then clipped against updated clipping polygons representing the silhouette of all previous surfaces that had higher masking priorities as viewed from the position of the light source. The resulting hidden surface was inserted into the data base and flagged as a shadow for gray-scale prediction by the appropriate sensor model. Because each surface is compared to a union of polygons rather than individual surfaces, this algorithm is computationally efficient for use with large data bases.
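A hedged sketch of the masking-priority clipping step described above, using the Shapely library: each surface polygon (already projected into the light source's view and listed in masking-priority order) is clipped against the union of all higher-priority silhouettes; the occluded portion is flagged as shadow and the union is then grown. Working in projected 2-D footprints and returning only the shadow polygons are illustrative simplifications of the paper's data-base update.

```python
from shapely.geometry import Polygon
from shapely.ops import unary_union

def compute_shadows(projected_surfaces):
    """projected_surfaces: list of shapely Polygons in a priori masking-priority order.

    Returns a list of shadow polygons (one entry per surface, possibly empty).
    """
    shadows = []
    clipper = None            # union of silhouettes of higher-priority surfaces
    for surface in projected_surfaces:
        if clipper is None:
            shadows.append(Polygon())                      # nothing masks the first surface
        else:
            shadows.append(surface.intersection(clipper))  # hidden part becomes shadow
        # Grow the clipping region with this surface's silhouette.
        clipper = surface if clipper is None else unary_union([clipper, surface])
    return shadows
```

Because each surface is tested against a single union of polygons rather than against every higher-priority surface individually, the work per surface stays modest even for large data bases, matching the efficiency argument in the abstract.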
One of the major problems in scene analysis is segmentation of a scene. Various approaches to segmentation use information about edges and/or proximity in pixel values as a basis for decomposition of the picture. In this paper the information that can be obtained regarding the material elements present in the scene is used as the basis for segmentation. Comparison of the results with the other known techniques is also given. A set of probabilities of detection based on various information about materials is then derived.