The Viking Orbiter and Viking Lander spacecraft have thus far returned several thousand images of Mars from orbit and from the surface. The Orbiter spacecraft are equipped with vidicon systems and the Lander spacecraft utilize facsimile cameras with photosensitive diode arrays. JPL's Image Processing Laboratory processed both Orbiter and Lander imagery in support of mission operations and science analysis. Digital processing included enhancement, geometric projection for a variety of applications, and mosaicking. The Orbiter cameras obtained stereo views of portions of the Martian surface by viewing the same portion of the surface at different viewing angles as the spacecraft passed overhead. Orbiter stereo imagery was processed to produce elevation maps of large portions of the surface. Each Lander spacecraft had two cameras positioned approximately one meter apart that provided stereo coverage of a portion of the field of view around each Lander spacecraft. Lander stereo imagery was processed to produce elevation profiles and isoelevation contours of the surface surrounding each Lander.
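The Lander elevation profiles rest on stereo triangulation: with two cameras about one meter apart, a feature's range follows from its disparity between the two views. A minimal sketch, assuming an idealized pinhole model and an illustrative focal length in pixel units (neither is the actual Lander facsimile-camera geometry):

```python
import numpy as np

def range_from_disparity(disparity_px, baseline_m=1.0, focal_px=500.0):
    """Triangulate range from stereo disparity under a pinhole model.

    baseline_m reflects the ~1 m Lander camera separation; focal_px is an
    illustrative assumption, not the facsimile cameras' real geometry.
    """
    disparity_px = np.asarray(disparity_px, dtype=float)
    # range = baseline * focal length / disparity
    return baseline_m * focal_px / disparity_px

# Larger disparity means a nearer surface point.
ranges = range_from_disparity([50.0, 10.0])
```

Sweeping this over matched points along each scan line yields elevation profiles of the kind described above.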
A comparison is made of a number of techniques that have been used for the enhancement of atmospherically degraded astronomical images. These include simple deconvolution or inverse filtering, speckle interferometry with and without phase information, and adaptive optical systems. Each approach has its range of applicability in terms of object brightness, angular extent, and structure. The role of the exposure time has been emphasized, and it is pointed out that simple deconvolution has greater potential than has previously been recognized.
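As a rough illustration of the simple deconvolution the comparison covers, a regularized inverse filter divides the image spectrum by the blur transfer function while damping frequencies where that function is small. The regularization constant eps is an illustrative choice, not a value from the paper:

```python
import numpy as np

def inverse_filter(blurred, psf, eps=1e-3):
    """Regularized inverse filter in the frequency domain.

    With eps = 0 this is plain inverse filtering, which amplifies noise
    wherever the PSF transfer function H is small; eps caps that gain.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F_hat))
```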
The JPL Robotics Research Program is developing techniques that might be applicable in the future to planetary missions, to the assembly of large structures in Earth orbit, or to free-swimming underwater vehicles, where there is a need to integrate a computer vision system with mechanical effectors. Each of these applications requires real-time processing and imposes a size limit on the on-board processor. To meet these objectives, a robot stereo vision system was developed which maintains the image from the solid-state detector television cameras in a dynamic random access memory (RAPID). The vision hardware provides, in effect, real-time random-access television cameras to the computer. Combining RAPID with scene analysis algorithms optimized for the hardware provides a ten- to twentyfold increase in processing speed over imaging systems which transfer the entire digital image to the computer and use disc memory for intermediate storage. This short report describes the impact of the vision hardware on the stereo vision system and, in turn, on the robot system.
The need exists to find a means of rapidly assessing the trophic state of water bodies which would make it economically feasible to operate extensive systematic surveillance programs of the water resources in the United States. Airborne multispectral sensors show promise as a means of monitoring these resources on a continuous basis. The Image Processing Laboratory at the Jet Propulsion Laboratory (JPL) in conjunction with the Environmental Protection Agency has been involved in water quality studies for the past five years. During this time the primary aim has been to demonstrate the feasibility of applying remotely sensed data to water quality assessment. The experience and technology developed at JPL has now been coalesced into an interactive lake survey program.
Finely detailed striae in astronomical images can be important in formulation of theory. Examples are studies of streamers in the solar corona and of dust tails in comets. In both instances, conventional observations fail to reveal much of the structural detail. Digital image processing has been used at LASL for enhancing these images. The corona images have tremendous variations in film density which must be eliminated before fine striae can be seen. These variations can be removed by means of numerical modeling of their spatial relation to the sun. This model can be thought of as a surface of background film density. In the comet images the overall variation is less severe. Further, the large number of comet images makes it infeasible to model them individually. Hence, an extreme low-pass filter was used to create an image which can be used as the background surface. In both cases, the background surface is divided into the original image pixel-by-pixel. This quotient image is then frequency filtered for edge enhancement or noise control. Nonlinear density transformations are then used to enhance contrast. For both types of images, heretofore unmeasurable details become readily visible for analysis.
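The comet-image step above, low-pass filtering to form a background surface and then dividing it into the original, can be sketched as follows; the box-blur kernel size is an illustrative stand-in for the "extreme low-pass filter":

```python
import numpy as np

def flatten_background(image, kernel=31):
    """Divide an image by a heavily low-pass-filtered copy of itself.

    A box blur stands in for the extreme low-pass filter; the coronal
    images were instead handled with an analytic background model,
    which this sketch does not attempt.
    """
    img = np.asarray(image, dtype=float)
    pad = kernel // 2
    padded = np.pad(img, pad, mode='edge')
    background = np.zeros_like(img)
    # Accumulate the kernel x kernel neighborhood sum, then normalize.
    for dy in range(kernel):
        for dx in range(kernel):
            background += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    background /= kernel * kernel
    return img / np.maximum(background, 1e-6)
```

Frequency filtering and nonlinear contrast stretching would then be applied to the quotient image.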
A study has been conducted to show the feasibility of implementing an all-digital correlator for missile terminal area guidance. The central thrust of this effort was to establish the hardware requirements for realizing multiple area cross-correlations in real-time using modern digital signal processing techniques. The ultimate objective of the study is the improvement in terminal accuracy of long-range Army missiles through digital area correlation guidance. The approach taken in this study was to determine a computationally efficient algorithm, verify its theoretical performance relative to the conventional multiply and sum correlation procedure, and to estimate the hardware resources necessary to compute the recommended algorithm in real-time. The algorithm recommended for implementation is the high speed digital correlation algorithm which uses the fast Fourier transform (FFT) to minimize the total number of arithmetic computations. The computational equivalence of the high speed correlation algorithm to the conventional multiply and sum approach was demonstrated by example using a digital computer program and simulated two-dimensional test data. A specific all-digital correlator hardware design has been postulated and documented at the block diagram level. This design was used to estimate the number of integrated circuits, as well as the power and space requirements, of an all-digital area correlator.
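The core identity behind the high-speed correlation algorithm, cross-correlation computed as an inverse FFT of a spectral product, can be demonstrated in a few lines (a NumPy sketch, not the study's hardware design):

```python
import numpy as np

def fft_correlate(reference, scene):
    """Circular cross-correlation of two equal-size images via the FFT.

    Equivalent (up to circular wraparound) to the conventional
    multiply-and-sum correlation surface, but O(N log N) overall
    rather than O(N^2) per lag.
    """
    R = np.fft.fft2(reference)
    S = np.fft.fft2(scene)
    return np.real(np.fft.ifft2(S * np.conj(R)))

# Locate a shifted copy of a template: the correlation peak sits at the shift.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
scene = np.roll(ref, (5, 9), axis=(0, 1))
corr = fft_correlate(ref, scene)
peak = np.unravel_index(np.argmax(corr), corr.shape)  # (5, 9)
```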
A simple mathematical interpretation of the properties of ratio images derived from LANDSAT and other sources of multispectral imagery is presented. A spectral signature is defined which is well represented by ratios of pairs of spectral bands and can be related to the problem of clustering and unsupervised learning. Some practical problems arising in the generation of LANDSAT ratio images are considered, and an effective, simple method for reduction of the dynamic range of such images is presented along with digital image processing examples.
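A band ratio and one simple dynamic-range reduction can be sketched as below; the r/(1 + r) mapping is an illustrative choice and not necessarily the method the paper presents:

```python
import numpy as np

def ratio_image(band_a, band_b, eps=1e-6):
    """Band ratio with a simple dynamic-range reduction.

    Mapping r -> r / (1 + r) folds the unbounded ratio into [0, 1);
    eps guards against division by zero in dark pixels.
    """
    r = band_a / (band_b + eps)
    return r / (1.0 + r)
```

Equal radiances in the two bands map to 0.5, so departures from mid-gray indicate spectral slope.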
This report briefly reviews a new technique for pictorial encoding. Exploiting the characteristics of the human visual system, a halftone screening procedure is utilized to develop a binary-pixel representation of the image. This binary data is then encoded, enabling much less than 1 bit/pixel storage/communications cost. The proposed strategy is simple to implement, primarily digital, and capable of high speeds. Furthermore, the output image is most compatible with binary display and marking engine technologies.
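A halftone screening step of the kind described, thresholding against a periodic screen to obtain one bit per pixel before entropy coding, might look like this; the Bayer matrix is one common screen, not necessarily the paper's:

```python
import numpy as np

# 4x4 Bayer ordered-dither matrix (one common halftone screen;
# the paper's actual screening procedure may differ).
BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]]) / 16.0

def halftone(gray):
    """Threshold an image in [0, 1] against a tiled halftone screen."""
    h, w = gray.shape
    screen = np.tile(BAYER4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return (gray > screen).astype(np.uint8)
```

The binary output is then amenable to run-length or similar encoding to push the storage cost well below 1 bit/pixel.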
There has been a continuing effort to define and develop spread spectrum image transmission systems to provide antijam protection for a television link from small remotely piloted vehicles. The Naval Ocean Systems Center (NOSC), San Diego, California, has conducted studies in this area for some time under the sponsorship of DARPA. The bandwidth compression system resulting from these studies consists of a horizontal cosine transform/vertical differential pulse code modulation process in conjunction with an 8:1 frame rate reduction using 256-by-256 picture element resolution. This paper describes the implementation of this bandwidth compression system in hardware suitable for inclusion in the Integrated Communication and Navigation System (ICNS) for the Army's AQUILA remotely piloted vehicle. The equipment described was built by the Advanced Technology Laboratories of RCA's Government Systems Division, Camden, New Jersey, under contract to NOSC.
For many pattern classification and pattern recognition applications, the multispectral data is first used to obtain a classified image (map). This image is then used for different image data extraction and classification applications. It is important that a particular bandwidth compression method not result in significant changes in the resulting classification map. In this article the performance of a hybrid encoder (Hadamard/DPCM) in retaining the classification accuracy of the classified image is evaluated. It is shown that, using a Bayes supervised classifier, the classification accuracy of the bandwidth-compressed picture is actually higher than that of the original picture.
The causal predictors of DPCM image data compression can be replaced by noncausal, or interpolative, DPCM compression. A method for realizing interpolative DPCM is discussed that can be implemented solely with incoherent optics and analog electronics. A digital simulation of this method is presented, with results showing performance comparable to conventional DPCM.
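A one-dimensional sketch of the interpolative idea, in which kept samples predict the midpoints between them from both sides (the uniform quantizer step and the every-other-sample layout are illustrative assumptions, not the paper's optical implementation):

```python
import numpy as np

def interpolative_dpcm(line, step=4):
    """Noncausal DPCM along one scan line.

    Every other sample is kept, the in-between samples are predicted as
    the mean of their two kept neighbors, and only the quantized residual
    is coded. An odd-length line is assumed so both ends are kept samples.
    """
    line = np.asarray(line, dtype=float)
    kept = line[::2]
    pred = (kept[:-1] + kept[1:]) / 2.0        # two-sided interpolation
    resid = line[1:-1:2] - pred
    q = np.round(resid / step) * step          # uniform quantization
    recon = np.empty_like(line)
    recon[::2] = kept
    recon[1:-1:2] = pred + q
    return recon
```

On locally linear data the two-sided prediction is exact, so the residuals vanish.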
Most interframe video compression using transforms uses three-dimensional transforms or differential pulse code modulation (DPCM) between successive frames of two-dimensionally transformed video. Conditional replenishment work is nearly all based on DPCM of individual picture elements, although conditional replenishment of transform subpictures can take advantage of the predefined subpictures for addressing and can use the most significant transform vectors for change detection without decoding the compressed image. A conditional replenishment transform video compressor has been simulated in preparation for hardware design. The system operates at a fixed rate and uses compressed frame memories at the transmitter and receiver. Performance is a function of transmission rate and memory capacity and is dependent on the motion content of the compressed scene.
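The change-detection shortcut mentioned above, comparing only the most significant transform coefficients of a subpicture without decoding it, can be sketched as follows (k and the threshold are illustrative parameters):

```python
import numpy as np

def changed(prev_coeffs, new_coeffs, k=4, thresh=10.0):
    """Flag a subpicture as changed if any of its k most significant
    transform coefficients moved by more than thresh.

    Operates directly on coded coefficients, so no inverse transform
    of the compressed block is needed for change detection.
    """
    idx = np.argsort(np.abs(prev_coeffs))[-k:]   # largest-magnitude vectors
    return np.max(np.abs(new_coeffs[idx] - prev_coeffs[idx])) > thresh
```

Only subpictures flagged as changed would be replenished within the fixed transmission rate.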
Micro-Adaptive Picture Sequencing (MAPS), a computationally-efficient contrast-adaptive variable-resolution digital image coding technique, is described. Both compression and decompression involve only integer operations with no multiplies or explicit divides. The compression step requires less than 20 operations per pixel and the decompression step even fewer. MAPS is based on the combination of a simple vision heuristic and a highly nonlinear spatial encoding. The heuristic asserts that the fine detail in an image is noticed primarily when it is sharply defined in contrast while larger more diffuse features are perceived at much lower contrasts. The coding scheme then exploits the spatial redundancy implied by this heuristic to maintain high resolution where sharp definition exists and to reduce resolution elsewhere. Application of MAPS to several imagery types with compressions extending to below 0.2 bits per pixel is illustrated.
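One level of the contrast-adaptive idea can be sketched with integer-only arithmetic, in the spirit of (but much simpler than) MAPS; the 2 × 2 cell size and the contrast threshold are illustrative:

```python
import numpy as np

def maps_like(block, thresh=8):
    """Replace each 2x2 cell whose contrast (max - min) is below thresh
    with its integer mean, keeping full resolution elsewhere.

    Integer-only, with the mean taken by a shift rather than a divide,
    echoing the no-multiply/no-divide property. Even dimensions assumed.
    """
    img = np.asarray(block).astype(np.int32)
    out = img.copy()
    for y in range(0, img.shape[0], 2):
        for x in range(0, img.shape[1], 2):
            cell = img[y:y + 2, x:x + 2]
            if int(cell.max()) - int(cell.min()) < thresh:
                out[y:y + 2, x:x + 2] = cell.sum() >> 2   # mean of 4 via shift
    return out
```

Low-contrast cells collapse to a single value (reduced resolution), while sharply defined detail survives untouched.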
Aircraft and spacecraft employing Synthetic Aperture Radar (SAR) as a sensor will either have to perform on-board processing before telemetry or directly transmit the raw radar returns back to a ground station for processing. Although complete or partial on-board processing deservedly is receiving careful attention, present technology seems to favor ground station processing, which requires extremely high data rates to telemeter the raw radar returns. The usual bandwidth compression strategies that exploit redundancies in the scene being transmitted are inapplicable, however, since the radar returns from even adjacent resolution cells are approximately uncorrelated. Therefore, we turned to quantization of the radar returns to achieve some data rate reduction. In this study, we have investigated the effects of quantization by observing output images after one-bit, two-bit, and eight-bit quantization of the raw radar data. By comparison with the original image (ground truth), we can determine the degradation resulting from data or bandwidth reduction by quantization. Furthermore, the telemetry data rate can also affect output picture quality, since transmission errors may be functions of the data rate. To investigate this circumstance, we introduced bit errors with probabilities of 2⁻⁴ and 2⁻⁷. The former, being much higher than that expected in "normal" operation, presents a worst-case situation, while the latter may be fairly indicative of telemetry links of early space missions using SARs. We present output images that have been contaminated at these bit error rates.
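The quantization experiment can be mimicked on synthetic returns with a uniform quantizer; the [-1, 1) input range is an assumption of this sketch:

```python
import numpy as np

def quantize(samples, bits):
    """Uniform midrise quantization of samples in [-1, 1) to a given
    word length; at bits = 1 only the sign survives, the hard-limited
    case of the one-bit experiment."""
    levels = 2 ** bits
    idx = np.clip(np.floor((samples + 1.0) / 2.0 * levels), 0, levels - 1)
    return (idx + 0.5) * 2.0 / levels - 1.0
```

Bit errors at a chosen probability could then be injected into the coded words before image formation to reproduce the contamination study.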
A charge injection device (CID) solid-state video sensor/focal plane processor is described which can be used to implement Hadamard transform techniques to reduce video band-width. This device can be operated in two modes. In one mode, the output is a normal video signal. In the second mode, the output is the Hadamard transform of the image. This approach offers an opportunity to relieve the small size and low power requirements imposed by mini-RPV and guided weapon antijam video data link applications by performing the transform processing function of the airborne encoder directly on the image plane. A description of the CID imager, the one- and two-dimensional Hadamard transform implementation of the focal plane processing chip, and preliminary test results are included.
Applying a new algorithm for the Discrete Cosine Transform superior to any published to date, it is shown that an off-the-shelf microprocessor chip, the Am 2901 4-bit bipolar slice, can be employed in a 12-bit configuration to perform TV imagery data compression in real time. The method of compression is the hybrid technique due to Habibi -- DCT along a 32-pixel segment of a TV line and Differential Pulse Code Modulation line to line, thus processing only one eighth of each field at one time. The significance of the new algorithm is that it permits an all-digital implementation of a TV data compression system for Remotely Piloted Vehicles and spacecraft using "off-the-shelf" circuitry.
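The hybrid structure, a 32-point DCT along each line segment followed by line-to-line DPCM of corresponding coefficients, can be sketched as follows (floating-point NumPy, not the 12-bit Am 2901 arithmetic, and without the coefficient quantization a real coder needs):

```python
import numpy as np

def dct_1d(x):
    """Orthonormal DCT-II of one segment (N = 32 pixels in the paper)."""
    N = len(x)
    n = np.arange(N)
    basis = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N))
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return scale * (basis @ x)

def hybrid_encode(lines):
    """Habibi hybrid: DCT each segment, then first-order DPCM of
    corresponding coefficients from line to line."""
    coeffs = np.array([dct_1d(line) for line in lines])
    return np.vstack([coeffs[:1], np.diff(coeffs, axis=0)])
```

For identical successive lines every DPCM difference is zero, which is where the interline redundancy removal comes from.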
Transmission of images over a digital channel in the transform domain can lead to reduced bandwidth requirements. This is a consequence of redundancy reduction, as the linear transformation compacts the image energy into a small region. By transmitting the transform components that have the most energy/information, images can be reconstructed at the receiver with negligible degradation in subjective picture quality and with reduced bit requirements. The criterion for selecting these components is generally based on geometrical zone, magnitude, or variance, all in the transform domain. Magnitude sampling, although adaptive, requires additional bits because the locations of the selected components must be specified. The variance criterion, on the other hand, is in general adapted to the average picture statistics, and hence may not be an optimal selection for the specific image being processed. As a compromise between these two, hybrid sampling, which considers both magnitude and variance, is proposed. This technique is applied to GIRL and MOONSCAPE images which are quantized uniformly to 256 gray levels. Processing is carried out on (16 × 16) pixel subimages using discrete transforms such as Haar, Walsh-Hadamard, Hadamard-Haar, and discrete cosine. The mean square error (mse) between the original and reconstructed images for various data compression ratios utilizing hybrid selection is computed and compared with those for the magnitude and variance selections. The mse for hybrid selection approaches that for magnitude selection, which shows that the former is an attractive scheme for data compression with significant bit reduction and negligible increase in mse. Various ratios for magnitude-variance selection are being adopted; this may lead to an optimal ratio in terms of bit rate, mse, and image quality.
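A sketch of hybrid coefficient selection, scoring each coefficient by a mix of its per-block magnitude and its ensemble variance; the equal weighting is an illustrative choice, since the paper explores a range of magnitude-variance ratios:

```python
import numpy as np

def hybrid_select(block_coeffs, var_map, keep):
    """Keep the `keep` coefficients scoring highest on a blend of
    per-block magnitude (adaptive) and ensemble variance (fixed
    statistics). The 50/50 blend is an assumption of this sketch.
    """
    mag = np.abs(block_coeffs).ravel()
    var = var_map.ravel()
    score = 0.5 * mag / mag.max() + 0.5 * var / var.max()
    keep_idx = np.argsort(score)[-keep:]
    mask = np.zeros(mag.size, dtype=bool)
    mask[keep_idx] = True
    return mask.reshape(block_coeffs.shape)
```

Unselected coefficients are simply zeroed before the inverse transform at the receiver.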
Rate-distortion theory using the mean squared error criterion is often used to design digital image coding rules. The resulting distortion is, in theory, statistically equivalent to omitting components of the image from transmission. We compare a rate-distortion simulation using the discrete cosine transform to a method which is statistically equivalent to adding uncorrelated random noise to the image. This latter method is based on a PN (pseudo-noise) transform, which is generated from a Hadamard matrix whose core consists of the cyclic shifts of a binary maximum length linear shift register sequence. Visual comparisons of the two approaches are made at the same mean squared error. In all cases, the images encoded using the PN transform method showed superior definition of detail and less geometrical distortion at transform block boundaries than the images encoded using the discrete cosine method. The results of this experiment suggest that image appearance may be improved by designing transform coefficient quantization rules to approximate the effects of additive noise rather than to omit low energy image components, as dictated by conventional rate-distortion theory.
A previously-developed block adaptive Differential Pulse Code Modulation (DPCM) procedure has been combined with a buffer feedback technique. The result is an efficient variable rate DPCM algorithm. The new technique is fully adaptive, yet it retains the basic simplicity of DPCM. It utilizes the appropriate quantizer parameters and also assigns the available channel bandwidth according to need as determined by the local image structure. A buffer feedback procedure, previously reported by the authors, has been generalized and was implemented to control the bit rate selection. Examples demonstrate that the algorithm is successful in achieving adaptivity objectives. Although buffer control requires additional hardware, because of its relatively low speed, the impact on overall hardware complexity is negligible.
The Microfiche Image Transmission System (MITS) is intended to improve access by individuals to personnel records held in the Bureau of Naval Personnel's (BUPERS) Microform Personnel Records System (MPRS). The basic problem is to transmit quickly and economically requested microfiche images from BUPERS' central site in Washington, D.C., to remote sites such as Norfolk, Virginia, and San Diego, California. A preliminary system design for MITS has been completed which encompasses system component specifications as well as manual and man-machine procedures necessary to implement a microfacsimile transmission system. System components were selected as a result of an options analysis study. To reduce development costs the study attempted to identify commercially available components that would meet system design goals. No new component designs were required. Salient design features include laser-beam spinning-mirror scanners and recorders for use at the central and remote sites, a wideband satellite transmission link, and use of a dry-processed silver-halide output film. Personnel requirements for one central site and one remote site operating on three shifts daily include filling five different work positions requiring a total complement of eight people. Cost estimates conducted in conjunction with the options analysis study and preliminary design show that MITS is not now economically competitive with mail or air freight for BUPERS' operation. However, they suggest that technological advances will reduce MITS' recurrent costs below those of the other transmission schemes within 10 to 15 years. Since new developments in digital image processing will enhance the system's effectiveness, MITS will remain a promising area of application for innovations in this field. Potential applications for MITS exist Navy-wide as well as with the U.S. Postal Service, Defense Documentation Center, and National Technical Information Service.
The rapid evolution of integrated circuit technology is now impacting digital image processing displays. As a result, new display concepts are now emerging with capabilities and speeds previously unattainable. Examples include multiple simultaneous displays, large array storage (2048 x 2048), multiple display storage, real-time interactive convolution, interactive zoom and roam, and dynamic image presentation. The basis for these new capabilities is both technology and the human interface to the display. Each is discussed. One implementation of an advanced display system is described.
Capturing an image using a solid-state scanner requires focusing the subject image onto the scanner array with some type of lens. Various kinds of degradations occur, including cos⁴-law falloff, illumination non-uniformity, dust on the optical surfaces, and flaws in the imaging array. The combined effects of all sources of image acquisition error can result in serious cosmetic defects in the resulting image. Dramatic improvements in image quality can be achieved by multiplying the intensity value of each image pel by a correction factor:

    Pout = Pin × CFi

where

    Pout = corrected pel value
    Pin  = pel value as captured
    Pmax = maximum possible pel intensity value
    CFi  = correction factor for the ith position on the scan line

The correction factors (CFi) are determined by capturing an image of a "standard white material" and calculating the average white value for each position on the scan line; each CFi scales that position's average white value up to Pmax. The correction calculation requires multiply and divide operations which are too complicated to be performed in real time (21 million pels/second) in computer software. Therefore a 2-stage tabular lookup procedure has been implemented in bipolar RAM and PROM hardware. An additional feature of the approach is the possibility of performing any arbitrary intensity transformation by the substitution of a different PROM table.
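A software sketch of the correction arithmetic (the hardware performs it with a two-stage RAM/PROM table lookup so that no multiply or divide occurs at pel rate; an explicit per-position gain suffices here to show what the tables encode):

```python
import numpy as np

def build_correction_lut(white_line, p_max=255):
    """Per-position gain from a scan of a uniform white target.

    Each gain maps that position's average white response back up to
    p_max; a guard keeps dead positions from producing infinite gain.
    """
    white = np.asarray(white_line, dtype=float)
    return p_max / np.maximum(white, 1.0)

def correct_line(raw_line, gains, p_max=255):
    """Apply the per-position correction to one captured scan line."""
    out = np.asarray(raw_line, dtype=float) * gains
    return np.clip(out, 0, p_max).astype(np.uint8)
```

Substituting a different table corresponds to the arbitrary intensity transformation mentioned above.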
A system to perform digital analysis of graphical images has been developed and utilized in the analysis of six channel strip chart recordings over the past three years. Silicon linear photo-diode arrays were employed as the image digitizer, simplifying preprocessing techniques in the scanner. The scanner consists of control and logic electronics, light source, lenses, paper transport mechanism and six 128 x 1 photo-diode integrated circuit arrays. The graphics scanner is interfaced to a minicomputer system which includes display, storage, and analysis capabilities. The scanner's preprocessing includes threshold detection of edges and removal of reference grid lines from the digitized image. This system has been used to perform statistical analysis on wideband communications data as recorded on six channel strip chart recordings by the United States Air Force Communications Service at Richards-Gebaur Air Force Base. Several thousand meters of strip chart have been analyzed over the past three years, successfully demonstrating the utilization of solid state scanners and computerized analysis of graphical data.
The arrival of the microprocessor on the digital computing scene has created an entirely new philosophy for the electronics industry. In this paper we will be concerned with software structure from a systems standpoint. A number of these structures will be examined in some detail with attention to their operational characteristics. Three distributive software systems, the Dedicated System, the Non-Dedicated System, and the Hybrid System, will be discussed in a conceptual fashion. Advantages and disadvantages of each will be noted. Finally, the application of distributive microprocessor networks to the specific field of reconnaissance scanning is introduced from a conceptual viewpoint.
A digital processor has been designed and built to implement Lockheed's Phase Correlation technique at a rate of 30 correlations per second on 128 x 128 element images digitized to eight bits. Phase Correlation involves taking the inverse Fourier transform of the appropriately filtered phase of the Fourier cross-power spectrum of a pair of images to extract their relative displacement vector. It achieves sub-pixel accuracy with relative insensitivity to scene content, illumination differences and narrow-band noise. The processor, which is designed to accept inputs from a variety of sensors, is built with conventional TTL and MOS components and employs only a moderate amount of parallelism. It uses floating point arithmetic with equal exponents for real and imaginary parts. Multiplications are performed by table lookup. Application areas for the correlator include image velocity sensing, correlation guidance and scene tracking.
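The core of the technique can be sketched in a few lines (an integer-shift version; the hardware adds phase filtering, shared-exponent floating point, and table-lookup multiplies to reach 30 correlations per second):

```python
import numpy as np

def phase_correlate(a, b):
    """Estimate the displacement of image b relative to a by taking the
    inverse FFT of the phase of the cross-power spectrum and locating
    its peak."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    R = np.conj(A) * B
    R /= np.maximum(np.abs(R), 1e-12)       # keep only the phase
    surface = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(surface), surface.shape)
    # wrap displacements larger than half the frame to negative shifts
    h, w = surface.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Because the magnitude is normalized away, the peak location is insensitive to overall illumination differences between the two frames, which is the property the abstract highlights.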
The pricing structure of the new "third generation" microprocessors has made multiprocessing economically attractive. Nevertheless, changes are necessary in the classical von Neumann hierarchy of computer elements in order to implement a parallel CPU (central processing unit) concept. Therefore, an innovative technique is explored in this report that utilizes truly parallel processors in handling arrays of data. The technique uses processors which perform identical operations on different data to multiply computing speed. In this configuration, there is no theoretical upper limit to the number of processors used. An application of an array processor to pictorial pattern recognition is examined. In this example, 108 inexpensive microprocessors are utilized in an array to obtain an equivalent computing speed of 420 MIPS (million instructions per second). The hardware configuration, timing considerations, and software requirements are also presented.
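The single-instruction, multiple-data idea behind the array can be illustrated with a toy partition (the tiling and the operation below are illustrative; note that 420 MIPS spread over 108 processors works out to only about 3.9 MIPS per device, well within reach of an inexpensive microprocessor):

```python
import numpy as np

def simd_step(tiles, op):
    """Apply the identical operation to every tile, as the lockstep
    processor array would; the list comprehension stands in for 108
    processors executing the same instruction simultaneously."""
    return [op(t) for t in tiles]

image = np.arange(16).reshape(4, 4)
tiles = [image[i] for i in range(4)]   # one row per "processor" (toy partition)
# every processor runs the same thresholding instruction on its own data
thresholded = simd_step(tiles, lambda t: (t > 7).astype(int))
```

The speedup is linear in the processor count precisely because no processor ever waits on another's result within a step.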
Present solid-state infrared staring imagers are characterized by large element-to-element nonuniformities in both dark currents and responsivities. These nonuniformities result in "fixed pattern" noise which can exceed the amplitude of the desired signal by more than 500 times. For solid-state infrared staring imagers to be a viable alternative to other infrared imaging systems, real-time nonuniformity compensation must be developed. This paper presents a real-time digital compensation technique that corrects for the non-uniformities in both dark currents and responsivities of solid-state electro-optical imaging arrays. Experimental results yielded a measured net gain in the signal to fixed pattern noise ratio of 46 dB. Hardware is described and results demonstrated for a 32 by 32 CID (charge injection device) visible imaging array using a simulated infrared image and background. Basic MTI (moving target indication) and target correlation features as well as extrapolation to 128 by 128-element staring arrays are also discussed.
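A minimal sketch of such a compensation, assuming a standard two-point (dark-frame plus flat-field) calibration; the paper's exact hardware arithmetic is not reproduced here:

```python
import numpy as np

def calibrate(dark_frame, flat_frame):
    """Derive per-element offset (dark current) and gain (responsivity)
    terms from a dark frame and a uniformly illuminated flat frame."""
    resp = flat_frame - dark_frame
    gain = resp.mean() / np.maximum(resp, 1e-9)   # normalize responsivity
    return dark_frame, gain

def compensate(raw, offset, gain):
    """Subtract each element's dark current, then scale out its
    responsivity so the fixed pattern cancels."""
    return (raw - offset) * gain
```

After compensation every element reports the same value for the same scene radiance, which is what removes the fixed-pattern noise.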
In response to the need for more efficient processing of high volume data from intensified detectors, the Kitt Peak National Observatory Panoramic Detector Program designed, built and operated a digital two-dimensional image processor capable of computing, accumulating, and display-windowing autocorrelograms on-line at the 4-meter telescope. The hard-wired digital autocorrelator with its three different memories and its 10 MHz arithmetic unit was used to visually resolve binary stars with separations between 0.04 and 0.16 arc-seconds. It became the precursor of a sizable family of on-line digital image processors developed at Kitt Peak. This paper describes the system design, hardware algorithms and functional aspects of the autocorrelator. It also discusses some instrumental developments that were inspired by the autocorrelator project.
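The accumulation the correlator performed in hardware can be sketched via the power-spectrum (Wiener-Khinchin) route; the frame below, with two point images, stands in for a short-exposure frame of a binary star:

```python
import numpy as np

def accumulate_autocorrelation(frames):
    """Sum the spatial autocorrelation of each short-exposure frame
    (computed as the inverse FFT of the power spectrum), as the
    hard-wired correlator accumulated correlograms in its memory."""
    acc = None
    for f in frames:
        F = np.fft.fft2(f - f.mean())          # remove the DC term
        ac = np.fft.ifft2(np.abs(F) ** 2).real
        acc = ac if acc is None else acc + ac
    return np.fft.fftshift(acc)                # put zero lag at the center
```

For a binary star the accumulated correlogram shows secondary peaks at plus and minus the separation vector, which is how the instrument made sub-seeing separations visible.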
We have recently assembled a precision CRT display system which uses 16-bit digital-to-analog converters to control its x and y deflections. We have used this hardware to perform spatial warping experiments with digital images in which the desired locations of the pixels are computed according to the warp functions. For moderately distorted images, the cosmetic defects in imagery produced by this process may be acceptable in some of our applications. We show that the polynomials describing the warping functions can be efficiently evaluated by finite difference tables. We discuss the design of microprogrammed controllers, which can calculate the warped coordinates at the same rate that the CRT beam exposes the film by using difference tables.
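The finite-difference idea: once the table is initialized, each successive sample of a polynomial at unit steps costs only a handful of additions, with no multiplies in the pixel-rate loop. A sketch (the degree and coefficients are illustrative):

```python
def forward_difference_eval(coeffs, n):
    """Evaluate the polynomial sum(coeffs[k] * x**k) at x = 0, 1, ..., n-1
    using a forward-difference table."""
    deg = len(coeffs) - 1
    p = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
    # tabulate p(0), ..., p(deg), then reduce to the difference column
    vals = [p(j) for j in range(deg + 1)]
    d = []
    while vals:
        d.append(vals[0])
        vals = [b - a for a, b in zip(vals, vals[1:])]
    out = []
    for _ in range(n):
        out.append(d[0])
        for j in range(deg):   # additions only, as in the controller
            d[j] += d[j + 1]
    return out
```

This is why a microprogrammed controller built around such tables can keep up with the CRT beam: the per-coordinate work is a fixed, small number of adds regardless of the polynomial's degree of distortion.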
The principle of SAR image formation is reviewed in preparation for a discussion of both optical and digital processing techniques. The tilted-plane optical-processing approach is presented as being representative of optical techniques. Since the newer digital approaches can take several forms, three classes of digital processors are examined: direct convolution, frequency multiplexing, and frequency analysis of dechirped data. A subjective listing of the relative merits for both processing media is presented. Both are found to be technically viable. The final choice will depend primarily upon the application requirements.
There are a number of possible industrial and scientific applications of nanosecond cineradiographs. While the technology exists to produce closely spaced pulses of X rays for this application, the quality of the time-resolved radiographs is severely limited. The limitations arise from the necessity of using a fluorescent screen to convert the transmitted X rays to light and then using electro-optical imaging systems to gate and to record the images with conventional high-speed cameras. It has been proposed that in addition to the time-resolved images, a conventional multiply-exposed radiograph be obtained. This paper uses simulations to demonstrate that the additional information supplied by the multiply-exposed radiograph can be used to improve the quality of digital image restorations of the time-resolved pictures over what could be achieved with the degraded images alone. Because of the need for image registration and rubber sheet transformations, this problem is one which can best be solved on a digital, as opposed to an optical, computer.
Image processing is striving to attain the status of a scientific discipline. What has evolved at the moment is a collection of techniques and procedures for manipulating pictorial data, but a cohesive theory of image processing remains to be developed. In its current state, the digital computer plays an important role. It is a powerful tool for developing, refining and testing algorithms. Its effectiveness is dependent, however, on the programming language which is available to the image processing analyst. The language most widely used today, FORTRAN IV, is not well suited as an algorithmic language. This paper explores the possible use of APL in a picture processing environment. It shows that improvements can be made with APL and what those improvements are.
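The contrast the paper draws can be illustrated outside APL itself: a whole-array expression versus explicit subscript loops (a contrast stretch, shown here in Python with NumPy standing in for APL's array notation):

```python
import numpy as np

img = np.arange(12.0).reshape(3, 4)

# FORTRAN-style: explicit loops over subscripts
out_loop = np.empty_like(img)
for i in range(3):
    for j in range(4):
        out_loop[i, j] = (img[i, j] - img.min()) / (img.max() - img.min())

# APL-style: one whole-array expression says the same thing
out_array = (img - img.min()) / (img.max() - img.min())

assert np.allclose(out_loop, out_array)
```

The array form reads as the algorithm's definition rather than its bookkeeping, which is the kind of improvement the paper argues APL offers the image processing analyst.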
Nonlinear image restoration is posed in terms of optimization by iterative computations. The iterative computations are decomposed spatially to develop an algorithm which is sensitive to local image variations. The application of this decomposed local algorithm to signal-dependent noise is discussed.
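The paper's specific spatial decomposition is not given in the abstract, but the iterative framework can be sketched with a classic example, van Cittert iteration, which repeatedly adds back the residual between the observation and the re-blurred estimate:

```python
import numpy as np

def blur(x, psf):
    """Circular convolution with the point-spread function via the FFT."""
    return np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(psf, x.shape)).real

def van_cittert(g, psf, beta=1.0, iters=300):
    """Iterative restoration: f <- f + beta * (g - H f).  A spatially
    decomposed variant would let beta (or the update) vary locally with
    image statistics, as the paper proposes for signal-dependent noise."""
    f = g.copy()
    for _ in range(iters):
        f = f + beta * (g - blur(f, psf))
    return f
```

For a well-conditioned blur the iteration converges to the unblurred image; local adaptation then amounts to choosing the update strength per region.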
Photography of the retina has long been accepted by ophthalmologists as a method of recording and detecting tissue change for early diagnosis of ocular disease. Although much can be learned by photographs, information in image form can and is lost by an observer. By digitizing the image, using a microdensitometer and minicomputer, photographs can be converted to data which can be machine handled for small changes that the observer can not detect in photographic image form. An advantage of digital analysis of data over direct observational methods is that a great many more patients can be measured, for screening purposes, in much less time and without the costly services of an ophthalmologist. This paper presents a technique which develops the above approach in application to early diagnosis of glaucoma. It makes use of magnified photographic images of the optic nerve head and transforms these images into distributions of density. Measured shifts in the peaks of these distributions become sensitive measures of changing tissue structure in the optic nerve itself and consequently in the early diagnosis of glaucoma.
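A minimal sketch of the density-distribution step (the bin count and density range below are assumptions, not the authors' parameters):

```python
import numpy as np

def density_peak(image, bins=64):
    """Histogram the optical-density values of a digitized nerve-head
    image and return the density at the histogram peak; a shift in this
    peak between visits is the proposed indicator of tissue change."""
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 4.0))
    i = int(np.argmax(hist))
    return 0.5 * (edges[i] + edges[i + 1])   # center of the peak bin
```

Comparing the returned peak density across successive photographs gives the screening measurement without requiring an observer to judge the images directly.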
This paper describes an approach to the problem of finding small man-made objects in outdoor scenes, and gives some initial results of using this approach. In many military applications, the objects of interest in low-resolution imagery encompass areas covering less than 10 x 10 pixels. In such situations, there are no detailed geometrical features present in objects that may be used for recognizing them in a large field of view. A characteristic often used for finding small objects is the contrast between the objects and background. The contrast-based features work adequately in simple scenes with a relatively clean background and an absence of clutter. The problem becomes much more difficult when the objects are located in outdoor scenes with real noise (bushes and other natural terrain). We believe that some gross structural properties of man-made objects, for example the "blockiness" of an object boundary, can serve as strong features in distinguishing them from background clutter. Our preliminary results on this work look very encouraging and are presented in this paper.
Conventional two-dimensional low-pass filtering of images for wideband noise suppression invariably, if not by definition, degrades the spatial definition of objects within the image. Thus, SNR improvement is achieved at the expense of edge fidelity. We present in this paper a technique which offers improved noise suppression relative to conventional filtering, and at the same time tends to preserve the important high frequency content of the target image.
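The abstract does not specify the authors' technique; as a point of comparison, a median filter is a standard smoother that suppresses impulsive wideband noise while leaving a step edge in place, unlike a linear low-pass average:

```python
import numpy as np

def median_filter_1d(x, k=3):
    """Median smoothing over a sliding window of width k: a noise spike
    narrower than the window is removed outright, while a step edge
    passes through unblurred."""
    pad = k // 2
    xp = np.pad(x, pad, mode='edge')
    return np.array([np.median(xp[i:i + k]) for i in range(len(x))])
```

A linear filter of the same width would instead spread the spike into its neighbors and round off the edge, illustrating the SNR-versus-edge-fidelity trade the paper addresses.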
The images produced by an X-ray computed tomography (CT) system are significantly different from those normally encountered in digital image processing. Among these special properties are: large dynamic range, low spatial resolution, and noise which is significantly correlated from pixel to pixel. These properties present unique problems in the display of the image. We describe some techniques used in CT displays and discuss optimum processing techniques for maximizing resolution for a given noise level.
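One standard CT display technique is window/level mapping, which spreads a chosen slice of the large dynamic range over the full gray scale; the abstract does not give the authors' exact method, so this is the conventional formulation:

```python
import numpy as np

def window_level(ct, level, width, gray_max=255):
    """Map CT values inside the window [level - width/2, level + width/2]
    linearly onto 0..gray_max; values outside clip to black or white."""
    lo, hi = level - width / 2, level + width / 2
    out = (np.asarray(ct, dtype=float) - lo) / (hi - lo) * gray_max
    return np.clip(np.round(out), 0, gray_max).astype(np.uint8)
```

Narrowing the window raises displayed contrast within the tissue range of interest, at the cost of also amplifying the correlated pixel noise the paper discusses.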
A high resolution cylindrical scanning multi-axial tomography unit is under development which will be capable of synchronously recording x-ray profile data of sufficient axial range to reconstruct sets of up to 250 contiguous 1 mm thick cross-sections encompassing the intact thorax at rates of 60 sets per second; i.e., up to 15,000 cross-sectional images per second. The practicality of this system depends on the development of a digital processor capable of reconstructing hundreds of cross-sections per second. Using a convolution reconstruction algorithm, the input data and intermediate result precision required throughout the algorithm execution have been studied by computer simulation using profile data derived from mathematically simulated test objects and experimental animal data. A prototype design for a highly parallel all-digital hardware reconstruction unit has been developed, employing a new generation of digital components. A small prototype section of this design using several of the new components is currently executing 60 million arithmetic operations per second. The full scale version of this high-speed processing unit is projected to reconstruct 500 to 1000 cross-sections per second.
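A convolution reconstruction can be sketched in software: each profile is convolved with a discrete ramp kernel and then back-projected across the image grid (the nearest-neighbor interpolation and Ram-Lak kernel here are simplifications of what the hardware unit would implement):

```python
import numpy as np

def ramp_kernel(half):
    """Discrete ramp (Ram-Lak) convolution kernel of length 2*half + 1
    used to filter each profile before back-projection."""
    k = np.zeros(2 * half + 1)
    k[half] = 0.25
    odd = np.arange(1, half + 1, 2)
    k[half + odd] = k[half - odd] = -1.0 / (np.pi * odd) ** 2
    return k

def reconstruct(profiles, angles):
    """Convolution back-projection: filter each profile, then smear it
    back across the grid along its projection angle."""
    n = profiles.shape[1]
    kern = ramp_kernel((n - 1) // 2)
    xs = np.arange(n) - (n - 1) / 2
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((n, n))
    for p, th in zip(profiles, angles):
        q = np.convolve(p, kern, mode='same')
        t = X * np.cos(th) + Y * np.sin(th) + (n - 1) / 2
        idx = np.clip(np.round(t).astype(int), 0, n - 1)
        img += q[idx]
    return img * np.pi / len(angles)
```

Each profile's contribution is independent of the others, which is exactly what makes the algorithm amenable to the highly parallel hardware the paper describes.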