Presented here is an efficient facsimile data coding scheme (CSM) which combines an extended run-length coding technique with a symbol recognition technique. The CSM scheme first partitions the data into run-length regions and symbol regions. The run-length regions are then coded by a modified Interline Coding technique, while the data within the symbol region is further subpartitioned into regions defined as symbols. A prototype symbol library is maintained, and as each new symbol is encountered, it is compared with each element of the library. These comparisons produce a signature for the new symbol. A tolerance threshold is used to evaluate the "goodness" of the comparison. If the tolerance threshold indicates a matching symbol, then only the location and the library address need be transmitted. Otherwise the new symbol is both transmitted and placed in the library. For finite-sized libraries a scoring system determines which elements of the library are to be replaced by new prototypes. Simulation results are demonstrated for both CCITT and Xerox standard documents.
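As an illustration of the symbol-matching step, here is a minimal Python sketch assuming binary symbol bitmaps and a simple mismatch-fraction tolerance; the paper's actual signature computation and library scoring are not reproduced.

```python
import numpy as np

def match_symbol(symbol, library, tolerance=0.05):
    """Compare a binary symbol bitmap against library prototypes.

    Returns the index of the first prototype whose pixel-mismatch fraction
    falls below `tolerance`, or None if no match is found.
    (Hypothetical illustration; the threshold and size-handling rules are assumptions.)
    """
    for idx, proto in enumerate(library):
        if proto.shape != symbol.shape:
            continue                      # size mismatch: cannot be the same symbol
        mismatch = np.count_nonzero(proto != symbol) / symbol.size
        if mismatch <= tolerance:
            return idx                    # transmit only location + library address
    return None                           # transmit the symbol and add it to the library
```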
The approaches to achieving data compression when the source is a class of images have generally been variants of either unitary transform encoding or time domain encoding. Various hybrid approaches using DPCM in tandem with unitary transforms have been suggested. However, the problems of picture statistics dependence and error propagation cannot be solved by these approaches because the transformed picture elements form a non-stationary signal class. Naturally, a constant set of DPCM predictor coefficients cannot be optimal for all users. However, a composite non-stationary signal source can be decomposed into simpler subsources if it exhibits certain characteristics. Adaptive Hybrid Picture Coding (AHPC) is considered as a method of extracting these subsources from the composite sources in such a way that the overall communication problem can be viewed as two different, but connected communication requirements. One requirement is the transmission of a set of sequences that are formed by the predictor coefficients. Each of these sequences forms a subsource. The additional requirement is the transmission of the error sequence. An intermediate fidelity requirement is presented which describes the effects of predictor parameter distortion on the transmission requirements for the error signal. The rate distortion bound on the channel requirements for the transmission of the predictor coefficients and the error signal is determined subject to a dual fidelity criterion. The signal class is a set of one dimensional unitary transformed images.
If a picture is oversampled by a factor of N in both x and y directions, the resulting subarray for each original pixel contains up to N² log₂L bits of information, where L is the number of grey levels in each element of the subarray. The original pixel needs to be recorded at only about 8 bits for human viewing so the subarray can carry additional information. For example, a terrain map can be coded to include the x-y coordinates, the terrain height, and a geological descriptor. We describe a simple binary code (L=2) in which each 10 by 10 subarray (N=10) has its weight (the number of ones) constrained to yield 9 decimal descriptors. The code is self-synchronous so that no elaborate positioning techniques are required to read out the data.
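A minimal sketch of the weight idea for N = 10, L = 2: the number of ones in the subarray stands in for the displayed grey level, while the arrangement of those ones is free to carry the extra descriptors. The block contents below are hypothetical.

```python
import numpy as np

def subarray_weight(block):
    """Recover the displayed grey level of an oversampled pixel from the
    weight (number of ones) of its N x N binary subarray.

    For N = 10, L = 2 the weight ranges 0..100, so it can approximate the
    original pixel value while the particular arrangement of the ones
    carries additional descriptors. (Hypothetical illustration; the paper's
    exact weight constraints are not reproduced.)
    """
    return int(np.count_nonzero(block))

# Example: a 10 x 10 block with 37 ones displays as grey level 37/100.
block = np.zeros((10, 10), dtype=np.uint8)
block.flat[:37] = 1
print(subarray_weight(block))   # -> 37
```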
We present a DPCM technique that uses a fixed-error approach to minimize the loss of information in compression of a picture. The technique uses an initial N-bit quantization of the image and zero-error encoding of the difference signal. It produces no slope overload and a compression ratio of about 4 to 1. We compare the technique to three other encoding schemes, including a new, "one-pass", cosine transform encoder.
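The fixed-error idea can be sketched along one scan line as follows, assuming 8-bit input samples and a previous-sample predictor; the paper's entropy coder for the difference signal is not reproduced.

```python
import numpy as np

def fixed_error_dpcm(row, n_bits=6):
    """Sketch of the fixed-error idea: quantize the samples once to N bits,
    then encode the quantized difference signal exactly (zero further error),
    so the total error is bounded by the initial quantizer step.

    Hypothetical parameters; only the prediction/difference step is shown.
    """
    step = 256 // (1 << n_bits)                 # assume 8-bit input samples
    q = (np.asarray(row, dtype=np.int32) // step) * step
    diffs = np.diff(q, prepend=0)               # previous-sample prediction
    return diffs

def reconstruct(diffs):
    return np.cumsum(diffs)                     # exact recovery of the quantized samples
```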
A highly efficient, noise immune data redundancy reduction algorithm for the transmission of facsimile data over digital communication channels is described. A summary of prior work in spatial distance data compression technology (including source encoding, channel encoding, and synchronization coding) is presented to allow comparison of the performance of the DATALOG-developed algorithm used in the Tactical Digital Facsimile program with various other data compression techniques submitted for consideration to the International Telegraph and Telephone Consultative Committee (CCITT) as the world standard for digital Group 3 facsimile equipment.
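For orientation, a minimal Python sketch of the underlying run-length (spatial-distance) description of a binary scan line; the DATALOG algorithm's source, channel, and synchronization codes are not reproduced.

```python
def run_lengths(line):
    """Run-length description of one binary facsimile scan line.

    Minimal sketch of the spatial-distance idea only.
    """
    runs, current, count = [], line[0], 0
    for pel in line:
        if pel == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = pel, 1
    runs.append((current, count))
    return runs

print(run_lengths([0, 0, 0, 1, 1, 0, 0, 0, 0, 1]))
# -> [(0, 3), (1, 2), (0, 4), (1, 1)]
```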
This paper presents an image coding algorithm using spline functions that is competitive with the more conventional orthogonal transform methods at data rates of 1 bit/pixel or less. Spline coding has the added attraction of an optical implementation arising from the fact that least squares image approximations also produce least squares approximations to the image derivatives. A first-order spline is used to approximate the appropriate-order derivative of the image, the order being determined by an analysis presented in the paper. The image derivative is then encoded and transmitted to the user, who reconstructs the image by a (k-1)-order integration which can be done optically.
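A minimal sketch of the derivative-transmit/integrate-reconstruct idea for a first-order case along one image line, with a hypothetical quantizer step; the least-squares spline fit and the derivative-order analysis from the paper are omitted.

```python
import numpy as np

def encode_derivative(signal, step=4):
    """Transmit a coarsely quantized first-order derivative of one image line.

    Hypothetical quantizer step; only illustrates the derivative-coding idea.
    """
    deriv = np.diff(np.asarray(signal, dtype=np.int32), prepend=0)
    return np.round(deriv / step).astype(np.int32)          # quantized derivative

def decode_derivative(codes, step=4):
    return np.cumsum(codes * step)                           # first-order integration at the receiver
```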
This paper presents a new "Plane of Minimum Variation" (PMV) image preprocessing technique. After this preprocessing a three-dimensional (3-D) image, while still containing information in the temporal direction, can be further processed by 2-D methods. For each current pel, the PMV technique localizes a plane in the 3-D image representation. This plane contains those pels from the current pel's neighborhood whose luminance values are close to the luminance of the current pel. Based on parameter V, which is a measure of the "closeness", two realizations of the PMV technique are presented. The results of the simulation show that by using the PMV technique, parameter V can be improved (decreased) by 8-12 dB, compared to the image which is not preprocessed by the PMV technique.
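A rough sketch of the plane-selection idea, assuming three candidate planes through the current pel in the (t, y, x) volume and using the mean absolute luminance difference as the closeness measure; the paper's plane definitions and parameter V are only approximated here.

```python
import numpy as np

def plane_of_minimum_variation(video, t, y, x):
    """Pick, for the current pel, the neighborhood plane whose pels are
    closest in luminance to the current pel (smallest mean |difference|).

    Assumes an interior pel and three hypothetical candidate planes; this is
    an approximation of the PMV idea, not the paper's exact construction.
    """
    c = int(video[t, y, x])
    planes = {
        "spatial":      video[t, y-1:y+2, x-1:x+2],     # same frame
        "temporal-row": video[t-1:t+2, y, x-1:x+2],     # same row across frames
        "temporal-col": video[t-1:t+2, y-1:y+2, x],     # same column across frames
    }
    scores = {k: np.mean(np.abs(p.astype(int) - c)) for k, p in planes.items()}
    return min(scores, key=scores.get)
```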
For telescopes operating at optical wavelengths, the turbulence of the atmosphere limits the resolution of space objects to about one second of arc, although the diffraction limit of the largest telescopes is many times as fine. We discuss an image processing technique that uses interferometer data (the modulus of the Fourier transform) to reconstruct diffraction limited images. Data from a stellar speckle interferometer or from an amplitude interferometer can be used. The processing technique is an iterative method that finds a real, non-negative object that agrees with the Fourier modulus data. For complicated two-dimensional objects, the solutions found by this technique are surprisingly unique. New results are shown for simulated speckle interferometer data having realistic noise present.
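The iteration can be illustrated with a generic error-reduction style sketch that alternates between enforcing the measured Fourier modulus and enforcing realness and non-negativity; this is not the paper's exact algorithm or constraint set.

```python
import numpy as np

def error_reduction(measured_modulus, n_iter=200, seed=0):
    """Seek a real, non-negative object consistent with a measured Fourier
    modulus by alternating projections (generic error-reduction sketch)."""
    rng = np.random.default_rng(seed)
    g = rng.random(measured_modulus.shape)               # random starting object
    for _ in range(n_iter):
        G = np.fft.fft2(g)
        G = measured_modulus * np.exp(1j * np.angle(G))  # enforce measured modulus
        g = np.real(np.fft.ifft2(G))
        g[g < 0] = 0                                      # enforce non-negativity
    return g
```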
Obtaining diffraction limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. Using an "optimal" filtering approach, we have developed a speckle imaging reconstruction method. This method is based on a non-linear integral equation which is solved using principal value decomposition. The method has been implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated.
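The regularizing idea behind principal-value (singular-value) truncation can be sketched on a discretized version of such an integral equation; the speckle-imaging kernel itself is not shown, and the cutoff below is an assumption.

```python
import numpy as np

def truncated_svd_solve(K, g, rel_cutoff=1e-2):
    """Solve K f = g while discarding noise-dominated components: singular
    values below rel_cutoff * s_max are truncated (sketch only)."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    keep = s > rel_cutoff * s[0]                       # truncate small singular values
    return Vt[keep].T @ ((U[:, keep].T @ g) / s[keep])
```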
A new technique is described for improving transform encoded signal reconstruction by extrapolating missing coefficients in the transform domain. The extrapolation algorithm is very simple and it can easily be incorporated into one- and two-dimensional transform hardware. Examples of signal reconstruction are included for one- and two-dimensional signals. The extrapolation algorithm is especially simple in the case of the Haar transform, due to the global-local properties of the Haar coefficients. The generalized formula is derived for both Haar and Walsh-Hadamard transforms. The incorporation of the spectral extrapolation into a real time bandwidth compression system is also described.
Two algorithms are developed which address the problem of estimating the magnitude and phase of the optical transfer function associated with a blurred image. The primary focus of the research is on the estimate of the phase of the optical transfer function. Once an estimate of the optical transfer function has been made, the corresponding blurred image is Wiener filtered to estimate the original unblurred image (the object). Results are demonstrated on computer simulated blurs and also on real world blurred imagery.
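Once an estimate of the optical transfer function is in hand, the restoration step is a standard Wiener filter; a minimal sketch, assuming a constant noise-to-signal ratio, is given below. The OTF magnitude/phase estimation itself is not reproduced.

```python
import numpy as np

def wiener_restore(blurred, otf_estimate, nsr=0.01):
    """Wiener-filter a blurred image given an estimated OTF.

    `otf_estimate` is assumed to be the OTF sampled on the same frequency
    grid as the image's FFT; `nsr` is an assumed constant noise-to-signal ratio.
    """
    B = np.fft.fft2(blurred)
    H = otf_estimate
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)            # Wiener filter
    return np.real(np.fft.ifft2(W * B))                # estimate of the unblurred object
```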
The technique of x-ray scanning of the head by computed tomography produces an image of the internal structure of both bone and soft tissue, displayed as a matrix of picture points. The display of this image of some 17,200 elements of varying shades of grey produces complex and subtle patterns which cannot be fully recognized by the human visual system. The method reported here uses fundamental statistical properties of the image to yield readily visible changes in CT images which extend diagnostic capabilities. This report is intended to demonstrate that clinically based physicians can directly apply readily available computer techniques to diagnostic problems. A set of image processing algorithms is described.
The problem of recognition of objects in images is investigated from the simultaneous viewpoint of image bandwidth compression and automatic target recognition. A hypothetical scenario is suggested in which recognition is implemented on features in the block cosine transform domain which is useful for data compression as well. Useful features from this cosine domain are developed based upon correlation parameters and homogeneity measures which appear to successfully discriminate between natural and man-made objects. The Bhattacharyya feature discriminator is used to provide a 10:1 compression of the feature space for implementation of simple statistical decision surfaces (Gaussian classification).
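The feature-compression criterion is the Bhattacharyya distance between class-conditional Gaussians; the standard closed form is sketched below (feature extraction from the block cosine transform domain is not shown).

```python
import numpy as np

def bhattacharyya_gaussian(m1, c1, m2, c2):
    """Bhattacharyya distance between two multivariate Gaussian classes,
    given their means (m1, m2) and covariance matrices (c1, c2)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    c = 0.5 * (c1 + c2)
    dm = (m2 - m1).reshape(-1, 1)
    term1 = 0.125 * (dm.T @ np.linalg.inv(c) @ dm).item()
    term2 = 0.5 * np.log(np.linalg.det(c) /
                         np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return term1 + term2
```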
An adaptive technique for providing effective enhancement of latent fingerprints is presented. The adaptive technique is described and examples are shown, including an enhanced print prepared for a trial exhibit. Image preprocessing considerations are discussed for obtaining optimal enhancement. Results of the adaptive technique are compared to those obtained with the conventional Fourier filtering enhancement.
In the digital analysis of SEASAT synthetic aperture radar (SAR) imagery of ocean waves, it is desirable to first correct sensor-induced geometric distortions. Chief among these is the nonlinear range distortion due to the presentation of optically-processed SAR imagery in the slant range plane. For the average SEASAT case of a 20° incident illumination angle, range distances are severely compressed, leading to inordinately short wave periods in the imagery's range direction. An efficient digital filtering algorithm that accurately corrects for this distortion has been developed; from a set of equally-spaced slant plane digitized samples the algorithm produces a set of equally-spaced ground plane samples using a table look-up filtering procedure. Consequently, customary FFTs, which operate on uniformly sampled data, may be employed in detecting dominant wave periods and directions from the geometrically corrected SAR imagery. Further, by converting imagery collected at differing illumination angles to a common coordinate system, the correction makes possible a useful comparison of spectra obtained from nonoverlapping image regions, thereby increasing the reliability of the detection decision.
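A minimal sketch of the slant-to-ground resampling, assuming a flat earth (ground range G = sqrt(R^2 - h^2) for slant range R and altitude h) and using simple linear interpolation in place of the paper's table look-up filtering.

```python
import numpy as np

def slant_to_ground(slant_samples, r0, dr, altitude, n_out=None):
    """Resample equally spaced slant-range samples onto an equally spaced
    ground-range grid under a flat-earth assumption.

    r0 is the slant range of the first sample, dr its spacing; linear
    interpolation stands in for the table look-up filter.
    """
    slant_samples = np.asarray(slant_samples, dtype=float)
    n = len(slant_samples)
    R = r0 + dr * np.arange(n)                 # slant ranges of the input samples
    G = np.sqrt(R**2 - altitude**2)            # corresponding (nonuniform) ground ranges
    n_out = n_out or n
    G_uniform = np.linspace(G[0], G[-1], n_out)
    return np.interp(G_uniform, G, slant_samples)
```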
A new Image Digitizer has been developed for digitization of photographic transparencies for digital image processing. This system uses a linear, 1024 element CCPD array mounted on a linear stage to produce a 1024 x 1024 pixel sampled output. The Image Digitizer has several flexible and unique capabilities. It has continuous zoom magnification over a log range. Picture sample spacing is adjustable from 1.6 µm to 16 µm, giving a scan format capability from 1.6 to 16 mm. The system contains a focus indicator and adjusts to film transmittance by integration time control and source brightness control. In this manner the full 2500:1 dynamic range of the detector is used for film transmittance sampling. The ID has a rear projection viewing display for alignment and scaling of the input transparency. In this paper we will discuss the technology of self-scanned linear detector arrays in image digitization. This will include design parameters, measurement accuracy capabilities and test results.
A computer with unique architecture is making previously unexplored techniques feasible for a wide range of applications, including image processing. The Array Processor provides very fast computing power, useful in both instrumentation systems and research. For example, two-dimensional Fast Fourier Transforms are now being performed on minicomputer systems enhanced by an Array Processor; 512 x 512 real data points are processed in approximately 1.6 seconds. This paper presents an overview of Array Processor hardware and software in an examination of how this performance is achieved. It is designed to help scientists and engineers gain an appreciation of parallel processing and its potential.
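For a sense of the operation being benchmarked, a minimal sketch of a 512 x 512 two-dimensional FFT on real data; the Array Processor timings quoted above are not reproduced here, this simply shows and times the same operation on a general-purpose machine.

```python
import time
import numpy as np

# 512 x 512 two-dimensional FFT on real data (hypothetical random input).
data = np.random.rand(512, 512).astype(np.float32)

t0 = time.perf_counter()
spectrum = np.fft.fft2(data)
elapsed = time.perf_counter() - t0
print(f"512 x 512 2-D FFT in {elapsed:.3f} s")
```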
As part of an overall program to develop techniques for the intensification of severely underexposed photographic negatives, a technique involving the use of a scanning electron microscope (SEM) to scan the film and display the electron beam scattering intensity on a CRT was investigated. While the program was curtailed before the technique could be fully evaluated, the ability to enhance underexposed film by the equivalent of 5 stops (a factor of 32) was demonstrated. This compares favorably with other techniques being investigated, such as autoradiographic sensitization. The SEM technique has the further advantages of being real-time, maintaining the high resolution of the original negative, and being nondestructive. In addition to the above-mentioned application, this technique has promise as a convenient means of scanning photographic images for digitization and signal processing.
As shown by L. D. Harmon of Bell Labs, it is difficult to recognize faces when they are shown in a coarse grid of intensity modulated blocks. When spatial frequencies above the basic grid frequency are removed (filtered out), the recognizability improves tremendously. In Nuclear Medicine, we are being presented with pictures of relatively coarse structure. In these, we are looking for "unknowns"--lesions that might exist or unusual shapes of organs. So, it is even more important that the eye does not get distracted by structure (= noise) in the picture which is not inherent in the distribution of radioactivity in the patient. So far, pictures generated by computer from an X-Y array of numbers have had a lot of spatial frequency noise. The Continuous Tone Image (CTI), developed at Baird Corporation, is a computer-generated analog display for its gamma camera.
The advent of solid-state linear charge-coupled devices (CCD's), as well as the recent developments in high-speed digital-computing elements, has made possible real-time processing of video image information at high data rates. An end-to-end (film transparency in to image out) mechanization of a high-resolution, high-speed film scanning system employing optically butted linear CCD's is described. The system includes a precision film scanner, a digital data transmitter that performs 2:1 differential pulse code modulation (DPCM) data compression as well as data serialization, and a digital data receiver that performs the inverse data compression algorithm. End-to-end system offset and gain correction are performed in real time on a pixel-by-pixel basis, removing objectionable artifacts caused by CCD nonuniformities. The resulting video data stream can either be viewed on a soft copy TV monitor or be read into a hard copy laser recorder. The scan spot size and pitch are less than 3 microns.
The military as well as some commercial users of technical manuals are moving toward a universal requirement of microfilmability for these documents. The halftone photographic process presently utilized for illustration purposes is not suitable for microfilming. The solution currently employed is the production of a line art illustration by manually tracing the various objects of interest in the photographs. This technique results in substantially increased costs for technical manual development meeting the microfilmability requirements. An approach to simplifying the conversion of continuous tone or halftone photographs to line art has been developed utilizing optical processing techniques. This paper describes the approach, including the development of an optimal enhancement filter, its implementation, and the results of a cost/benefit study.
One of the performance specifications for the Multiple Image Exploitation System (MIES) recently developed at ESL was that it perform a Fourier Transform on a 512x512 16-bit integer image in 10 seconds or less. This paper describes the hardware, software and firmware strategies chosen to achieve this speed, the problems encountered in their implementation, and an analysis of the components of the time finally achieved. There are also discussions of the generation and application of interactive filters.
A nonlinear mathematical model for the human visual system (HVS) is selected as a preprocessing stage for monochrome and color digital image compression. Rate distortion curves and derived power spectra are used to develop coding algorithms in the preprocessed "perceptual space." Black and white image compressions down to 0.1 bit per pixel are obtained. In addition, color images are compressed to 1 bit per pixel (1/3 bit per pixel per color) with less than 1% mean square error and no visible degradations. Minor distortions are incurred with compressions down to 1/4 bit per pixel (1/12 bit per pixel per color). Thus, it appears that the perceptual power spectrum coding technique "puts" the noise where one cannot see it. The result is bit rates up to an order of magnitude lower than those previously obtained with comparable quality.
Many ideas have recently been proposed on the Adaptive DPCM (ADPCM) scheme. The most promising of these is a scheme of adaptively changing the prediction function. The authors have studied the prediction function characteristics of television (TV) signals and developed a new Adaptive DPCM algorithm called Predictor Adaptive DPCM, which is based on the spatial correlation characteristics of the prediction function. In our algorithm, the prediction function at the coding sample position is decided by calculating and comparing the prediction errors given by various prediction functions at the neighboring, already coded, sample position (called the reference sample position). This algorithm achieves a 16%-40% reduction in the transmission bit rate compared to conventional DPCM schemes. This paper describes the spatial correlation characteristics of the prediction function, an algorithm for adaptively changing prediction functions, selection of the optimum reference sample position and selection of the optimum prediction function combination. Computer simulation results of the new algorithm using moving videotelephone signals, and comparisons between our algorithm and conventional DPCM schemes are also described.
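A minimal sketch of the predictor-adaptive idea: the predictor for the current sample is the one that would have performed best at an already-coded reference sample, so the decoder can repeat the selection without side information. The candidate predictor set and the choice of the left neighbor as reference are assumptions here.

```python
import numpy as np

# Candidate prediction functions, each using already-coded neighbors:
# A = left, C = above, B = upper-left (planar predictor is A + C - B).
PREDICTORS = {
    "left":   lambda A, B, C: A,
    "above":  lambda A, B, C: C,
    "planar": lambda A, B, C: A + C - B,
}

def select_predictor(rec, y, x):
    """Pick the prediction function with the smallest error at the reference
    (already coded, left-neighbor) sample of the coding position (y, x).

    Assumes x >= 2 and y >= 1 so all neighbors exist; sketch only.
    """
    ry, rx = y, x - 1                          # reference sample position
    A, B, C = rec[ry, rx-1], rec[ry-1, rx-1], rec[ry-1, rx]
    errs = {name: abs(int(rec[ry, rx]) - int(p(A, B, C)))
            for name, p in PREDICTORS.items()}
    return min(errs, key=errs.get)             # decoder repeats this: no side information sent
```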
Application of image compression to a given transmission problem involves trade-offs among such factors as output quality, compression ratio and complexity of implementation. One specific case of interest for this discussion is "trunk-line" transmission, where a wide-band link is available and the overriding concern is preservation of image quality. In such a case complexity of implementation is of lesser concern, allowing the use of adaptive transform coding, but of sufficient concern that the adaptivity is constrained--coding bit maps are selected from a predetermined set of reasonable size, rather than being generated on a per-image basis (1). The question which arises is how to forward such data over a local link. Such a link will be of low or very low data rate; the coding and decoding must be as simple as possible; quality is of lesser importance, though only because the low transmission rate forces this; and it can be assumed that no modification of the trunk-line transmission can be entertained solely to support local retransmission. This paper describes a method of approach to this problem with a technique called bit truncation.
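A minimal sketch of bit truncation under the simple assumption that the local link keeps only the most significant bits of each already-coded coefficient word; the trunk-line bit maps themselves are untouched, and the word lengths below are hypothetical.

```python
def truncate_bits(coded_value, n_bits_kept, n_bits_orig):
    """Keep only the most significant bits of an already-coded coefficient
    word for retransmission over a low-rate local link (sketch only)."""
    shift = n_bits_orig - n_bits_kept
    return coded_value >> shift          # decoder left-shifts by `shift` to restore scale

print(truncate_bits(0b101101, 3, 6))     # -> 5 (0b101)
```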
Image transform coding is a technique whereby a sampled image is divided into blocks. A two-dimensional discrete transform of each block is taken, and the resulting transform coefficients are coded. Coding of the transform coefficients requires their quantization and, consequently, a model for the transform coefficient variances that is based in turn on a correlation model for the image blocks. In the proposed correlation model each block of image data is formed by an arbitrary left and right matrix multiplication of a stationary white matrix. One consequence of this correlation model is that the transform coefficient variances are product separable in row and column indexes. The product separable model for the transform coefficient variances forms the basis of a transform coding algorithm. The algorithm is described and tested on real sampled images.
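A minimal sketch of fitting the product-separable variance model var[i, j] = r[i] * c[j] from a set of coefficient blocks, using a rank-one SVD factorization as a stand-in for the paper's estimation procedure.

```python
import numpy as np

def separable_variances(coeff_blocks):
    """Fit var[i, j] = r[i] * c[j] to the sample variances of the transform
    coefficients across a list of equally sized coefficient blocks.

    The rank-one fit via the leading SVD component is an assumption made
    here for illustration; var_model = np.outer(r, c).
    """
    V = np.var(np.stack(coeff_blocks), axis=0)        # sample variance per coefficient
    U, s, Wt = np.linalg.svd(V)
    r = np.abs(U[:, 0]) * np.sqrt(s[0])               # row factor
    c = np.abs(Wt[0, :]) * np.sqrt(s[0])              # column factor
    return r, c
```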
Because of the interlaced television scan, the two fields that form an interlaced video frame are generated 1/60 of a second apart. If the two fields are compressed independently, the correlation between adjacent lines is unused. The transmission rate can be reduced by using a field memory to form an interlaced frame. Four test images were processed as fields and as interlaced frames, using both theoretical and experimental compression designs. For comparable mean-square error and subjective appearance, field compression requires about one-half bit per sample more than frame compression. However, the overall transmission rate -- the number of bits per image times the number of images per second -- is more meaningful than the number of bits per sample. When transform compression at low transmission rates merges the adjacent lines, frame compression becomes similar to field repeating, and the memory can be reduced.
Machines that have been developed to determine the topography of a region from stereo-pair photographs are subject to correlation errors resulting in incorrect elevation calculations. In this paper, the determination of the location of the peak of the correlation function of two similar arrays of picture elements is modelled as a parameter estimation problem. The Cramer-Rao lower bound is considered as a possible tool for predicting the magnitude of correlation errors. Preliminary simulation results are included.
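For reference, a minimal sketch of the quantity being estimated: the integer-pel location of the cross-correlation peak of two equally sized arrays. Sub-pel refinement and the noise model analyzed in the paper are omitted.

```python
import numpy as np

def correlation_peak(a, b):
    """Locate the peak of the circular cross-correlation of two equally
    sized 2-D arrays via the FFT (integer-pel estimate only)."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    corr = np.real(np.fft.ifft2(A * np.conj(B)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return peak                          # (row, col) shift estimate, modulo the array size
```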
Several imaging systems in laser fusion, e-beam fusion, and astronomy employ a Fresnel zone plate (FZP) as a coded aperture. The recent development of uniformly redundant arrays (URAs) promises several improvements in these systems. The first advantage of the URA is the fact that its modulation transfer function (MTF) is the same as the MTF of a single pinhole, whereas the MTF of an FZP is an erratic function including some small values. This means that if inverse filtering is used, the URA will be less susceptible to noise. If a correlation analysis is used, the FZP will produce artifacts whereas the URA has no artifacts (assuming planar sources). Both the FZP and URA originated from functions which had flat MTFs. However, practical considerations in the implementation of the FZP detracted from its good characteristics whereas the URA was only mildly affected. The second advantage of the URA is that it better utilizes the available detector area. With the FZP, the aperture should be smaller than the detector in order to maintain the full angular resolution corresponding to the thinnest zone. The cyclic nature of the URA allows one to mosaic it in such a way that the entire detector area collects photons from all of the sources within the field of view while maintaining the full angular resolution. If the FZP is as large as (or larger than) the detector, all parts of the source will not be resolved with the same resolution. The FZP does have some advantages, in particular its radial symmetry eases the alignment problem; it has a convenient optical decoding method; and higher diffraction order reconstruction might provide better spatial resolution.
When performing psychovisual experiments, the averaged results from a multiplicity of trials are a more reliable measure of image quality than the results from a single trial, e.g., those obtained from conventional three-bar measurements. This paper describes the use of multiple targets as a technique for image evaluation of an optical instrument such as a microscope. On the basis of experiments conducted at Perkin-Elmer, related to specialized photographic test targets, the Landolt Ring target array was chosen as a basic image evaluation probe signal. The targets, organized in groups having different sizes, modulations, and magnifications, are in circular arrays with the opening in the Landolt Rings randomly placed in four orientations. An observer is asked to identify the orientation of the opening. After the identification, his responses are scored against the known orientations, and the curves of probability of correct orientation are plotted as a function of target size. This function forms a quantitative measure for evaluating the performance of optical instruments. The theoretical analysis portion of this paper is concerned with the development of a mathematical model by which optical instrument performance may be ranked. By means of a series of psychovisual experiments, Human Factors Research, Incorporated, has independently determined curves for probability of correct orientation for a number of observers. There is generally good agreement between empirical data and model predictions, approximately 90% for both unaided eye viewing and microscope viewing of test target arrays.