TDI image sensors for advanced reconnaissance cameras preferably have pixels that are both small in size and high in performance. High performance refers especially to primary responsivity (quantum efficiency integrated over the optical bandwidth of the camera), charge handling capacity, and MTF, but other properties such as dark current, readout speed, and noise are also very important. These image sensors are typically assembled into focal planes having 6,000-15,000 pixels in the electronic scan direction. This paper describes one such array and reports on its performance in strobe-light bench tests. The pixel size is 15 µm square. In order to store a maximum amount of charge in such a small pixel, we have used 4-phase CCD registers and implanted channel stops to achieve a 40% charge storage area, as has been done before in an array with larger pixels. The gate dielectric structure has an equivalent oxide thickness of only 600 Å, providing a pixel charge storage capacity of 1.5×10⁶ electrons with 10 V clocks. (Whereas a television sensor can give a high-quality signal for commercial television from pixels that saturate near 1.5×10⁵ electrons, reconnaissance cameras often require significantly higher capacity.) In order to achieve high responsivity, the optimum thicknesses of polysilicon, silicon nitride, and silicon dioxide were determined by optical modeling for a 5500 K blackbody illumination spectrum and for the spectral range of wavelengths greater than 500 nm. Then, using a wafer stepper with very high alignment accuracy, it was possible to achieve a CCD gate structure with overlaps of the order of 0.5 µm, so that 85% of the pixel area has only one layer of polysilicon. With proper control of the thickness and quality of each layer, it was possible to achieve quantum efficiencies that averaged greater than 50% from 500 to 900 nm. This high responsivity is very close to the value predicted by the model; the predicted value is lower than that of older designs because of the new constraint on dielectric thickness. High MTF in the TDI direction was achieved even though most of the signal charge in this device is stored at the silicon-silicon dioxide surface. The Nyquist MTF for wavelengths up to 600 nm is greater than 57%. This high value is attributed to the good charge transfer efficiency that is possible with a buried channel structure even when charge is stored at the surface.
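As a rough consistency check on the quoted full-well figure, a parallel-plate estimate can be sketched from the numbers above; the assumption that two of the four phases collect charge at any instant, and the treatment of the 40% figure as the total storage area, are illustrative choices rather than details taken from the paper.

```python
# Rough full-well estimate for the 15 um, 4-phase, surface-storage pixel described
# above (parallel-plate approximation; phase usage and area accounting are assumptions).

EPS0 = 8.854e-12        # F/m, vacuum permittivity
EPS_OX = 3.9            # relative permittivity of SiO2 (equivalent-oxide model)
Q_E = 1.602e-19         # C, elementary charge

pixel_pitch = 15e-6     # m
storage_fraction = 0.40 # quoted charge-storage area fraction
collecting_phases = 2   # assumed: 2 of the 4 phases hold charge at any instant
t_ox = 600e-10          # m, 600 A equivalent oxide thickness
v_clock = 10.0          # V

area = pixel_pitch ** 2 * storage_fraction * collecting_phases / 4
c_gate = EPS0 * EPS_OX * area / t_ox       # gate capacitance, F
full_well = c_gate * v_clock / Q_E         # electrons

print(f"estimated full well: {full_well:.2e} electrons")   # ~1.6e6, near the reported 1.5e6
```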
The acquisition of range information allows real-world scenes to be manipulated in ways traditionally restricted to computer-generated images. This paper describes lighting, focus, and other changes applied to real scenes, discusses some of the implications of range sensing for efficient coding of image sequences, and outlines some new research into passive range-sensing cameras that use focus information to infer distance.
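As background for the focus-based ranging mentioned above, the thin-lens relation shows how a best-focus setting maps to an object distance; this is a generic illustration, not the specific method investigated in the paper, and the numerical values are made up.

```python
# Thin-lens relation 1/f = 1/u + 1/v, often invoked in depth-from-focus reasoning
# (generic illustration; focal length and image distance below are assumed values).

def object_distance(f_mm: float, v_mm: float) -> float:
    """Object distance u (mm) for a lens of focal length f focused at image distance v."""
    return f_mm * v_mm / (v_mm - f_mm)

print(object_distance(f_mm=50.0, v_mm=52.0))   # 1300.0 mm: this focus setting implies ~1.3 m
```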
This paper describes improvements to the triangulation range-sensing technique. The improvements greatly increase the range-data collection rate while simultaneously providing improved data integrity and a low-cost implementation. These benefits are achieved by using some unique abilities of the Charge Injection Device (CID) image sensor. A laser-based investigation system has been built around a personal computer and used to evaluate the effectiveness of the improvements. Range-data results are shown for the proposed application of printed-circuit board inspection.
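For orientation, the basic triangulation geometry can be sketched as follows; this is a simplified pinhole-camera illustration, not the CID-based arrangement of the paper, and the focal length, baseline, and spot displacement are assumed values.

```python
# Simplified active-triangulation geometry: a laser beam parallel to the camera's
# optical axis, offset by a baseline b, produces a spot whose image displacement x
# from the principal point encodes depth as Z = f * b / x (pinhole model).

def triangulated_depth(f_mm: float, baseline_mm: float, image_offset_mm: float) -> float:
    """Depth (mm) of the laser spot from its lateral displacement on the sensor."""
    if image_offset_mm <= 0.0:
        raise ValueError("spot displacement must be positive for a finite depth")
    return f_mm * baseline_mm / image_offset_mm

print(triangulated_depth(f_mm=25.0, baseline_mm=100.0, image_offset_mm=0.5))   # 5000.0 mm
```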
A test station was developed to evaluate the TFEL edge emitter array in various imaging applications. It consists of 400-dpi and 20-dpi resolution edge emitter array test samples, a sample mount that accepts both kinds of edge emitters, and an electronic drive system. The drive system provides variable drive frequency and excitation voltage, internal test patterns, and an option for external data input. As an example of its capabilities, electrophotographic printing samples are shown.
This paper presents a mathematical basis for establishing achievable performance levels for multisensor electronic vision systems. A random process model of the multisensor scene environment is developed. The concept of feature space and its importance in the context of this model is presented. A set of complexity metrics used to measure the difficulty of an electronic vision task in a given scene environment is developed and presented. These metrics are based on the feature space used for the electronic vision task and the a priori knowledge of scene truth. Several applications of complexity metrics to the analysis of electronic vision systems are proposed.
This paper presents a methodology for estimating the silhouette of a target from a time-sequence of forward-looking infrared (FLIR) imagery recorded at video rates. The major problems associated with this type of imagery are noise and frame-to-frame misregistration. We achieve registration and noise smoothing by a combination of spatial and temporal processing. Edge preserving smoothing is applied to individual frames prior to gray level thresholding. A new algorithm for finding the optimum threshold is presented. The resulting sequence of binary image frames is registered by cross-correlating the 1-D x and y projections of each frame. The computationally efficient 1-D registration technique is just as accurate as a much slower 2-D correlation method. Temporal smoothing is realized by using a binary median filter to process frame sequences. We show that binary temporal median filtering is comparable in performance with gray level median processing followed by thresholding. The obvious advantage of binary processing lies in its potential for real time processing.
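A minimal sketch of the 1-D projection registration step, assuming integer-pixel shifts and binary frames (the smoothing, thresholding, and median-filter stages described above are omitted, and the paper's exact implementation is not reproduced):

```python
import numpy as np

# Register a binary frame against a reference by cross-correlating 1-D projections.

def projection_shift(ref: np.ndarray, frame: np.ndarray):
    """Estimate the (row, column) shift of a binary frame relative to a reference."""
    def best_shift(p_ref, p_frm):
        # cross-correlate zero-mean projections; the lag of the peak is the shift
        corr = np.correlate(p_frm - p_frm.mean(), p_ref - p_ref.mean(), mode="full")
        return int(np.argmax(corr)) - (len(p_ref) - 1)

    dy = best_shift(ref.sum(axis=1), frame.sum(axis=1))   # y (row) projections
    dx = best_shift(ref.sum(axis=0), frame.sum(axis=0))   # x (column) projections
    return dy, dx

ref = np.zeros((64, 64))
ref[20:30, 25:40] = 1.0                                   # target silhouette in the reference frame
moved = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)      # same silhouette shifted by (+3, -2)
print(projection_shift(ref, moved))                       # -> (3, -2)
```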
Motivated by the correspondence problem in computer vision, we take a theoretical look at the topological invariants of images (i.e., invariants under arbitrary distortion). For monochrome images, these are the intensity contour portraits, and they have a simple structure: isolated points, nested circles (closed curves topologically equivalent to circles), and (topological) figure-eights. The isolated points are maxima and minima, while the figure-eights are contours through saddle points. The circles are contours through points with nonzero gradient. The nesting of the contours is represented as a binary tree structure. As the picture is varied, whether by adding noise, blurring, or some other smooth change, the tree structure of the picture usually stays constant but sometimes undergoes abrupt changes. These "bifurcations" are classified into only two basic types: saddle-node and saddle connection. The results are based on genericity assumptions, i.e., they are true for almost all pictures, thus excluding pathological cases. This leads to the conclusion that, based on topological constraints alone, correspondence within a smooth object can be determined at best only at a sparse set of points. If we allow color images, the situation changes, and if there is enough color variation, correspondence can be determined for all points on a "generic" smooth object.
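The contour portrait described above is organized by the image's critical points; the small numerical sketch below (not from the paper) classifies grid points of a smooth synthetic surface as extrema or saddles using a discrete Hessian test, with the surface chosen so that its critical points fall exactly on grid points.

```python
import numpy as np

# Classify critical points of a smooth synthetic image: extrema vs. saddles.

x = np.linspace(0.0, 2.0 * np.pi, 129)
f = np.sin(x)[:, None] * np.sin(x)[None, :]

fy, fx = np.gradient(f)             # first derivatives (per grid step)
fyy, fyx = np.gradient(fy)
fxy, fxx = np.gradient(fx)          # second derivatives (per grid step squared)

critical = np.hypot(fx, fy) < 1e-3  # near-zero gradient
critical[[0, -1], :] = False        # ignore the image border
critical[:, [0, -1]] = False

det_h = fxx * fyy - fxy * fyx
saddles = critical & (det_h < 0)
extrema = critical & (det_h > 0)
print(int(saddles.sum()), "saddle,", int(extrema.sum()), "extrema")   # 1 saddle, 4 extrema
```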
In February 1987, after $3 million and five years of development by the Jet Propulsion Laboratory and the Perkin-Elmer Company, the U.S. National Archives and Records Administration installed a charge coupled device (CCD) camera and image processing system to monitor the physical condition of the U.S. Constitution. The system, called the "Charters Monitoring System," is capable of comparing the pattern of one image with another to facilitate the detection of any changes that may be occurring on the document. To assure that differences between images are not due to variations in illumination, CCD response, or position of the camera, the Charters Monitoring System must precisely control illumination (including its intensity, alignment, and stability), the CCD (including temperature, dark current, charge transfer control, gain correction, and integration time), camera focus, resolution, registration, and the consistency of analog-to-digital conversion. The system offers examples of precision and accuracy problems in electronic imaging and of the importance of repeatable image capture to quality control of electronically produced images.
Our previous work on image processing with our real-time, TV-rate image processing system (RAPAC) has been concerned with road vehicle tracking through junctions. This paper describes various techniques which have been used for classification of road traffic while the vehicles are in motion. The techniques vary from simple size determination corrected for changing vehicle perspective, to outline analyses of moving targets. As in all this work, emphasis has been placed on algorithmic techniques which allow realisation compatible with the speed and pipeline architecture requirements of RAPAC.
If a real-time TV image processor has the task of separating moving objects from the static background in an image, then this task can be performed most simply by image subtraction, provided the static background is known a priori. If the background varies in an unpredictable manner and is continually being obscured by the moving objects, the task of maintaining an image that is representative of the static background becomes difficult. This paper addresses the problems associated with the generation and maintenance of a background image within a real-time TV image processor under varying ambient conditions.
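One common recursive scheme for this kind of background maintenance, given here as an illustrative sketch rather than the paper's algorithm, blends each new frame into the stored background except where a large frame-to-background difference suggests a moving object; the blending rate and motion threshold are assumed values.

```python
import numpy as np

# Selective running-average background update for a sequence of float images (0..255).

def update_background(background: np.ndarray,
                      frame: np.ndarray,
                      alpha: float = 0.05,
                      motion_threshold: float = 25.0) -> np.ndarray:
    """Return an updated background estimate, frozen where motion is suspected."""
    moving = np.abs(frame - background) > motion_threshold
    blended = (1.0 - alpha) * background + alpha * frame
    return np.where(moving, background, blended)

# usage: start from the first frame and fold in subsequent ones
frames = [np.full((4, 4), v, dtype=float) for v in (100.0, 102.0, 180.0, 104.0)]
bg = frames[0]
for frm in frames[1:]:
    bg = update_background(bg, frm)
print(bg.mean())   # stays near 100 despite the transient bright frame
```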
There are a number of applications that necessitate the transmission of real-time images over a narrow-bandwidth channel. These applications must utilize effective data compression algorithms in order to achieve their goals. Scientists have struggled to produce quality video images over narrow channels using a number of techniques. Many approaches dissect images, exploiting both the temporal and spatial aspects of input sequences, and then encode the transformed sections in a convenient and efficient manner. The Pyramid Algorithm performs spatial spectral decomposition so that high-frequency information can be separated from its lower-frequency components. This spatial pyramid transform is integrated into a system which utilizes both spatial and temporal correlation of real-time video sequences; the result is a robust system configuration that meets the required compression. The best acceptable image, given a specific data rate, is achieved by adaptively selecting either a spatial or a temporal algorithm on a pixel-by-pixel basis. Aspects of moving sequences can be classified so as to choose the algorithm that minimizes visible disturbances for each particular class.
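A minimal sketch of the kind of spatial pyramid decomposition referred to above; the filter kernel, filter-subtract-decimate structure, and level count are illustrative choices, not the paper's specific transform.

```python
import numpy as np

# Decompose an image into per-scale detail bands plus a low-pass residual.

KERNEL = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # 1-D binomial low-pass

def blur(img: np.ndarray) -> np.ndarray:
    """Separable low-pass: filter rows, then columns (reflect padding)."""
    pad = len(KERNEL) // 2
    out = img
    for axis in (0, 1):
        padded = np.pad(out, [(pad, pad) if a == axis else (0, 0) for a in (0, 1)],
                        mode="reflect")
        out = np.apply_along_axis(lambda m: np.convolve(m, KERNEL, mode="valid"),
                                  axis, padded)
    return out

def spatial_pyramid(img: np.ndarray, levels: int = 3):
    """Return [detail_0, ..., detail_{levels-1}, low_pass_residual]."""
    pyramid, current = [], img.astype(float)
    for _ in range(levels):
        low = blur(current)
        pyramid.append(current - low)      # high-frequency detail at this scale
        current = low[::2, ::2]            # decimate the low-pass for the next level
    pyramid.append(current)
    return pyramid

bands = spatial_pyramid(np.random.rand(64, 64))
print([b.shape for b in bands])            # [(64, 64), (32, 32), (16, 16), (8, 8)]
```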
Multiple resolution analysis of images is a current trend in computer vision. In most cases, only spatial resolution has been considered. However, image resolution has an additional aspect: grey-level, or color, resolution. Color resolution has traditionally been considered in the area of computer graphics. By defining a suitable measure for the comparison of images, changes in spatial resolution can be treated with the same tools as changes in color resolution. A grey-tone image, for example, can be compared with a half-tone image having only two colors (black and white) but higher spatial resolution. An important application is in pyramids, one of the most commonly used multiple (spatial) resolution schemes, where this approach provides a tool to change the color resolution as well. Increasing color resolution while reducing spatial resolution, so as to retain more image detail and prevent aliasing, is one example of finding optimal combinations of spatial and color resolution reduction to best fit an application.
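The exchange between spatial and grey-level resolution can be illustrated with a small sketch (this is not the paper's comparison measure): a 1-bit halftone on the original grid approximates the grey original once both are reduced to a common, coarser spatial resolution.

```python
import numpy as np

# Compare a grey ramp with its 1-bit random-threshold halftone at reduced resolution.

rng = np.random.default_rng(0)
gray = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))         # smooth grey ramp, values in [0, 1]
halftone = (gray > rng.random(gray.shape)).astype(float)   # two levels, same grid

def block_mean(img: np.ndarray, b: int) -> np.ndarray:
    """Average over non-overlapping b x b blocks (spatial resolution reduction)."""
    h, w = img.shape
    return img[:h - h % b, :w - w % b].reshape(h // b, b, w // b, b).mean(axis=(1, 3))

# compared at 1/8 the spatial resolution, the two renderings nearly agree
print(np.abs(block_mean(gray, 8) - block_mean(halftone, 8)).mean())   # small residual
```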
The need for a compression system to significantly reduce the size of high-resolution gray-scale and full-color image (picture) files, which can range from 100 Kbytes to over 3 Mbytes, stimulated an extensive research program that resulted in the development of new image compression systems. These new compression systems, based on well-established compression schemes, can reduce image file sizes by a factor of more than 20:1 with only minor detectable image degradation.
We describe a method for coding color stereo pair images for display on non-interlaced displays. The display frame buffer has a single frame memory and a single output lookup table. Left and right views are coded together as one picture, such that a pixel in the main frame memory is an index into two colormaps. Switching between left and right views is accomplished by changing the color lookup table during each vertical retrace. The color stereo pair can then be viewed by the use of a polarized display with passive glasses. Such a coding scheme compares favorably with systems that use alternate picture lines for left and right views, or with systems that divide the single bitmap in half.
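A sketch of the packing scheme this implies, with assumed bit allocations (4 bits per view in an 8-bit frame buffer), since the abstract does not specify them; the two lookup tables would be swapped by the display hardware at each vertical retrace.

```python
# Pack left/right palette indices into one frame-buffer byte and build two LUTs,
# one that decodes only the left view and one that decodes only the right view.

def pack_pixel(left_index: int, right_index: int) -> int:
    """Combine two 4-bit palette indices into one 8-bit frame-buffer value."""
    return (right_index & 0x0F) << 4 | (left_index & 0x0F)

def build_luts(palette16):
    """Return (left_lut, right_lut), each mapping all 256 packed values to a color."""
    left_lut = [palette16[v & 0x0F] for v in range(256)]
    right_lut = [palette16[(v >> 4) & 0x0F] for v in range(256)]
    return left_lut, right_lut

palette16 = [(i * 17, i * 17, i * 17) for i in range(16)]   # toy 16-grey palette
left_lut, right_lut = build_luts(palette16)
p = pack_pixel(left_index=3, right_index=12)
print(left_lut[p], right_lut[p])   # (51, 51, 51) for the left view, (204, 204, 204) for the right
```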
We describe and illustrate a useful optical/electronic imaging technique that employs real-time holography and the phenomenon of optical phase conjugation. The technique is developed as a method for coherent light microscopy in which the images produced show gradients in optical phase retardation. The technique can also be used to produce images which contain only the moving elements of a phase object. We illustrate both applications of the technique, the first with phase-gradient micrographs of common biological objects, and the second with images of a microscopic subject in which only elements characterized by intrinsic motion appear.
In this paper, we study the physical and psychophysical effects of digital sampling. In particular, we study how spatial sampling (the geometry of the pixel array) and luminance sampling (the distribution of gray levels) affect the image and its perception. Taking the Thin-Film Transistor Liquid Crystal Display (TFT/LCD) as our model, we have simulated four candidate pixel geometries on a color graphics display. We have made psychophysical measurements of visual detectability, reaction time, and rated quality as a function of pixel geometry and grayscale. Our major findings are that (1) the spatial geometry of the sampling array exerts a major influence on visual performance, (2) for suprathreshold judgments of image quality, asymptotic performance is reached with as few as 2-3 bits of grayscale, and (3) the effect of grayscale on performance depends on exposure duration.
Conventional image quality metrics in photography evolved in the context of so-called continuous-tone images. In electronic imaging there is now an ever-increasing need to develop metrics which apply to (output) images which are quantized, as from laser writers or digital halftone schemes. Fortunately, the most robust of the conventional metrics are based on, or compatible with, the concept of the quantized nature of light (input), and extending these to the case where both input and output are by definition discrete is usually straightforward. A survey is given of these metrics, with examples from recent work on laser-writer hardcopy.
This paper discusses an application of linear systems concepts to the image quality characterization of flat-panel display devices. A computational model of flat-panel pixel structure is discussed, along with several examples illustrating the use of the model in common flat-panel design issues. The results of this work indicate that linear systems analysis provides useful tools for the objective specification of flat-panel image quality.
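As one elementary instance of such a linear-systems calculation (not the paper's full pixel-structure model), the MTF contribution of a rectangular emitting aperture can be evaluated out to the sampling Nyquist frequency; the pitch and fill factor below are assumed values.

```python
import numpy as np

# MTF of a rectangular emitting aperture of width `a` inside a pixel of pitch `p`:
# |sinc(a * f)|, evaluated up to the Nyquist frequency 1 / (2p).

pitch_mm = 0.3            # assumed pixel pitch
aperture_mm = 0.24        # assumed emitting width (80% linear fill factor)

nyquist = 1.0 / (2.0 * pitch_mm)                   # cycles/mm
freqs = np.linspace(0.0, nyquist, 6)
mtf = np.abs(np.sinc(aperture_mm * freqs))         # np.sinc(x) = sin(pi*x)/(pi*x)
for f, m in zip(freqs, mtf):
    print(f"{f:5.2f} cy/mm  MTF {m:.3f}")
```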
The development of a product line of alphanumeric color displays involves the consideration of both economic and perceptual factors. In this paper we focus on the perceptual analysis of color monitors as an aid to the design process. Many image display systems can be evaluated through Just-Noticeable-Difference (JND) analysis. The virtue of this analysis is that it converts the objective parameters that describe the display into units that quantify their visual impact. An example application of this analysis is the determination of the number of phosphor triads needed for a color alphanumeric CRT display. If too few phosphor triads are used in the display, then the visibility of the characters is poor, whereas if too many are used, then the cost of the CRT becomes prohibitively high. By choosing the number of phosphor triads that just meets the visual system's requirements for visibility of the information, the display will meet its performance objective without incurring unnecessary cost.
Several techniques have been developed and successfully used to measure the imaging quality of cathode ray tubes but, unfortunately, there are no generally accepted techniques for measuring the imaging quality of discrete-element displays. This tutorial-style paper considers the historical and state-of-the-art aspects of both raster-scanned displays and discrete-element displays in an attempt to delineate the metrics used to specify the required static and dynamic characteristics of electro-optical displays. At the theoretical level as well as the practical level, there seem to be only a few metrics which are not applicable to both types of displays. Therefore, the basic recommendation is that a technology-independent methodology be pursued for specifying and measuring the imaging quality of electro-optical displays. The basis for such a methodology is proposed, using empirical, psychophysical, and direct techniques, or a combination thereof, for the metrics and measurements.
Automated measurement techniques have been developed to determine the resolution of any shadow-mask CRT-observer configuration. They are based on a linear systems approach which determines the brightness profile that an observer sees at a given distance from the CRT. From the profile, a modulation depth is calculated which is related to visual resolution. Given the limits for visual resolution, the results of the linear systems analysis and measurements can provide design parameters for available CRTs, including shadow-mask pitch, spot size, spatial frequency of displayed patterns, observer distance, and color.
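A simplified version of the modulation-depth computation, using a Gaussian spot profile and an ideal on/off line pattern as stand-ins; the actual measured brightness profiles and shadow-mask sampling effects are not modeled here, and the spot size and grating period are assumed values.

```python
import numpy as np

# Modulation depth (max-min)/(max+min) of an ideal line pattern blurred by a Gaussian spot.

def modulation_depth(period_mm: float, spot_sigma_mm: float, dx: float = 0.001) -> float:
    x = np.arange(0.0, 20.0 * period_mm, dx)
    pattern = np.floor(2.0 * x / period_mm) % 2          # ideal on/off line pattern
    t = np.arange(-5.0 * spot_sigma_mm, 5.0 * spot_sigma_mm, dx)
    spot = np.exp(-0.5 * (t / spot_sigma_mm) ** 2)       # Gaussian spot profile
    spot /= spot.sum()
    profile = np.convolve(pattern, spot, mode="same")    # brightness profile seen by the observer
    mid = profile[len(profile) // 4: -(len(profile) // 4)]   # ignore edge effects
    return float((mid.max() - mid.min()) / (mid.max() + mid.min()))

print(round(modulation_depth(period_mm=0.6, spot_sigma_mm=0.15), 3))   # larger spots lower the depth
```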
Contrast-detail analysis, widely exploited in medical image quality assessment, attempts to identify a limit to visual perceptual performance in the context of detectability of space-occupying lesions in a homogeneous background. This review of the technique includes i) a discussion of historical and contemporary luminance difference detection theories, ii) a critique of experimental methodologies, an examination of iii) inherent experimental precision and iv) analytical techniques, and v) a summation of merits and disadvantages of the approach.
It is common knowledge that the human engineering of displays is as important as their electro-optical or software engineering. To make human engineering a routine part of display design, we must have quantitative rules of thumb describing the performance of the human visual system. For colour, visual psychophysics is the scientific discipline that provides such measurements. Some aspects of colour psychophysics are well understood, mostly those concerned with very low-level processes in the visual system. This paper describes where they can be usefully applied in display design. It also discusses the measurements needed for human performance to play a greater role in display design across a wider range of tasks.
Digital devices for display or printing of images may be characterized by their ability to generate spots centered at points on a fixed spatial lattice. Each spot may be assigned one of a finite number of color values. For reasons of economy in hardware design and data representation, it is desirable to use as small a set of different color values as possible. We have investigated the implementation of algorithms for choosing this set of output colors in alternate color spaces chosen on a perceptual basis. These algorithms utilize quantizers that range in complexity from being separable, uniform, and image independent, to being non-separable, nonuniform, and image dependent. Color input and output devices each have associated with them a particular palette of colors. The location of these colors in standard color spaces is a function of the physics of the device, and can in general only be determined by direct measurement. Effective use of algorithms for printing and display requires careful calibration of both input and output devices to assure accurate transformation between color spaces.
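A minimal sketch of the simplest quantizer family mentioned above, separable, uniform, and image independent, applied after a power-law transfer as a crude stand-in for a perceptual space; the exponent and level count are assumptions, and this is not the calibrated transformation the paragraph calls for.

```python
import numpy as np

# Separable, uniform, image-independent color quantizer in a gamma-compressed space.

def uniform_quantize(rgb: np.ndarray, levels_per_channel: int = 8,
                     gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Map RGB values in [0, 1] onto a fixed palette of levels_per_channel**3 colors."""
    perceptual = rgb ** gamma                                     # compress toward lightness-like spacing
    codes = np.round(perceptual * (levels_per_channel - 1))       # uniform quantization per channel
    return (codes / (levels_per_channel - 1)) ** (1.0 / gamma)    # back to linear RGB

rng = np.random.default_rng(1)
img = rng.random((4, 4, 3))
quantized = uniform_quantize(img)
print(np.abs(img - quantized).max())   # worst-case per-channel error for this image
```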
The ideal color photograph is one that captures what we see. This implies that photographic film should record human color sensations. Ordinary photography responds to the amount of light coming from the world in front of the camera. There is a large variety of scenes that have a greater range of light intensities than can be reproduced on a print. These scenes can be printed by capturing the data from the world, calculating color sensations, and writing those color sensations onto a print.
GLHS, a generalized lightness, hue, and saturation color model, is a generalization of several color models that are often used in computer graphics [5]. It enables the realization of different models as special cases of the generalized model by specifying the values of two parameters (referred to as weights). A uniform color space is a color model in which distances between points adequately represent perceptual distances between the colors represented by those points. We discuss the search for a particular assignment of the weights in the GLHS model that will yield a special case that is as close to uniformity as possible.
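For orientation, the lightness component in this family of models is a weighted combination of the sorted RGB components; the specific weight settings shown below are the commonly cited special cases and are stated here as assumptions rather than taken from this paper.

```python
# GLHS-style lightness: a weighted sum of the maximum, middle, and minimum RGB
# components; the choice of weights selects a particular classical model.

def glhs_lightness(r: float, g: float, b: float,
                   w_max: float, w_mid: float, w_min: float) -> float:
    """Lightness as a weighted combination of the sorted RGB components."""
    lo, mid, hi = sorted((r, g, b))
    return w_max * hi + w_mid * mid + w_min * lo

color = (0.8, 0.4, 0.2)
print(glhs_lightness(*color, w_max=1.0, w_mid=0.0, w_min=0.0))       # HSV-style value: 0.8
print(glhs_lightness(*color, w_max=0.5, w_mid=0.0, w_min=0.5))       # HLS-style lightness: 0.5
print(glhs_lightness(*color, w_max=1/3, w_mid=1/3, w_min=1/3))       # mean-based lightness: ~0.467
```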
For the assessment of human psychophysical and perceptual response to displays, fast and accurate radiometric measurements of display devices are needed. This paper reviews design criteria for spectroradiometers to be used for real-time colour CRT characterization, describing a possible system based on a 512-element photodiode array, a high-throughput spectrograph, and a microcomputer.
Like other aspects of color vision, interactions between the visual pathways of the long-wave-sensitive (LWS) and the middle-wave-sensitive (MWS) photoreceptors can no longer be regarded only as functions of wavelength. The results of color matching, detection, discrimination and cancellation procedures may also vary with spatial and/or temporal properties of the stimulus. In some ways, the spatial and temporal stimulus dimensions have analogous effects on color perception. For example, the (opponent-color) responses to isoluminant, red/green, sine-wave stimuli are low-pass in both the spatial and temporal frequency domains, whereas the (luminous) responses to monochromatic or achromatic stimuli emphasize higher frequencies in both domains. But spatial patterns can have properties that temporal waveforms cannot, such as those involving orientation or symmetry. Thus the simplest model for spatial phase relations--a linear filter with zero phase-shift--is forbidden in the temporal case. This paper emphasizes the properties of 3-way (spatiotemporal-chromatic) interactions and discusses the role of separability in modelling these interactions.
The NTSC standard for color television codes the chrominance signals at a lower spatial resolution than it codes the luminance signal. These differential resolutions result in a smearing of the colors in the scene relative to the edges that define objects, but television viewers are rarely aware of this degradation of the image because the human visual system also codes chrominance (i.e., hue) at a lower spatial resolution than it codes luminance (i.e., edges). Given the resolution difference for chrominance and luminance edges, a model of visual perception must explain why human observers do not perceive the color of objects as flowing beyond the luminance edges of those objects. The internal chromatic aspects of objects, which may be determined at chrominance object boundaries, may be constrained by the perceived spatial luminance boundaries of those objects. In the experiments that we describe, a spatial chrominance and luminance boundary is prevented from moving on the viewer's retina by moving the image synchronously with eye movements. When the edge is stabilized on the retina, the appearance of the image depends upon the enclosing boundaries that are not stabilized on the retina. We have found that the appearance of the image, independently of the image's actual spatial energy distribution on the retina, affects the ability of the human visual system to process information. For example, the viewer's flicker sensitivity depends upon the perceived color of the image and not its actual spectral energy distribution. The same is true for the perceived color of a small spot imaged on the stabilized fields.
This study of luminance and chrominance contrast comes from a need to solve a display problem associated with visual clutter. Tactical display systems often call for high densities of background, vehicular track, and alphanumeric information. Luminance and chrominance contrast color-difference techniques offer an approach to decluttering information displays through the psychophysics of color difference. To apply a model of color perception space to light-emitting display devices, a measure of photometric luminance, Y, is employed in place of a psychophysical measure of achromatic surface lightness, L*. The CIE 1976 UCS coordinate system is used. A measure of color difference employing Y, u', v' is correlated with reading speed of three-, four-, and five-element character strings. The correlations between color difference, ΔE, and reading speed are high, with color difference providing a significant but lesser contribution to reading speed than character type and string length. Color difference is metered into image display generation to achieve visual declutter and improved information extraction efficiency as measured by reading speed. This study shows that luminance and chrominance contrast, in excess of that needed to achieve legibility and visual declutter, does not contribute further to performance improvement.
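For reference, the chromaticity part of such a measure comes from the CIE 1976 UCS coordinates; how the luminance term Y is folded into a single ΔE number varies between studies, so the root-sum-of-squares combination and the luminance scale factor below are assumptions, not this paper's metric.

```python
import math

# CIE 1976 UCS chromaticities plus a scaled luminance term combined into one difference value.

def uv_prime(X: float, Y: float, Z: float):
    """CIE 1976 UCS chromaticity coordinates u', v'."""
    d = X + 15.0 * Y + 3.0 * Z
    return 4.0 * X / d, 9.0 * Y / d

def color_difference(xyz1, xyz2, luminance_weight: float = 0.01) -> float:
    u1, v1 = uv_prime(*xyz1)
    u2, v2 = uv_prime(*xyz2)
    dY = luminance_weight * (xyz1[1] - xyz2[1])          # scaled luminance term (assumed weighting)
    return math.sqrt(dY ** 2 + (u1 - u2) ** 2 + (v1 - v2) ** 2)

# two display colors given as CIE XYZ (arbitrary example values)
print(round(color_difference((30.0, 32.0, 20.0), (25.0, 30.0, 40.0)), 4))
```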
The problem of efficient adaptive coding of color images for computer display is investigated. Several modifications of current algorithms are proposed that allow for the introduction of local image characteristics. Spatial, temporal and semantic features of images are used to generate error masks. These masks are used to bias the statistics of a histogram used by a vector quantizer.
Television systems that display moving images can be designed to have a temporal response, as a function of spatial frequency, that matches the performance of the visual system. Such designs can improve the performance of cameras, signal processing, signal transmission, computer graphic image generation, and image displays. The design analysis, based on psychophysical measurements in luminance and color, must include both conditions of fixation and of tracking of moving objects in the image by the viewer. Using this analysis, bandwidth-reduced systems can be designed which produce good perceived sharpness of moving objects with minimum image artifacts.