The concept of print quality is elusive: it depends on objective, measurable quantities such as contrast and graininess, but also on the subjective appreciation of potential observers. It is far from obvious that print quality (PQ) can be defined in terms of good or bad in a way that everyone will agree on, and the question of its measurement, with objective and subjective measures, remains open. In this Communication, we first propose a set of fundamental questions related to the definition and measurement of PQ. Specifically, we are interested in the definition of PQ in terms of quality concepts and criteria, in the minimal dimension of the PQ space, and in the functional relations that should be satisfied by PQ metrics. In the second part, we focus on the simpler case of print mottle and try to answer some of these questions. We show that wavelet transforms can be used to obtain a measure of PQ that correlates very well with the subjective evaluation of observers, and we use this measure to discuss the functional form of a print quality metric.
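As an illustration of the kind of wavelet-based measure discussed above, the following sketch computes a simple mottle index from the detail-band energies of a nominally uniform printed patch; the wavelet, number of levels, and band weights are placeholder assumptions, not the measure reported in the paper.

```python
# A minimal sketch of a wavelet-based mottle measure for a grayscale patch.
import numpy as np
import pywt

def mottle_index(patch, wavelet="db4", levels=4, weights=None):
    """Weighted energy of detail bands, as a proxy for print mottle."""
    coeffs = pywt.wavedec2(patch.astype(float), wavelet, level=levels)
    # coeffs[0] is the approximation; coeffs[1:] are (cH, cV, cD) per level,
    # ordered from the coarsest level to the finest.
    if weights is None:
        weights = np.linspace(1.0, 0.2, levels)  # emphasize coarse scales
    index = 0.0
    for w, (cH, cV, cD) in zip(weights, coeffs[1:]):
        band_energy = np.mean(cH**2) + np.mean(cV**2) + np.mean(cD**2)
        index += w * np.sqrt(band_energy)
    return index

# Example: the patch with a low-frequency disturbance scores higher than the
# flat patch carrying only the same white noise.
rng = np.random.default_rng(0)
flat = 128 + rng.normal(0, 2, (256, 256))
yy, xx = np.mgrid[0:256, 0:256]
mottled = flat + 4 * np.sin(2 * np.pi * xx / 64) * np.sin(2 * np.pi * yy / 80)
print(mottle_index(flat), mottle_index(mottled))
```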
Image evaluation schemes must fulfill both objective and subjective requirements. Objective image quality evaluation models are often preferred over subjective quality evaluation because of their speed and cost-effectiveness. However, the correlation between subjective and objective estimations is often poor. One key reason for this is that it is not known which image features subjects use when they evaluate image quality. We have studied subjective image quality evaluation in the case of image sharpness. We used the Interpretation-based Quality (IBQ) approach, which combines qualitative and quantitative methods to probe the observer's quality experience. Here we examine how naive subjects experienced and classified natural images whose sharpness was varied. Together, the psychometric and qualitative information obtained allows quantitative evaluation data to be correlated with the underlying subjective attribute sets. This offers guidelines to product designers and developers who are responsible for image quality. Combining these methods makes the end-user experience approachable and offers new ways to improve objective image quality evaluation schemes.
The aim of the study is to test both customer image quality rating (subjective image quality) and physical measurement of user behavior (eye movement tracking) to find customer satisfaction differences between imaging technologies. The methodological aim is to find out whether eye movements can be used quantitatively in image quality preference studies. In general, we want to map objective, physically measurable image quality to subjective evaluations and eye movement data. We conducted a series of image quality tests in which the test subjects evaluated image quality while we recorded their eye movements. Results show that eye movement parameters change consistently according to the instructions given to the user and according to physical image quality; for example, saccade duration increased with increasing blur. The results indicate that eye movement tracking could be used to differentiate the image quality evaluation strategies that users adopt. They also show that eye movements can help in mapping between technological and subjective image quality. Furthermore, these results give some empirical support to top-down processes in image quality perception and evaluation by showing differences between perceptual processes when the cognitive task varies.
This study investigated four image sharpness enhancement methods. Two methods applied standard sharpening filters (Sharpen and Sharpen More) in Photoshop, and the other two were based on adjusting the image power spectrum using the human visual contrast sensitivity function. A psychophysical experiment was conducted with 25 observers, the results of which are presented and discussed. Five conclusions are drawn from this experiment, concerning: (1) the performance of the sharpening methods; (2) image dependence; (3) the influence of two different colour spaces on sharpness manipulation; (4) the correlation between perceived image sharpness and image preference; and (5) the effect of image sharpness enhancement on the image power spectrum.
At high compression ratios, current lossy compression algorithms introduce distortions that are generally exploited by no-reference quality assessment. For JPEG-2000 compressed images, blurring and ringing effects are the principal impairments for a human observer. However, the human visual system does not carry out a systematic, local search for these impairments over the whole image; rather, it identifies some regions of interest for judging perceptual quality. In this paper, we propose to use both of these distortions (ringing and blurring effects), locally weighted by an importance map generated by a region-based attention model, to design a new reference-free quality metric for JPEG-2000 compressed images. For the blurring effect, the impairment measure depends on spatial information contained in the whole image, while for the ringing effect, only local information around strong edges is used. To predict the regions of the scene that potentially attract human attention, one stage of the proposed metric generates an importance map derived from the region-based attention model defined by Osberger et al. [1]. First, explicit regions are obtained by color image segmentation. The segmented image is then analyzed with respect to different factors known to influence human attention. The resulting importance map is finally used to locally weight each distortion measure. The predicted scores have been compared, on the one hand, to subjective scores and, on the other hand, to previous results based only on the artefact measurements. This comparative study demonstrates the efficiency of the proposed quality metric.
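A minimal sketch of the final pooling step described above, assuming the blur map, ringing map, and importance map have already been computed as same-sized arrays in [0, 1]; the pooling rule and the relative weight of the two artifacts are assumptions, not the paper's calibrated metric.

```python
# Importance-weighted pooling of two artifact maps into one quality score.
import numpy as np

def weighted_quality(blur_map, ringing_map, importance_map, alpha=0.5):
    """Combine per-pixel blur and ringing measures using a saliency map."""
    w = importance_map / (importance_map.sum() + 1e-12)
    blur = np.sum(w * blur_map)        # global blur, weighted by saliency
    ringing = np.sum(w * ringing_map)  # ringing near strong edges, weighted too
    distortion = alpha * blur + (1.0 - alpha) * ringing
    return 1.0 - distortion            # higher value = better predicted quality
```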
In this paper, we present a comparison of three subjective testing methods: the double stimulus continuous quality scale (DSCQS) method, the single stimulus continuous quality evaluation (SSCQE) method, and the absolute category rating (ACR) method. The DSCQS method was used to validate objective models in the VQEG Phase II FRTV test. The SSCQE method was chosen for the VQEG RRTV test, and the ACR method was chosen for the VQEG Multimedia test. Since a different subjective test method is used in each test, analyses of the three methods provide helpful information for understanding human perception of video quality.
Judgments of complex images differ from those of uniform color samples in several important respects. One such difference is that a complex image is formed of a large number of discrete color elements. Observer judgments are based not on assessment of each discrete element but of a much smaller number of salient features. The judgment process can be considered as the selection of such features followed by the judgment of particular quality attributes for these features. Modeling the judgment process thus requires a set of well-defined quality attributes together with a methodology for the selection of salient features and their relative importance. In this project, a method of selecting colors within an image was considered. A number of measurement locations within a complex image were selected, and the color of these locations was measured on a series of reproductions. The reproductions were judged by a panel of expert observers for their match to a proof, using a category scaling of several image quality attributes. By comparing the measured color differences with the visual judgments it was possible to determine which locations carried the greatest weight in the assessments. It was also possible to make a limited prediction of the visual judgments from the measurements.
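One way to estimate which measurement locations carry the greatest weight, in the spirit of the analysis above, is a simple regression of the panel ratings on the per-location color differences; the linear form and the least-squares fit are assumptions used only for illustration.

```python
# Relate per-location color differences to panel judgments: larger regression
# coefficients indicate locations that carry more weight in the assessments.
import numpy as np

def location_weights(delta_e, ratings):
    """delta_e: (reproductions x locations) matrix of measured color differences;
    ratings: one panel score per reproduction."""
    X = np.column_stack([np.ones(len(ratings)), np.asarray(delta_e, float)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(ratings, float), rcond=None)
    return coef[1:]  # per-location weights; the intercept is dropped
```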
Image quality assessment plays a major role in many image processing applications. Although much effort has been made in recent years towards the development of quantitative measures, the relevant literature does not include many papers that have produced accomplished results. Ideally, a useful measure should be easy to compute, independent of viewing distance, and able to quantify all types of image distortions. In this paper, we compare three full-reference full-color image quality measures (M-DFT, M-DWT, and M-DCT). Assume the size of a given image is n x n. The transform (DFT, DWT, or DCT) is applied to the luminance layer of the original and degraded images. The transform coefficients are then divided into four bands, and the following operations are performed for each band: (a) obtain the magnitudes Mo_i, i = 1, ..., (n x n)/4, of the original transform coefficients; (b) obtain the magnitudes Md_i, i = 1, ..., (n x n)/4, of the degraded transform coefficients; (c) compute the absolute differences |Mo_i - Md_i|, i = 1, ..., (n x n)/4; and (d) compute the standard deviation of the differences. Finally, the mean of the four standard deviations is taken to produce a single value representing the overall quality of the degraded image. In our experiments, we used five degradation types and five degradation levels. The three proposed full-reference measures outperform the peak signal-to-noise ratio (PSNR) and two state-of-the-art metrics, Q and MSSIM.
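The described measure is concrete enough to sketch directly; the example below implements the M-DCT variant. Splitting the coefficients into four quadrant bands is an assumption, since the banding scheme is not detailed here.

```python
# Sketch of the M-DCT measure: per-band standard deviation of the absolute
# differences of coefficient magnitudes, averaged over four bands.
import numpy as np
from scipy.fft import dctn

def m_dct(original, degraded):
    """Mean of the per-band std. dev. of |Mo - Md| for two luminance images."""
    co = np.abs(dctn(original.astype(float), norm="ortho"))
    cd = np.abs(dctn(degraded.astype(float), norm="ortho"))
    diff = np.abs(co - cd)
    n = diff.shape[0] // 2
    bands = [diff[:n, :n], diff[:n, n:], diff[n:, :n], diff[n:, n:]]
    return float(np.mean([band.std() for band in bands]))
```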
This study investigated the effect of the chromaticity and intensity of the ambient illumination on the adapted white point of a homogeneous image (i.e., the chromaticity that is perceived as achromatic) and on the optimal white point of natural images (i.e., the white point with the most preferred color rendering). It was found that both the adapted white point and the optimal white point shift towards the chromaticity of the ambient illumination. The effect of the illuminant color was approximately 2.5 times larger for the adapted white point than for the optimal white point. The intensity of the ambient illumination had no effect on the adapted white point or the optimal white point, except for images with face content. In agreement with previous studies, the optimal white point was found to depend strongly on image content. The results indicate that the optimal color rendering of natural images depends in a complex way on both image content and ambient illumination.
In this paper, image noise characterization based on digital camera RAW data is studied and three different imaging technologies are compared. The three digital cameras used are a Canon EOS D30 with a CMOS sensor, a Nikon D70 with a CCD sensor, and a Sigma SD10 with a Foveon X3 Pro 10M CMOS sensor. Due to their different imaging sensor constructions, these cameras have rather different noise characteristics. The applicability of different analysis methods to these different sensor types is also studied. A digital image has several different noise sources; separating them from each other, where possible, helps to improve image quality by reducing or even eliminating some noise components.
Partial differential equation (PDE) based, non-linear diffusion approaches are an effective way to denoise images. In this paper, the work is extended to include anisotropic diffusion, where the diffusivity is a tensor-valued function that can be adapted to the local edge orientation. This allows smoothing along edges, but not perpendicular to them. The diffusion tensor is a function of the differential structure of the evolving image itself; such feedback leads to nonlinear diffusion filters, which show improved performance in the presence of noise. The original anisotropic diffusion algorithm updates each point based on the four nearest-neighbor differences, and the progress of diffusion results in improved edges. In the proposed method, edges are better preserved because the diffusion is controlled by the gray-level differences of the diagonal neighbors in addition to the four nearest neighbors, using a coupled PDE formulation. The proposed algorithm gives excellent results for MRI, biomedical, and fingerprint images with noise.
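A minimal sketch of diffusion driven by both the four nearest and the four diagonal neighbor differences, in the spirit of the method above; the edge-stopping (conductance) function and step size follow standard Perona-Malik practice and are not necessarily the authors' exact coupled formulation.

```python
# Gray-level diffusion using 8-neighbor differences with an exponential
# edge-stopping function; diagonal neighbors are distance-weighted.
import numpy as np

def diffuse(img, iterations=20, kappa=30.0, step=0.125):
    u = img.astype(float).copy()
    # Neighbor shifts: N, S, E, W and the four diagonals.
    shifts = [(-1, 0), (1, 0), (0, 1), (0, -1),
              (-1, -1), (-1, 1), (1, -1), (1, 1)]
    for _ in range(iterations):
        update = np.zeros_like(u)
        for dy, dx in shifts:
            diff = np.roll(u, (dy, dx), axis=(0, 1)) - u
            dist2 = dy * dy + dx * dx            # diagonals count less
            g = np.exp(-(diff / kappa) ** 2)     # small near strong edges
            update += g * diff / dist2
        u += step * update
    return u
```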
X-ray film systems have long been widely used for the diagnosis of various diseases. In recent years, with the development of digital X-ray image capturing systems, many kinds of displays and recording systems for X-ray medical images have come into use, including inkjet printers, silver halide film, CRTs, and LCDs. In this paper, the image quality of X-ray images displayed on high-accuracy monochrome CRT and LCD monitors is analyzed and compared. Images recorded on dedicated film and coated paper by an inkjet printer, and by wet-type and dry-type photo printers using silver halide material, are also analyzed and compared. The modified Gan's method is introduced to calculate the MTF (modulation transfer function) from the knife-edge spread function (ESF). The results show that the MTFs of the inkjet image on transparency and the wet-type silver halide film image are fairly similar and exhibit good response compared with the inkjet image on coated paper and the dry-type silver halide film. It is also shown that the CRT has the worst response over the spatial frequency range. The MTF correlated well with the observer rating values, and we therefore consider the proposed method effective.
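The basic ESF-to-MTF computation referred to above can be sketched as follows; the windowing is a simplifying assumption, and the specifics of the modified Gan's method are not reproduced.

```python
# Derive an MTF from a measured edge spread function: differentiate the ESF
# to get the line spread function, then take the FFT magnitude.
import numpy as np

def mtf_from_esf(esf, sample_pitch_mm):
    """Return spatial frequencies (cycles/mm) and the normalized MTF."""
    lsf = np.gradient(esf.astype(float))   # ESF -> LSF
    lsf *= np.hanning(lsf.size)            # suppress truncation artifacts
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]           # normalize to the DC value
    freqs = np.fft.rfftfreq(lsf.size, d=sample_pitch_mm)
    return freqs, mtf
```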
We develop a comprehensive procedure for characterizing the modulation transfer function (MTF) of a digital printer. Especially designed test pages consisting of a series of patches, each with a different 1-D sinusoidal modulation, enable measurement of the dependence of the MTF on spatial frequency, bias point, modulation amplitude, spatial direction of modulation, and direction of modulation in the color space. Constant tone patches also yield the extreme and center color values for the input modulation. After calibrating the scanner specifically for the direction of modulation in the color space, we spatially project the scanned test patches in the direction orthogonal to the modulation to obtain a 1-D signal, and then project these sample points onto a line in the CIE L*a*b* color space between the extreme color values to obtain a perceptually relevant measure of the frequency response in a specific color direction. Appropriate normalization of the frequency response followed by compensation for the scanner MTF completes the procedure. For a specific inkjet printer using a dispersed-dot halftoning algorithm, we examine the impact of the above-mentioned parameters
on the printer MTF, and obtain results that are consistent with
the expected behavior of this combination of print mechanism and
halftoning algorithm.
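One step of the described procedure, projecting a scanned sinusoidal patch orthogonally to its modulation and reading off the modulation amplitude at the known test frequency, can be sketched as below; projection onto a CIE L*a*b* color direction and scanner-MTF compensation are omitted, and the orientation of the modulation is an assumption.

```python
# Estimate the printed modulation amplitude of one sinusoidal test patch.
import numpy as np

def modulation_depth(patch, cycles_in_patch):
    """Amplitude of the test frequency in a scanned patch whose modulation
    is assumed to vary along the columns."""
    signal = patch.astype(float).mean(axis=0)   # project across the modulation
    signal -= signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) / signal.size
    return 2.0 * spectrum[cycles_in_patch]      # amplitude at the test frequency
```

The printer frequency response at that frequency would then be the ratio of this output modulation to the input modulation, before normalization and scanner-MTF compensation.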
Calibration of scanners and cameras usually involves measuring the point spread function (PSF). When edge data is used to measure the PSF, the differentiation step amplifies the noise. To avoid this, a parametric fit of the functional form of the edge spread function (ESF) directly to the measured edge data is proposed. Experiments used to test this method show that the Cauchy functional form fits better than the Gaussian or other forms tried. The effect of using a functional form of the PSF that differs from the true PSF is explored by considering bilevel images formed by thresholding. The amount of mismatch seen can be related to the difference between the respective kurtosis factors.
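A sketch of the proposed idea: fit a Cauchy-form ESF directly to measured edge data with non-linear least squares, so the noisy differentiation step is avoided. The parameterization and starting values are assumptions.

```python
# Fit a Cauchy (Lorentzian) edge spread function to measured edge data.
import numpy as np
from scipy.optimize import curve_fit

def cauchy_esf(x, x0, scale, low, high):
    """ESF obtained by integrating a Cauchy line spread function."""
    return low + (high - low) * (0.5 + np.arctan((x - x0) / scale) / np.pi)

def fit_esf(x, measured):
    p0 = (x[measured.size // 2], 1.0, measured.min(), measured.max())
    params, _ = curve_fit(cauchy_esf, x, measured, p0=p0)
    return params  # x0, scale, low, high
```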
This paper compares multi-step algorithms for estimating banding parameters of a harmonic signature model. The algorithms are based on two different spectral measures: the power spectrum (PS) and the collapsed average (CA) of the generalized spectrum. The generalized spectrum has superior noise reduction properties and is applied to this problem for the first time. Monte Carlo simulations compare the estimation performance of profile (coherent) averaging and non-coherent spatial averaging for estimating banding parameters in grain noise. Results demonstrate that profile averaging has superior noise reduction properties, but is less flexible in applications with irregular banding patterns. The PS-based methods result in lower fundamental frequency estimation error and greater peak height stability for low SNR values, with coherent averaging being significantly superior to non-coherent averaging. The CA has the potential to simplify the detection of multiple simultaneous banding patterns because its peaks are related to intra-harmonic distances; however, good CA estimation performance requires sufficiently regular harmonic phase patterns so that the banding harmonics do not undergo reduction along with the noise. In addition to the simulations, the algorithms are applied to samples from inkjet and laser printers to demonstrate the ability of the harmonic signature model to separate banding from grain and other image artifacts. Good results on experimental data are demonstrated by visual inspection of examples in which banding and grain have been separated.
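The power-spectrum route with profile (coherent) averaging can be sketched as follows for locating a banding fundamental in a scanned flat-field print; the window and the assumed process direction are placeholders.

```python
# Locate the dominant banding frequency from a scanned flat-field print.
import numpy as np

def banding_fundamental(image, dpi):
    """Return the strongest periodic component (cycles/inch) of the profile
    obtained by averaging scanlines across the page."""
    profile = image.astype(float).mean(axis=1)   # profile (coherent) average
    profile -= profile.mean()
    spectrum = np.abs(np.fft.rfft(profile * np.hanning(profile.size))) ** 2
    freqs = np.fft.rfftfreq(profile.size, d=1.0 / dpi)
    peak = 1 + np.argmax(spectrum[1:])           # skip the DC bin
    return freqs[peak], spectrum[peak]
```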
Flatbed scanners have been adopted successfully for the measurement of microscopic image artifacts, such as granularity and mottle, in print samples because of their capability of providing full-color, high-resolution images. Accurate macroscopic color measurement relies on the use of colorimeters or spectrophotometers to provide a surrogate for human vision. The color response characteristics of flatbed scanners differ greatly from any standard colorimetric response, which limits the utility of a flatbed scanner as a macroscopic color measuring device. This metamerism constraint can be significantly relaxed if our objective is mainly to quantify the color variations within a printed page or between pages, where a small bias in measured colors can be tolerated as long as the color distributions relative to the individual mean values are similar. Two scenarios for converting color from the device RGB color space to a standardized color space such as CIELAB are studied in this paper, blind and semi-blind color transformation, depending on the availability of the black channel information. We show that both approaches offer satisfactory results in quantifying macroscopic color variation across pages, while the semi-blind color transformation also provides fairly accurate color prediction capability.
I am sure that everyone would agree that the standards that define colorimetric measurements (ISO 13655) and viewing conditions (ISO 3664) for graphic arts and photography, and ICC profile building (ISO 15076-1), must all be consistent with each other. More importantly, we would all agree that together they must be consistent with current industry practice and be technically sound. However, as we begin the process of revising the color measurement and viewing standards, we find that this is easier said than done. In each of these areas there seems to be a series of inconsistencies between what we do and what we say we should do. The real riddle is how to reconcile them with each other and with industry practice, while also keeping them technically sound. This paper looks at some of the issues, identifies the conflicts, and identifies some of the potential compromise positions. It also describes the steps that are being taken to develop revised versions of the key standards involved.
The ICC Workflow WG serves as the bridge between ICC color management technologies and the use of those technologies in real-world color production applications. ICC color management is applicable to, and is used in, a wide range of color systems, from highly specialized digital cinema color special effects to high-volume publication printing to home photography. The ICC Workflow WG works to align ICC technologies so that the color management needs of these diverse use-case systems are addressed in an open, platform-independent manner. This report provides a high-level summary of the ICC Workflow WG objectives and work to date, focusing on the ways in which workflow can affect image quality and color system performance. The 'ICC Workflow Primitives' and 'ICC Workflow Patterns and Dimensions' workflow models are covered in some detail. Consider the question of how much of the dissatisfaction with color management today is the result of 'the wrong color transformation at the wrong time' and 'I can't get to the right conversion at the right point in my work process'. Put another way, consider how image quality through a workflow can be negatively affected when the coordination and control level of the color management system is not sufficient.
A small number of general visual attributes have been recognized as essential in describing image quality. These include micro-uniformity, macro-uniformity, colour rendition, text and line quality, gloss, sharpness, and spatial or temporal adjacency attributes. The multiple-part International Standard discussed here was initiated by the INCITS W1 committee on the standardization of office equipment to address the need for unambiguously documented procedures and methods, widely applicable across the multiple printing technologies employed in office applications, for the appearance-based evaluation of these visually significant attributes of printed image quality. The resulting proposed International Standard, for which ISO/IEC WD 19751-1 presents an overview and an outline of the overall procedure and common methods, is based on a proposal predicated on the idea that image quality could be described by a small set of broad-based attributes. Five ad hoc teams were established (now six, since a sharpness team is in the process of being formed) to generate standards for one or more of these image quality attributes. Updates on the colour rendition, text and line quality, and gloss attributes are provided.
The ISO WD 19751 macro-uniformity team works towards the development of a standard for the evaluation of the perceptual image quality of color printers. The team specifically addresses the types of defects that fall into the category of macro-uniformity, such as streaks, bands, and mottle. The first phase of the standard will establish a visual quality ruler for macro-uniformity, using images with simulated macro-uniformity defects. A set of distinct, parameterized defects has been defined, as well as a method of combining the defects into a single image. The quality ruler will be a set of prints with increasing magnitude of the defect pattern. The paper discusses the creation and printing of the simulated images, as well as initial tests of subjective evaluations using the ruler.
The standard ISO 12233 method for the measurement of spatial frequency response (SFR) for digital still cameras and scanners is based on the analysis of slanted-edge image features. The procedure, which applies a form of edge-gradient analysis to an estimated edge spread function, requires the automated finding of an edge feature in a digital test image. A frequently considered (e.g., in ISO 13660 and 19751) attribute of printed text and graphics is edge raggedness. There are various metrics aimed at evaluating the discontinuous imaging of nominally continuous features, but they generally rely on an estimate of the spatial deviation of edge or line boundaries, the tangential edge profile (TEP). In this paper, we describe how slanted-edge analysis can be adapted to the routine evaluation of line and edge quality. After locating and analyzing the edge feature, the TEP is estimated, and the estimation of its RMS deviation and edge spectrum is described.
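A minimal sketch of estimating the tangential edge profile and its RMS deviation from a scanned slanted edge; the threshold-crossing edge locator and linear sub-pixel interpolation are simplifications of the edge-gradient analysis described above.

```python
# Edge raggedness from a slanted-edge scan: locate the edge crossing on each
# scanline, remove the fitted straight edge, report the RMS residual (TEP).
import numpy as np

def edge_raggedness(edge_image, threshold=None):
    """RMS deviation (in pixels) of the edge boundary from a straight line."""
    img = edge_image.astype(float)
    if threshold is None:
        threshold = 0.5 * (img.min() + img.max())
    positions = []
    for row in img:
        crossings = np.where(np.diff(np.sign(row - threshold)) != 0)[0]
        if crossings.size:
            i = crossings[0]
            # linear sub-pixel interpolation of the crossing position
            frac = (threshold - row[i]) / (row[i + 1] - row[i])
            positions.append(i + frac)
    positions = np.asarray(positions)
    rows = np.arange(positions.size)
    slope, intercept = np.polyfit(rows, positions, 1)  # ideal straight edge
    tep = positions - (slope * rows + intercept)       # tangential edge profile
    return float(np.sqrt(np.mean(tep ** 2)))
```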
Recently, two ISO electronic imaging standards aimed at digital capture device dynamic range metrology have been issued. Both ISO 15739 (digital still camera noise) and ISO 21550 (film scanner dynamic range) adopt a signal-to-noise ratio (SNR) criterion for specifying dynamic range. To compare systems with differing mean-signal transfer, or opto-electronic conversion functions (OECF), in a robust way, an incremental SNR (SNRi) is used. The exposure levels that correspond to threshold SNR values are used as endpoints to determine the measured dynamic range. While these thresholds were developed through committee consensus with generic device applications in mind, the methodology of these standards is flexible enough to accommodate different application requirements; this can be done by setting the SNR thresholds according to particular signal-detection requirements. We will show how dynamic range metrology, as defined in the above standards, can be interpreted in terms of statistical hypothesis testing and confidence interval methods for mean signal values. We provide an interpretation of dynamic range that can be related to particular applications based on the contributing influences of variance, confidence intervals, and sample size. In particular, we introduce the role of the spatial-correlation statistics of both signal and noise sources, not covered in previous discussions of these ISO standards; these can be interpreted in terms of the signal's spatial frequency spectrum and the noise power spectrum (NPS), respectively. It is this frequency aspect of dynamic range evaluation that may well influence future standards. We maintain that this is important when comparing systems with different sampling settings, since the above noise statistics are currently computed on a per-pixel basis.
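The SNR-threshold view of dynamic range can be sketched as follows for a tabulated OECF and noise curve; the threshold value, the incremental-SNR approximation, and the endpoint selection are illustrative assumptions rather than the ISO procedures themselves.

```python
# Dynamic range bounded by an incremental-SNR threshold, given a tabulated
# OECF (mean signal vs. log exposure) and the per-level noise standard deviation.
import numpy as np

def dynamic_range(log_exposure, mean_signal, noise_std, snr_threshold=1.0):
    """Return the dynamic range in stops between the SNRi threshold crossings."""
    # incremental SNR proxy: local OECF slope divided by the noise
    slope = np.gradient(mean_signal, log_exposure)
    snri = slope / np.maximum(noise_std, 1e-12)
    usable = log_exposure[snri >= snr_threshold]
    if usable.size < 2:
        return 0.0
    return (usable.max() - usable.min()) / np.log10(2.0)  # decades -> stops
```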
System Image Quality Characterization and Modeling I
For more than thirty years, imaging scientists have constructed metrics to predict psychovisually perceived image quality. Such metrics are based on a set of objectively measurable basis functions such as the noise power spectrum (NPS), the modulation transfer function (MTF), and the characteristic curves of tone and color reproduction. Although these basis functions constitute a set of primitives that fully describe an imaging system from the standpoint of information theory, we found that in practical imaging systems the basis functions themselves are determined by system-specific primitives, i.e., technology parameters. In the example of a printer, MTF and NPS are largely determined by dot structure; in addition, MTF is determined by color registration, and NPS by streaking and banding. Since any given imaging system is only a single representative of a class of more or less identical systems, the family of imaging systems and the single system are not described by a unique set of image primitives. For an image produced by a given imaging system, the set of image primitives describing that particular image will be a single instantiation of the underlying statistical distribution of those primitives. If we knew precisely the set of imaging primitives that describe a given image, we could predict its image quality; since only the distributions are known, we can only predict the distribution of image quality for a given image as produced by the larger class of 'identical systems'. We will demonstrate the combinatorial effect of the underlying statistical variations in the image primitives on the objectively measured image quality of a population of printers, as well as on the perceived image quality of a set of test images. We will also discuss the choice of test image sets and the impact of scene content on the distribution of perceived image quality.
In a companion paper, we discuss the impact of statistical variability on perceived image quality. Early in a development program, systems may not be capable of rendering images suitable for quality testing; this does not diminish the program's need to estimate the perceived quality of the imaging system. During the development of imaging systems, simulations are extremely effective for demonstrating the visual impact of design choices, allowing the development process to prioritize these choices and management to understand their risks and benefits. Where the simulation mirrors the mechanisms of image formation, it not only improves the simulation but also informs the understanding of the image formation process. Clearly, the simulation process requires display or printing devices whose quality does not limit the simulation. We will present a generalized methodology which, when used with common profile-making and color management tools, provides simulations of both source and destination devices. The device to be simulated is modeled by its response to a fixed set of input stimuli: in the case of a digital still camera (DSC), the reflection spectra of a fixed set of color patches (e.g., the MacBeth DCC), and in the case of a printer, the set of image RGBs. We will demonstrate this methodology with examples of various print media systems.
The quality of the prints produced by an inkjet printer is highly dependent on the characteristics of the dots produced by the inkjet pens. While some literature discusses metrics for the objective evaluation of print quality, few of the efforts have combined automated quality tests with subjective assessment. We develop an algorithm for analyzing printed dots and study the effects of the dot characteristics on the perceived print alignment. We establish the perceptual preferences of human observers via a set of psychophysical experiments.
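A minimal sketch of basic dot analysis on a scanned, thresholded dot pattern, reporting per-dot centroids and areas; the fixed threshold and the assumption that ink is darker than paper are simplifications, not the paper's algorithm.

```python
# Segment printed dots and report their centroids and areas.
import numpy as np
from scipy import ndimage

def analyze_dots(gray, threshold=128):
    """Return (centroids, areas) for each connected dot in a grayscale scan."""
    dots = gray < threshold                 # ink assumed darker than paper
    labels, count = ndimage.label(dots)
    index = range(1, count + 1)
    centroids = ndimage.center_of_mass(dots, labels, index)
    areas = ndimage.sum(dots, labels, index)
    return np.asarray(centroids), np.asarray(areas)
```

Dot-placement error, and hence perceived alignment, could then be studied by comparing the measured centroids against the nominal dot grid.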
System Image Quality Characterization and Modeling II
Several digital film restoration techniques have emerged during the last decade and have become more and more automated, but restoration evaluation still remains a rarely tackled issue. In the sphere of cinema, image quality is judged visually: experts and technicians judge and determine the quality of the film images during the calibration (post-production) process. As a consequence, the quality of a movie is also estimated subjectively by experts in the field of digital film restoration. On the other hand, objective quality metrics do not necessarily correlate well with perceived quality. Moreover, some measures assume that there exists a reference in the form of an "original" to compare to, which prevents their use in the digital restoration field, where often there is no reference. That is why subjective evaluation has been the most used and most efficient approach up to now. But subjective assessment is expensive and time-consuming, and hence does not meet economic requirements. After presenting the various defects that can affect cinematographic material and the field of digital film restoration, we present in this paper the issues of image quality evaluation in digital film restoration and suggest some reference-free objective measures.
The authors developed a software-based real-time IPTV monitoring system based on the reduced-reference framework and evaluated the proposed system. One of the quality issues of the IPTV service is picture quality degradation caused by packet loss. The proposed system precisely estimates the PSNR of the corrupted received picture by extracting and comparing image features from the transmitter and receiver sides. Computer simulations show that PSNR estimation with a 0.945 correlation coefficient is possible using the proposed system at a data channel bitrate of 36 kbps.
A novel image quality evaluation method based on a combination of rigorous grating diffraction theory and the ray-optics method is proposed. It is applied to the design optimization and tolerance analysis of optical imaging systems employing diffractive optical elements (DOEs). The evaluation method can predict the quality and resolution of the image formed on the image sensor plane through the optical imaging system. In particular, we can simulate the effect of the diffraction efficiencies of the DOE in the camera lens module, which is very effective for predicting differences in color sense and MTF performance. Using this method, we can effectively determine the fabrication tolerances of diffractive and refractive optical elements, such as variations in the profile thickness and shoulder of the DOE, as well as conventional parameters such as decenter and tilt in optical-surface alignments. A DOE-based 2M-resolution camera lens module designed by the optimization process based on the proposed image quality evaluation method shows an MTF improvement of approximately 15% compared with a design without such optimization.
In this study we investigate the visibility and annoyance of simulated defective sub-pixels in a liquid crystal display (LCD). The stimulus was a rectangular image containing one centered object with a gray surround and a single defective pixel. The surround was either uniform gray or a gray-level texture. The target was a simulated discolored pixel with one defective sub-pixel (green, red, or blue) and two normally functioning sub-pixels. On each trial, it was presented at a random position. Subjects were asked to indicate whether they saw a defective pixel and, if so, where it was located and how annoying it was. For uniform surrounds, our results show that detection probability falls slowly for green, faster for red, and fastest for blue as background luminance increases. When detection probability is plotted against luminance contrast, green defective pixels are still the most detectable, then red, then blue. The mean annoyance value falls faster than the detection probability as background luminance increases, but the trends are the same. A textured surround greatly reduces the detection probability of all defective pixels; still, green and red are more detectable than blue. With the textured surround, the mean annoyance tends to remain high even when the detection probability is quite low. For both types of surrounds, the probability of detection is lowest for targets in the bottom region of the image.
As digital imagers continue to increase in size and pixel density, the detection of faults in the field becomes critical to delivering high quality output. Traditional schemes for defect detection utilize specialized hardware at the time of manufacture and are impractical for use in the field, while previously proposed software-based approaches tend to lead to quality-degrading false positive diagnoses. This paper presents an algorithm that utilizes statistical information extracted from a sequence of normally captured images to identify the location and type of defective pixels. Building on previous research, this algorithm utilizes data local to each pixel and Bayesian statistics to more accurately infer the likelihood of each defect, which successfully improves the detection time. Several defect types are considered, including pixels with one-half of the typical sensitivity and permanently stuck pixels. Monte Carlo simulations have shown that for defect densities of up to 0.5%, 50 ordinary images are sufficient to accurately identify all faults without falsely diagnosing good pixels as faulty. Testing also indicates that the algorithm can be extended to higher resolution imagers and to those with noisy stuck pixels, with only minimal cost to performance.
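A sketch of the Bayesian idea described above for a single pixel and a single defect type (a permanently stuck pixel): each ordinary capture updates the posterior probability of the defect by comparing the pixel to an estimate from its neighborhood. The Gaussian likelihood model and its parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Accumulate evidence for a stuck pixel across ordinary captures with Bayes' rule.
import numpy as np

def update_defect_posterior(prior, pixel_value, neighborhood_estimate,
                            sigma=5.0, stuck_value=0.0):
    """One Bayesian update of P(defective) for a single pixel."""
    # likelihood of the observation if the pixel behaves normally
    p_good = np.exp(-0.5 * ((pixel_value - neighborhood_estimate) / sigma) ** 2)
    # likelihood if the pixel is permanently stuck at a fixed value
    p_stuck = np.exp(-0.5 * ((pixel_value - stuck_value) / sigma) ** 2)
    numerator = p_stuck * prior
    return numerator / (numerator + p_good * (1.0 - prior) + 1e-12)

# Usage: start from a small prior and fold in each new capture.
posterior = 0.001
for value, estimate in [(0.0, 118.0), (1.0, 96.0), (0.0, 140.0)]:
    posterior = update_defect_posterior(posterior, value, estimate)
print(posterior)  # rises toward 1 as the pixel keeps disagreeing with its neighbors
```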
With the development of broadband networks, video communication services such as videophone, video distribution, and IPTV are becoming common. In order to provide these services appropriately, we must manage them based on subjective video quality, in addition to designing the network system based on it. Currently, subjective quality assessment is the main method used to quantify video quality. However, it is time-consuming and expensive, so we need an objective quality assessment technology that can effectively estimate video quality from video characteristics. Video degradation can be categorized into two types: spatial and temporal. Objective quality assessment methods for spatial degradation have been studied extensively, but methods for temporal degradation have hardly been examined, even though it occurs frequently due to network impairments and has a large impact on subjective quality. In this paper, we propose an objective quality assessment method for temporal degradation. Our approach is to aggregate multiple freeze distortions into an equivalent freeze distortion and then derive the objective video quality from that equivalent distortion. Specifically, our method takes the total length of all freeze distortions in a video sequence as the length of the equivalent single freeze distortion. In addition, we propose a refinement using the perceptual characteristics of short freeze distortions. We verified that our method can estimate the objective video quality well within the deviation of the subjective video quality scores.
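The aggregation idea can be sketched as follows: all freeze events in a sequence are collapsed to one equivalent freeze whose length is the (weighted) total of the individual freezes, and that length is mapped to a quality score. The down-weighting of short freezes and the mapping constants are placeholders, not the fitted model from the paper.

```python
# Collapse multiple freeze events into an equivalent single freeze and map
# its length to a 1-5 quality score.
import numpy as np

def equivalent_freeze_quality(freeze_durations_s, short_freeze_s=0.2,
                              short_weight=0.5, scale=2.0):
    durations = np.asarray(freeze_durations_s, dtype=float)
    # short freezes are perceived as less annoying; down-weight them
    weights = np.where(durations < short_freeze_s, short_weight, 1.0)
    equivalent_length = float(np.sum(weights * durations))
    return 1.0 + 4.0 * np.exp(-equivalent_length / scale)  # 5 = no freezing

print(equivalent_freeze_quality([0.1, 0.1, 1.5]))
```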
Digital film restoration and special effects compositing require more and more automatic procedures for movie re-graining, since missing or inhomogeneous grain decreases perceived quality. For the purpose of grain synthesis, an existing texture synthesis algorithm has been evaluated and optimized. We show that this algorithm can produce synthetic grain that is perceptually similar to a given grain template, has high spatial and temporal variation, and can be applied to multi-spectral images. Furthermore, a re-graining application framework is proposed, which synthesizes artificial grain based on an input grain template and composites it with the original image content. Owing to its modular approach, this framework supports manual as well as automatic re-graining applications. Two example applications are presented: one for re-graining an entire movie and one for fully automatic re-graining of image regions produced by restoration algorithms. The low computational cost of the proposed algorithms allows their application in industrial-grade software.
Traditional image quality assessments are mostly based on error analysis, where the errors stem only from the absolute differences of pixel values or transform coefficients between the two compared images. Taking the human visual system (HVS) into consideration, this paper proposes a quality assessment based on textural structure and normalized noise, SNPSNR. The time-frequency property of the wavelet transform is used to represent the textural structure of images, and the structural noise is computed as the difference between wavelet transform coefficients emphasized by the textural structure. The noise on each level, i.e., each channel, is weighted according to the HVS. Due to the energy distribution property of the wavelet transform, the noise quantities on the different transform levels differ greatly and are not proportional to the perceptual influence they cause; we therefore normalize the structural noise on the different levels by normalizing the coefficients on each level. SNPSNR is computed in the PSNR form, and the resulting values are fitted to Differential Mean Opinion Scores (DMOS) using a logistic function. SNPSNR performs better than MSSIM, HVSNR, and PSNR.
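A sketch of a PSNR-form score built from level-normalized wavelet coefficient differences, in the spirit of SNPSNR; the per-level weights standing in for the HVS channel weighting and the normalization rule are assumptions, not the published definition.

```python
# PSNR-form score from weighted, normalized wavelet-domain differences.
import numpy as np
import pywt

def snpsnr_like(reference, distorted, wavelet="db2", levels=3,
                weights=(0.2, 0.3, 0.5)):
    ref = pywt.wavedec2(reference.astype(float), wavelet, level=levels)
    dst = pywt.wavedec2(distorted.astype(float), wavelet, level=levels)
    mse = 0.0
    for w, rb, db in zip(weights, ref[1:], dst[1:]):
        for r, d in zip(rb, db):                 # the H, V, D detail bands
            scale = np.abs(r).max() + 1e-12      # normalize each band's range
            mse += w * np.mean(((r - d) / scale) ** 2)
    return 10.0 * np.log10(1.0 / (mse + 1e-12))  # PSNR-form score
```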
Quality is an essential factor in multimedia communication, especially in compression and adaptation. Quality metrics can be divided into three categories: within-modality quality, cross-modality quality, and multi-modality quality. Most research has so far focused on within-modality quality. Moreover, quality is normally considered only from the perceptual perspective. In practice, content may be drastically adapted, or even converted to another modality; in this case, quality should be considered from the semantic perspective as well. In this work, we investigate multi-modality quality from the semantic perspective. To model semantic quality, we apply the concept of the "conceptual graph", which consists of semantic nodes and the relations between them. As a typical multi-modality example, we focus on an audiovisual streaming service. Specifically, we evaluate the amount of information conveyed by audiovisual content where both the video and audio channels may be strongly degraded, or the audio may even be converted to text. In the experiments, we also consider a perceptual quality model of audiovisual content, so as to see the difference from the semantic quality model.
The staring imaging technique is one of the main research directions in the field of opto-electronic imaging, and the analysis of the imaging performance of staring imaging systems is therefore a key issue in this research. That performance is characterized by parameters such as the MTF, SNR, MRTD, and distortion, among which the MTF is one of the most important for evaluating both the detector and the system. In this paper, we report a thorough analysis of the characteristics of the MTF in the spatial and frequency domains. This Fourier transform based analysis was performed on the dynamic imaging characteristics of a staring imaging system. Furthermore, extensive experiments were carried out on the measurement of the MTF of a visible CCD and an IRFPA, and the results were used to analyze the performance of the staring imaging system.
Field sequential color display technology uses the temporal properties of the human visual system to build a full-color frame by the time-sequential additive synthesis of a given number of sub-frames, e.g. the standard red, green, and blue primaries, or up to six primaries as in the latest improvements made with color wheels. Because field sequential color displays can exhibit a disturbing visual artifact called the rainbow effect or color breakup effect, we set up a psychophysical experiment to evaluate, in a simple way, the visual comfort of a standard observer as a function of the color wheel technology used and of several significant parameters that must be taken into account when considering this effect. The study is an attempt to better understand the perception of the rainbow effect and ways to reduce it. It also provides data, results, and discussion about color wheels and their latest improvements.
In this contribution, we present an objective, reference-free assessment method for evaluating the degradation of video quality induced by a reduction of the temporal resolution. The jerkiness perceived by a human observer is assessed by feeding a multilayer neural network with the statistical distributions of kinematics data (speed, acceleration, and jerk of objects on the image plane) evaluated over a video shot. To identify the neural network (architecture and parameters) that best fits human behavior, a subjective experiment was performed. Validation of the model on the test set indicates a good match between the Mean Opinion Score (MOS) and the jerkiness indicator computed by the neural network.
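A sketch of the model structure described above: histograms of speed, acceleration, and jerk magnitudes over a shot are concatenated into a feature vector and fed to a small multilayer network regressing the opinion score. The bin counts, layer size, and the synthetic training data below are placeholders standing in for the subjective experiment data.

```python
# Kinematics-distribution features feeding a small MLP regressor.
import numpy as np
from sklearn.neural_network import MLPRegressor

def kinematics_features(speed, accel, jerk, bins=8):
    """Concatenate normalized histograms of the three kinematics quantities."""
    hists = [np.histogram(x, bins=bins, density=True)[0]
             for x in (speed, accel, jerk)]
    return np.concatenate(hists)

# Placeholder data: one feature vector per shot, one MOS per shot.
rng = np.random.default_rng(1)
X = np.stack([kinematics_features(rng.rayleigh(2, 500),
                                  rng.normal(0, 1, 500),
                                  rng.normal(0, 3, 500)) for _ in range(40)])
y = rng.uniform(1, 5, 40)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X, y)
```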