This PDF file contains the front matter associated with SPIE Proceedings Volume 9015, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Color vision deficiency (CVD) is the inability, or limited ability, to recognize colors and discriminate between them. A person with this condition perceives a narrower range of colors than a person with normal color vision. In this study we concentrate on recoloring digital images so that users with CVD, especially dichromats, perceive more details in the recolored images than in the originals. During this color transformation, the goal is to keep the overall contrast of the image constant while adjusting the colors that might cause confusion for the CVD user. In this method, the RGB values at each pixel are first converted into HSV values and, based on pre-defined rules, the problematic colors are adjusted into colors that are perceived better by the user. Comparing a dichromatic simulation of the original image with the same simulation of the recolored image clearly shows that our method eliminates much of the confusion for the user and conveys more details. Moreover, an online questionnaire was created, and a group of 39 CVD users confirmed that the transformed images allow them to perceive more information than the original images.
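As a rough illustration of the kind of rule-based HSV adjustment described above (the actual rules are not given in the abstract, so the hue range and shift below are purely illustrative assumptions), a per-pixel recoloring sketch in Python might look like this:

# Minimal sketch of rule-based recoloring for dichromats.
# The hue range and hue shift are illustrative assumptions,
# not the rules used in the paper.
import colorsys
import numpy as np

def recolor_for_dichromat(rgb_image, hue_lo=0.0, hue_hi=0.12, hue_shift=0.45):
    """rgb_image: float array in [0, 1] of shape (H, W, 3)."""
    out = np.empty_like(rgb_image)
    h_img, w_img, _ = rgb_image.shape
    for y in range(h_img):
        for x in range(w_img):
            r, g, b = rgb_image[y, x]
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            # Assumed rule: push saturated reddish hues toward blue so
            # they stay distinguishable from greens for a dichromat.
            if hue_lo <= h <= hue_hi and s > 0.2:
                h = (h + hue_shift) % 1.0
            out[y, x] = colorsys.hsv_to_rgb(h, s, v)
    return out

A standard dichromacy simulation would then be applied to both the original and the recolored image to compare the conveyed detail, as the abstract describes.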
We describe a method for analyzing omnidirectional color signals in natural scenes. A multiband imaging system with six spectral channels is used to capture high-resolution omnidirectional images at three locations on campus. The spectral distributions of the color signals are recovered from the captured six-band images using the Wiener estimator. The spectral compositions of the omnidirectional color signals are then investigated by PCA of each set of color signals acquired at the three locations in different seasons and at different times of day. Three principal components are extracted from the three sets of omnidirectional images, and the corresponding principal-component curves are invariant under seasonal and temporal changes. Moreover, we determine unified principal components of the color signals across all locations, which allows high data compression of the omnidirectional images. The reliability of the proposed analysis method is confirmed using various experimental data.
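The PCA step can be sketched as follows; the Wiener spectral recovery itself is omitted, and the data layout (spectra as rows, wavelength samples as columns) is our assumption rather than something stated in the abstract:

# PCA of a set of recovered color-signal spectra (rows = samples,
# columns = wavelength bins). A sketch, not the authors' pipeline.
import numpy as np

def principal_components(spectra, n_components=3):
    mean = spectra.mean(axis=0)
    centered = spectra - mean
    # SVD of the centered data gives the principal directions in vt.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    explained = (s ** 2) / np.sum(s ** 2)
    return mean, vt[:n_components], explained[:n_components]

def reconstruct(spectra, mean, components):
    # Projecting onto a few components and reconstructing illustrates
    # the data compression mentioned in the abstract.
    weights = (spectra - mean) @ components.T
    return mean + weights @ components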
This paper proposes realistic fetal skin color processing for ultrasound volume rendering using a 2D color map and a tone mapping function (TMF). The contributions of this paper are a 2D color map generated from a gamut model of skin color and a TMF that depends on the lighting position. First, the gamut model of fetal skin color is calculated from the color distribution of baby images, and the 2D color map is created from this gamut model for tone mapping during ray casting; for the translucent effect, a second 2D color map with inverted lightness is generated. Second, to enhance the contrast of the rendered images, the luminance, color, and tone-curve TMF parameters are modulated by a 2D Gaussian function that depends on the lighting position. The experimental results demonstrate that the proposed method achieves more realistic skin color reproduction than the conventional method.
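As a schematic of the lighting-dependent modulation described above, a 2D Gaussian weight centered at the projected light position can be used to blend each TMF parameter between a base and a boosted value across the image plane; the function and parameter names below are illustrative assumptions, not the paper's:

# Schematic lighting-dependent 2D Gaussian weight used to modulate
# tone-mapping parameters across the image plane. Names and the
# blending rule are illustrative assumptions.
import numpy as np

def gaussian_weight(height, width, light_xy, sigma):
    ys, xs = np.mgrid[0:height, 0:width]
    lx, ly = light_xy
    d2 = (xs - lx) ** 2 + (ys - ly) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def modulate_parameter(base_value, boosted_value, weight):
    """Blend a TMF parameter between its base and boosted value."""
    return base_value + (boosted_value - base_value) * weight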
Skin colors are important for a broad range of imaging applications to assure quality and naturalness. We discuss the impact of various metadata on skin colors in images, i.e., how the presence of a metadata attribute influences the expected skin color distribution for a given image. For this purpose we employ a statistical framework to automatically build color models from image datasets crawled from the web. We assess both technical and semantic metadata and show that semantic metadata has a more significant impact. This suggests that semantic metadata holds important cues for the processing of skin colors. Further, we demonstrate that the refined skin color models from our automatic framework improve the accuracy of skin detection.
Material Colors: Joint Session with Conferences 9015 and 9018
A method is proposed in which the k-means clustering technique is applied to segment a microscale single-color halftone image into three components: solid ink, ink/paper mixed area, and unprinted paper. The method has been evaluated using impact (offset) and non-impact (electrophotography) single-color prints halftoned by amplitude modulation (AM) and frequency modulation (FM) techniques. The print samples also include a range of paper substrates. The colors of the segmented regions have been analyzed in the CIELAB color space to reveal the variations, in particular those present in the mixed regions. The statistics of the intensity distributions in the segmented areas have been utilized to derive expressions that can be used to calculate simple thresholds. Furthermore, the segmented results have been employed to study dot gain in comparison with the traditional estimation technique using the Murray-Davies formula. The performance of halftone reflectance prediction by the spectral Murray-Davies model is reported using both estimated and measured parameters. Finally, a general idea is proposed to expand the classical Murray-Davies model based on experimental observations. Hence, the present study primarily presents the outcome of experimental efforts to characterize halftone print-media interactions with respect to color prediction models. Currently, most regression-based color prediction models rely on mathematical optimization to estimate their parameters using the measured average reflectance of an area that is large compared to the dot size. While this general approach has been accepted as a useful tool, experimental investigations can enhance understanding of the physical processes and facilitate exploration of new modeling strategies. Furthermore, the reported findings may help reduce the number of samples that must be printed and measured in the process of multichannel printer characterization and calibration.
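For reference, the classical Murray-Davies relation mentioned above predicts halftone reflectance as an area-weighted mixture of solid ink and unprinted paper, and inverting it yields the effective dot area used for dot gain estimation; in the spectral case the inversion can be done in a least-squares sense over wavelengths. The sketch below states both directions (variable names are ours):

# Classical (spectral) Murray-Davies model, as referenced in the abstract.
# a_eff is the effective fractional dot area; R_* are reflectances
# (scalars or per-wavelength numpy arrays).
import numpy as np

def murray_davies_reflectance(a_eff, R_ink, R_paper):
    """Predict halftone reflectance from the effective dot area."""
    return a_eff * R_ink + (1.0 - a_eff) * R_paper

def effective_dot_area(R_halftone, R_ink, R_paper):
    """Invert Murray-Davies to estimate the effective dot area
    (least-squares over wavelengths in the spectral case)."""
    num = np.sum((R_paper - R_halftone) * (R_paper - R_ink))
    den = np.sum((R_paper - R_ink) ** 2)
    return num / den

def dot_gain(R_halftone, R_ink, R_paper, nominal_area):
    """Total dot gain = effective area minus nominal area."""
    return effective_dot_area(R_halftone, R_ink, R_paper) - nominal_area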
This research investigates multi-channel inkjet printing methods that deviate from standard colour management workflows by drawing on art-historical processes, including the construction of colour in old master works, to reproduce specific colour pigment mixes in print. This is approached by incorporating artist colour mixing principles from traditional art-making processes through direct n-channel printing and multiple-pass printing. By specifying which ink colourants are employed in print, and by mixing colour through layering, we can mimic the effects of the traditional processes. These printing methods also generate colour through mixtures that may not be employed or achievable by the printer driver. The objective of this research is to explore colour mixing and layering techniques for inkjet reproductions of original artworks that maintain subtle colour transitions in dark shadow regions. While these colours are lost in traditional inkjet reproduction, using direct n-channel editing to reproduce a painted original with high dynamic range allows us to improve colour variation in the shadow regions.
In printing, halftoning algorithms are applied in order to reproduce a continuous-tone image on a binary printing system. The image is transformed into a bitmap composed of dots varying in size and/or frequency. However, this causes the sparse dots found in light shades of cyan (C) and magenta (M) to appear undesirably noticeable against the white substrate. The solution is to apply light cyan (Lc) and light magenta (Lm) inks in those regions. In order to predict the color of CMYLcLm prints, we make use of the fact that Lc and Lm have spectral characteristics similar to those of C and M, respectively. The goal of this paper is to present a model that characterizes a five-channel CMYLcLm printing system using a three-channel color prediction model, in which the ink combinations Lc+C and Lm+M are treated as new compound inks. This characterization is based on our previous three-channel CMY color prediction model, which is capable of predicting both colorimetric tri-stimulus values and spectral reflectance. The drawback of the proposed model is its requirement for a large number of training samples. Strategies are proposed to reduce this number, which results in larger but still acceptable color differences, as expected.
The tone value increase in halftone printing, commonly referred to as dot gain, actually encompasses two fundamentally different phenomena. Physical dot gain refers to the fact that the size of the printed halftone dots differs from their nominal size, and is related to the printing process. Optical dot gain originates from light scattering inside the substrate, causing light exchanges between different chromatic areas. Because of their different intrinsic natures, physical and optical dot gain need to be treated separately. In this study, we characterize and compare the dot gain properties of offset prints on coated and uncoated paper, using AM and first- and second-generation FM halftoning. Spectral measurements are used to compute the total dot gain. Microscopic images are used to separate the physical and optical dot gain, to study ink spreading and ink penetration, and to compute the Modulation Transfer Function (MTF) of the different substrates. The experimental results show that the physical dot gain depends on ink penetration and ink spreading properties. Microscopic images of the prints reveal that the ink penetrates into the pores and cavities of the uncoated paper, resulting in inhomogeneous dot shapes. For the coated paper, the ink spreads on top of the surface, giving a more homogeneous dot shape but also covering a larger area, and hence a larger physical dot gain. The experimental results further show that the total dot gain is larger for the uncoated paper because of its larger optical dot gain. The effect of optical dot gain depends on the lateral light scattering within the substrate, the size of the halftone dots, and the halftone dot shape, especially the dot perimeter.
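One simple way to express the separation described above: the total dot gain follows from the spectrally estimated effective dot area (e.g., via the Murray-Davies inversion sketched earlier), the physical dot gain from the ink coverage measured in the microscope image, and the optical dot gain is the remainder. This bookkeeping is our schematic reading, not necessarily the paper's exact procedure:

# Schematic separation of dot gain components (an assumed bookkeeping,
# not the authors' exact procedure).
def separate_dot_gain(a_effective, a_physical, a_nominal):
    """a_effective: effective area from spectral measurement,
    a_physical: ink coverage measured in the microscope image,
    a_nominal: nominal (digital) dot area. All fractions in [0, 1]."""
    total_gain = a_effective - a_nominal
    physical_gain = a_physical - a_nominal
    optical_gain = total_gain - physical_gain   # = a_effective - a_physical
    return total_gain, physical_gain, optical_gain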
In this paper, we present a novel approach that treats tone mapping as gamut mapping in a high-dynamic-range (HDR) color space. High- and low-dynamic-range (LDR) images as well as device gamut boundaries can be represented simultaneously within such a color space. This enables a unified transformation of the HDR image into the gamut of an output device (called HDR gamut mapping in this paper). An additional aim of this paper is to investigate the suitability of a specific HDR color space to serve as a working color space for the proposed HDR gamut mapping. For the HDR gamut mapping, we use a recent approach that iteratively minimizes an image-difference metric subject to the constraint that the image remains within the gamut. A psychophysical experiment on an HDR display shows that the standard reproduction workflow of two subsequent transformations – tone mapping followed by gamut mapping – may be improved by HDR gamut mapping.
Applications based on luminance processing often face the problem of recovering the original chrominance in the output color image. A common approach to reconstructing a color image from the luminance output is to preserve the original hue and saturation. However, this approach often produces an overly colorful image, which is undesirable. We develop a color preservation method that not only retains the ratios of the input tri-chromatic values but also adjusts the output chroma in an appropriate way. Linearizing the output luminance is the key idea behind this method. In addition, a lightness-difference metric and a colorfulness-difference metric are proposed to evaluate the performance of color preservation methods. The results show that the proposed method performs consistently better than existing approaches.
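The baseline hue- and saturation-preserving reconstruction that the paper builds on can be written as a per-pixel scaling of the input tri-chromatic values by the ratio of output to input luminance; the sketch below shows only this baseline, not the chroma adjustment proposed in the paper:

# Baseline ratio-preserving color reconstruction: scale the input RGB
# by the luminance ratio so the RGB ratios (hue/saturation) are kept.
# The paper's additional chroma adjustment is not included.
import numpy as np

def reconstruct_color(rgb_in, lum_in, lum_out, eps=1e-6):
    """rgb_in: (H, W, 3) linear RGB; lum_in, lum_out: (H, W) luminance
    before and after the luminance-only processing."""
    ratio = lum_out / np.maximum(lum_in, eps)
    return rgb_in * ratio[..., np.newaxis]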
A new EOTF based on human perception, called PQ (Perceptual Quantizer), was proposed in a previous work (SMPTE Mot. Imag. J. 2013, 122:52-59), and its performance was evaluated for a wide range of luminance levels and encoding bit-depth values. This paper extends that previous work to include the color aspects of PQ signal encoding. The efficiency of the PQ encoding and its bit-depth requirements were evaluated and compared for the standard color gamut of Rec. 709 (sRGB) and the wide color gamuts of Rec. 2020, P3, and ACES, for a variety of signal representations such as RGB, YCbCr, and XYZ. For each potential local gray level in a selected color space, 26 color samples were simulated by deviating one quantization step from the original color in each signal dimension. The quantization step sizes were simulated based on the PQ and gamma curves for different bit-depth values and luminance ranges for each of the color gamuts and signal representations. Color differences between the gray field and the simulated color samples were computed using the CIE DE2000 color difference equation. The maximum color difference (quantization error) was used as a metric to evaluate the performance of the corresponding EOTF curve. Extended color gamuts were found to require more bits to maintain a low quantization error, whereas an extended dynamic range required fewer additional bits to do so. With respect to visual detection thresholds, the minimum bit-depth required by the PQ and gamma encodings is evaluated and compared through visual experiments.
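For context, the PQ curve referenced above is the SMPTE ST 2084 transfer function, whose inverse EOTF maps absolute luminance (0 to 10,000 cd/m²) to a normalized code value. The constants below are the published ST 2084 values; the quantization-error experiment itself is not reproduced here:

# SMPTE ST 2084 (PQ) transfer function.
import numpy as np

M1 = 2610 / 16384          # 0.1593017578125
M2 = 2523 / 4096 * 128     # 78.84375
C1 = 3424 / 4096           # 0.8359375
C2 = 2413 / 4096 * 32      # 18.8515625
C3 = 2392 / 4096 * 32      # 18.6875

def pq_inverse_eotf(luminance_cd_m2):
    """Absolute luminance (cd/m^2) -> normalized PQ signal in [0, 1]."""
    y = np.clip(np.asarray(luminance_cd_m2, dtype=float) / 10000.0, 0.0, 1.0)
    return ((C1 + C2 * y ** M1) / (1.0 + C3 * y ** M1)) ** M2

def pq_eotf(signal):
    """Normalized PQ signal in [0, 1] -> absolute luminance (cd/m^2)."""
    e = np.asarray(signal, dtype=float) ** (1.0 / M2)
    y = np.maximum(e - C1, 0.0) / (C2 - C3 * e)
    return 10000.0 * y ** (1.0 / M1)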
Color image processing algorithms are first developed using a high-level mathematical modeling language. Current integrated development environments offer libraries of intrinsic functions, which on one hand enable faster development, but on the other hand hide the use of fundamental operations. The latter have to be detailed for an efficient hardware and/or software physical implementation. Based on the experience accumulated while implementing a segmentation algorithm, this paper outlines a design-for-implementation methodology comprising a development flow and associated guidelines. The application of this methodology to four segmentation algorithm steps produced measured results with 2-D correlation coefficients (CORR2) better than 0.99, peak signal-to-noise ratio (PSNR) better than 70 dB, and structural similarity index (SSIM) better than 0.98 for the majority of test cases.
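The fidelity metrics quoted above can be computed as follows for a reference image and the output of a physical implementation; CORR2 and PSNR are shown directly, and SSIM can be taken from scikit-image (an assumed dependency, not mentioned in the paper):

# 2-D correlation coefficient and PSNR between a reference image and
# the output of a physical implementation.
import numpy as np

def corr2(a, b):
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# from skimage.metrics import structural_similarity  # for SSIM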
Histogram equalization is one of the best-known methods for contrast enhancement. However, conventional contrast enhancement methods based on histogram equalization still exhibit problems such as a washed-out appearance and gradation artifacts. To overcome these drawbacks, we propose a novel dynamic histogram equalization method based on gray-level labeling. The main contribution of the proposed method is to expand the dynamic range of each sub-histogram up to the entire dynamic range of the input image while preserving the intensity order of adjacent pixels. The proposed method first decomposes the image histogram into a number of sub-histograms based on gray-level labeling. The full dynamic range of input gray levels is assigned to each sub-histogram, and each transform function is calculated based on the bi-histogram equalization method. Finally, the contrast-enhanced pixel value is the weighted average of the results from the transform functions. Experimental results show that the proposed method produces better contrast-enhanced images than several histogram-equalization-based methods, without introducing the side effects mentioned above.
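To make the mechanism concrete, the sketch below shows the classical global histogram equalization step that the proposed method refines; the gray-level labeling, sub-histogram decomposition, and weighted averaging of transform functions described above are not reproduced:

# Classical global histogram equalization (the baseline that the
# proposed sub-histogram method refines; not the paper's algorithm).
import numpy as np

def histogram_equalize(gray, levels=256):
    """gray: 2-D uint8 array; returns the equalized image."""
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)  # transform function
    return lut[gray]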
This paper addresses indexing lecture videos for semantic search of learning material. We present a comparative study of the DCT, grayscale, and marginal color spaces using the classical k-means technique. For stable segmentation, we propose an automatic threshold based on moment preservation. We discuss the suitability of each space for different images, then focus on educational video frames, which are not predominantly colorful, and identify the best transformation for segmenting lecture video frames. We also present a technique to localize the slide region based on heuristics.
Digital copiers are now widely used. One major issue for a digital copier is copy quality. In order to achieve the highest possible quality for every input document, multiple processing pipelines are included in a digital copier. Each processing pipeline is designed specifically for a certain class of document, which may be text, a picture, or a mixture of both, as illustrated by the three examples shown in Fig. 1. In this paper, we describe an algorithm that can effectively classify an input image into its corresponding category. Publisher’s Note: The first printing of this volume was completed prior to the SPIE Digital Library publication and this paper has since been replaced with a corrected/revised version.
With the wide use of mobile devices, display color reproduction has become extremely important. The purpose of this study is to investigate the optimal color temperature for mobile displays under varying illuminants. The effects of the color temperature and the illuminance of ambient lighting on user preferences were observed. For the visual examination, a total of 19 nuanced whites were examined under 20 illuminants. The 19 display stimuli with different color temperatures (2,500 K to 19,600 K) were presented on an iPad 3 (New iPad). The ambient illuminants ranged in color temperature from 2,500 K to 19,800 K and in illuminance from 0 lx to 3,000 lx. Supporting previous studies of color reproduction, a positive correlation was found between the color temperature of the illuminants and that of the optimal whites; however, the relationship was not linear. Based on assessments by 56 subjects, a regression equation was derived to predict the optimal color temperature under varying illuminants: Display Tcp = 5138.93 log(Illuminant Tcp) - 11956.59 (p < .001, R² = 0.94). Moreover, the influence of an illuminant was positively correlated with its illuminance level, confirming the findings of previous studies. The findings of this study can serve as a theoretical basis when designing a color strategy for mobile display devices.
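Taken at face value, the reported regression can be applied as below. The abstract does not state the logarithm base, so base 10 is assumed here, and in practice the result would be clamped to the color temperatures the display can actually render:

# Reported regression from the study (log base 10 is an assumption;
# the abstract does not state the base).
import math

def optimal_display_cct(illuminant_cct_k):
    """Predict the preferred display white point (K) for a given
    ambient illuminant correlated color temperature (K)."""
    return 5138.93 * math.log10(illuminant_cct_k) - 11956.59

Under this assumption, a 6,500 K ambient illuminant maps to a preferred display white point of roughly 7,600 K.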
Traditional methodologies for primary selection usually consider the optimization of parameters that characterize the global performance of the display system, such as the luminance of the white point, gamut volume, and power consumption. We propose a methodology for primary design that optimizes a figure of merit designed to favor gamuts for which the maximum luminance at each chromaticity is uniformly related to the corresponding maximum luminance over the set of optimal colors. We contrast the results obtained with the proposed methodology against those obtained with an alternative strategy based on the optimization of gamut volume, and we analyze differences in performance between these approaches for both three- and four-primary systems. Results indicate that the global versus local design choices lead to significantly different primary designs.
Digital still cameras generally use an optical low-pass filter (OLPF) to enhance image quality by removing the high spatial frequencies that cause aliasing. While eliminating the OLPF can save manufacturing costs, images captured without an OLPF contain moiré in the high-spatial-frequency regions of the image. Therefore, to reduce the presence of moiré in a captured image, this paper presents a moiré reduction method that does not rely on an OLPF. First, the spatial frequency response (SFR) of the camera is analyzed and moiré regions are detected using patterns related to the SFR of the camera. Within these detected regions, the moiré components, represented by the inflection point between the high-frequency and DC components in the frequency domain, are selected and then removed. Experimental results confirm that the proposed method can achieve moiré reduction while preserving detail information.
The purpose of this study is to achieve color consistency on smartphone displays under varying illuminants, focusing on the correlated color temperature of the white point. In two experiments, asymmetric color matching sessions were conducted in which subjects were asked to recall a target white point among differently nuanced white colors. In Experiment I (N = 58), 6 target white points varying from 5,900 K to 11,300 K and 15 nuanced white colors varying from 2,700 K to 19,200 K were produced. The recall test was carried out under 11 illuminants varying between 2,500 K and 19,300 K. Both the display white colors and the illuminants were spaced at intervals of approximately 1,000 K. The study observed a shift in the recalled target white point, with a tendency toward higher color temperatures. However, when the target white points were between 5,900 K and 8,000 K, the effect of the illuminants on color recall was marginal. In order to confirm this weak effect, Experiment II focused specifically on this color temperature range. Three target white points were chosen, corresponding to color temperatures of 6,600 K, 7,000 K, and 7,500 K. The visual assessment was conducted with a group of graphic design experts, and the 33 nuanced white colors used for comparison were spaced at intervals of approximately 200 K. The study revealed that the maximum shift in color temperature was 294 K, in agreement with the result of Experiment I.
On field sequential color (FSC) displays, a frame of an image is represented by sequentially displaying multiple fields. In the case of FSC LCDs, a color image is displayed by sequentially controlling the backlights and modulating the light transmission. Sequentially displaying multiple fields results in undesirable color artifacts near the boundaries of moving objects, often called color breakup (CBU). This paper presents a new method to measure perceived static CBU on natural color images. The proposed method can be utilized for performance evaluation or algorithm development for static CBU reduction.
High-end monitors and TVs based on LCD technology continue to increase their native display resolution to 4k by 2k and beyond. Consequently, uncompressed pixel amplitude processing becomes costly, not only when transmitting over cable or wireless communication channels but also when processing with array processor architectures. For motion video content, spatial preprocessing from YCbCr 4:4:4 to YCbCr 4:2:0 is widely accepted. However, due to the spatial low-pass filtering in the horizontal and vertical directions, the quality and readability of small text and graphics content are heavily compromised when the color contrast in the chrominance channels is high. On the other hand, straightforward YCbCr 4:4:4 compression based on mathematical error-coding schemes quite often lacks optimal adaptation to visually significant image content. We present a block-based memory compression architecture for text, graphics, and video that enables multidimensional error minimization with context-sensitive control of visually noticeable artifacts. By analyzing the image context locally, the number of operations per pixel can be significantly reduced, especially when implemented on array processor architectures. A comparative analysis against some competing solutions highlights the effectiveness of our approach, identifies its current limitations with regard to high-quality color rendering, and illustrates the remaining visual artifacts.
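The spatial preprocessing mentioned above (YCbCr 4:4:4 to 4:2:0) amounts to keeping full-resolution luma while low-pass filtering and decimating both chroma channels by a factor of two in each direction; a simple box-filter version is sketched below. This is the conventional scheme the paper contrasts with, not the proposed content-adaptive compression:

# YCbCr 4:4:4 -> 4:2:0 chroma subsampling with a 2x2 box filter.
import numpy as np

def subsample_420(y, cb, cr):
    """y, cb, cr: 2-D float arrays of equal, even dimensions."""
    def decimate(c):
        # Average each 2x2 block, then keep one chroma sample per block.
        return (c[0::2, 0::2] + c[1::2, 0::2] +
                c[0::2, 1::2] + c[1::2, 1::2]) / 4.0
    return y, decimate(cb), decimate(cr)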
An important component of camera calibration is deriving a mapping from a camera’s output RGB to a device-independent color space such as CIE XYZ or sRGB [6]. Commonly, the calibration process is performed by photographing a color chart in a scene under controlled lighting and finding a linear transformation M that maps the chart’s colors from linear camera RGB to XYZ. When the XYZ values corresponding to the color chart’s patches are measured under a reference illumination, it is often assumed that the illumination across the chart is uniform when it is photographed. This simplifying assumption, however, is often violated even in relatively controlled environments such as a light booth, and it can lead to inaccuracies in the calibration. The problem of color calibration under non-uniform lighting was investigated by Funt and Bastani [2,3]. Their method, however, uses a numerical optimizer, which can be complex to implement on some devices and has a relatively high computational cost. Here, we present an irradiance-independent camera color calibration scheme based on least-squares regression on the unit sphere that can be implemented easily, computed quickly, and performs comparably to the previously suggested technique.
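The general idea of an irradiance-independent fit can be sketched as follows: since a change of irradiance only scales a patch's linear camera RGB without changing its direction, normalizing both the RGB and the reference XYZ of each patch to unit length removes the unknown per-patch scale before an ordinary least-squares fit of M. The sketch reflects our reading of this idea; the paper's exact formulation may differ:

# Sketch of an irradiance-independent 3x3 calibration fit: normalize
# each RGB/XYZ pair to unit length to cancel the unknown per-patch
# irradiance scale, then solve for M by least squares.
# This is an assumed formulation, not necessarily the paper's.
import numpy as np

def fit_color_matrix(rgb, xyz):
    """rgb, xyz: (N, 3) arrays of linear camera RGB and reference XYZ."""
    rgb_n = rgb / np.linalg.norm(rgb, axis=1, keepdims=True)
    xyz_n = xyz / np.linalg.norm(xyz, axis=1, keepdims=True)
    # Solve rgb_n @ M.T ~= xyz_n in the least-squares sense.
    M_t, *_ = np.linalg.lstsq(rgb_n, xyz_n, rcond=None)
    return M_t.T   # 3x3 matrix mapping camera RGB to XYZ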
This paper proposes two novel approaches to video quality assessment (VQA). Both approaches attempt to develop video evaluation techniques capable of replacing human judgment when rating video quality in subjective experiments. The underlying study consists of selecting fundamental quality metrics based on Human Visual System (HVS) models and using artificial intelligence solutions as well as advanced statistical analysis. This combination enables suitable video quality ratings while taking multiple quality metrics as input. The first method uses a neural-network-based machine learning process; the second evaluates video quality using a non-linear regression model. The efficiency of the proposed methods is demonstrated by comparing their results with those of existing work on synthetic video artifacts. The results obtained by each method are compared with scores from a database of subjective experiments.
In this paper we propose a mathematical framework for multi-bit aperiodic clustered-dot halftoning based on the Direct Multi-bit Search (DMS) algorithm. A pixel validation map is provided to the DMS algorithm to guide the formation of homogeneous clusters. Beyond this map, the DMS algorithm operates without user-defined guidance, iteratively choosing the best drop absorptance level. An array of valid pixels is computed after each iteration, restricting the selection of pixels available to the DMS algorithm and improving the dot clustering. This process is repeated over the entire range of gray levels to create a visually pleasing multi-bit halftone screen. The resulting mask exhibits a smoother appearance and improved detail rendering compared to conventional clustered-dot halftoning. Much of the improvement originates from the improved sampling of the aperiodic hybrid screen designs.
In many halftoning applications there is a need to generate a dispersed pattern of dots that is pleasing to the eye. We propose a method based on Riesz energy minimization that produces patterns superior to those of other techniques such as Direct Binary Search and k-means. We illustrate the proposed technique and discuss implementation issues, including nonlinear programming techniques and memory-constrained implementations.
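The Riesz s-energy of a point set X = {x_1, ..., x_N} is E_s(X) = sum over i != j of 1 / |x_i - x_j|^s, and minimizing it spreads the points as evenly as possible. A plain capped-step gradient descent on the unit torus is sketched below; the nonlinear-programming and memory-constrained refinements discussed in the paper are not included:

# Riesz s-energy of a point set and a capped-step gradient descent on it.
# A sketch only; the paper uses more sophisticated approaches.
import numpy as np

def riesz_energy(x, s=2.0):
    d = x[:, None, :] - x[None, :, :]
    d -= np.round(d)                                 # unit-torus wrap
    r = np.sqrt(np.sum(d ** 2, axis=-1))
    np.fill_diagonal(r, np.inf)                      # exclude self-pairs
    return 0.5 * np.sum(r ** (-s))

def riesz_descent(n_points=256, s=2.0, steps=500, lr=1e-5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n_points, 2))
    max_step = 0.25 / np.sqrt(n_points)              # cap per-point displacement
    for _ in range(steps):
        d = x[:, None, :] - x[None, :, :]            # d[i, j] = x_i - x_j
        d -= np.round(d)                             # unit-torus wrap
        r2 = np.sum(d ** 2, axis=-1) + np.eye(n_points)   # 1 on the diagonal
        # Gradient of sum_j 1/r^s with respect to x_i (self-term is zero).
        grad = -s * np.sum(d / r2[..., None] ** ((s + 2) / 2), axis=1)
        step = -lr * grad                            # descent (mutual repulsion)
        norm = np.linalg.norm(step, axis=1, keepdims=True)
        step *= np.minimum(1.0, max_step / np.maximum(norm, 1e-12))
        x = (x + step) % 1.0
    return x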
The well-known Yule-Nielsen modified spectral Neugebauer model is one of the most accurate predictive models for the spectral reflectance of printed halftone colors. It expresses the spectral reflectance of a halftone, raised to the power 1/n, as a linear combination of the spectral reflectances of the fulltone colors (Neugebauer primaries), also raised to the power 1/n, where n is a tunable real number. The power 1/n transform, characteristic of the Yule-Nielsen model, empirically accounts for the nonlinear relationship between the spectral reflectances of halftones and fulltones caused by the internal propagation of light by scattering within the printing support, a phenomenon known as “optical dot gain” or the “Yule-Nielsen effect”. In this paper, we propose a graphical method that permits observing this nonlinear relationship in the case of single-ink halftones and experimentally checking the capacity of the Yule-Nielsen model to predict it accurately. For cases in which the Yule-Nielsen transform is not well adapted to the considered type of print, we propose alternative transforms that improve the prediction accuracy.
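In equation form, the model described above predicts R(λ) = ( Σ_i a_i · R_i(λ)^(1/n) )^n, where a_i are the fractional area coverages of the Neugebauer primaries, R_i(λ) their spectral reflectances, and n the Yule-Nielsen exponent; a single-ink halftone reduces to two primaries (ink and paper). The sketch below implements this standard formula:

# Yule-Nielsen modified spectral Neugebauer prediction:
# R(l) = ( sum_i a_i * R_i(l)^(1/n) )^n
import numpy as np

def yns_neugebauer(area_coverages, primary_reflectances, n):
    """area_coverages: (K,) fractional coverages summing to 1;
    primary_reflectances: (K, L) spectra of the Neugebauer primaries;
    n: Yule-Nielsen exponent. Returns the predicted (L,) spectrum."""
    a = np.asarray(area_coverages, dtype=float)[:, None]
    r = np.asarray(primary_reflectances, dtype=float) ** (1.0 / n)
    return (a * r).sum(axis=0) ** n

# Single-ink halftone at effective coverage a_eff:
# yns_neugebauer([a_eff, 1 - a_eff], [R_ink, R_paper], n)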
With digital printing systems, the achievable screen angles and frequencies are limited by the finite addressability of the marking engine. In order for such screens to generate dot clusters in which each cluster is identical, the elements of the periodicity matrix must be integer-valued when expressed in units of printer-addressable pixels. To achieve a better approximation to the screen sets used for commercial offset printing, irregular screens can be used; with an irregular screen, the elements of the periodicity matrix are rational numbers. In this paper, we describe a procedure for the design of high-quality irregular screens. We start with the design of the midtone halftone pattern. We then propose an algorithm that determines how to add dots from the midtones to the shadows and how to remove dots from the midtones to the highlights. We present experimental results illustrating the quality of the resulting halftones by comparing images halftoned with irregular screens designed using our approach and a template-based approach. Publisher’s Note: The first printing of this volume was completed prior to the SPIE Digital Library publication and this paper has since been replaced with a corrected/revised version.
In the process of electrophotographic (EP) printing, the deposition of toner at a printer-addressable pixel is greatly influenced by the neighboring pixels of the digital halftone. To account for these effects, printer models can either be embedded in the halftoning algorithm or used to predict the printed halftone image at the input to an algorithm that assesses print quality. Most recently [1], we developed a series of six new models to accurately account for local neighborhood effects and the influence of a 45 x 45 neighborhood of pixels on the central printer-addressable pixel. We refer to all of these models as black-box models, since they are based solely on measuring what is on the printed page and do not incorporate any information about the marking process itself. In this paper, we compare black-box models developed with three different capture devices: an Epson Expression 10000XL (Epson America, Inc., Long Beach, CA, USA) flatbed scanner operated at 2400 dpi with an active field of view of 309.88 mm x 436.88 mm; a QEA PIAS-II (QEA, Inc., Billerica, MA, USA) camera with a resolution of 7663.4 dpi and a field of view of 2.4 mm x 3.2 mm; and Dr. CID, a 1:1 magnification, 3.35-micron true-resolution, Dyson relay lens-based 3 Mpixel USB CMOS imaging device [2] with a resolution of 7946.8 dpi and a field of view of 4.91 mm x 6.55 mm, developed at Hewlett-Packard Laboratories, Bristol. Our target printer is an HP Indigo 5000 Digital Press (HP Indigo, Ness Ziona, Israel). We compare the accuracy of the black-box model predictions of print microstructure using models trained from images captured with these three devices.
This paper examines adding visually significant, human-recognizable data into QR codes without affecting their machine readability, by utilizing known methods in image processing. Each module of a given QR code is broken down into pixels, which are halftoned in such a way as to keep the QR code structure while revealing aspects of the secondary image to the human eye. The loss of information associated with this procedure is discussed, and entropy values are calculated for the examples given in the paper. Numerous examples of QR codes with embedded images are included.
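One common way to quantify the information content mentioned above is the Shannon entropy of the binary module/halftone pattern; a small helper is shown below (the paper's exact accounting of the information loss is not reproduced):

# Shannon entropy (bits per pixel) of a binary pattern, as one way to
# quantify the information mentioned above.
import numpy as np

def binary_entropy(pattern):
    p = np.count_nonzero(pattern) / pattern.size
    if p in (0.0, 1.0):
        return 0.0
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))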
In High-Dynamic-Range (HDR) imaging, optical veiling glare sets the limits of accurate scene information recorded by a camera. But, what happens at the beach? Here we have a Low-Dynamic-Range (LDR) scene with maximal glare. Can we calibrate a camera at the beach and not be burnt? We know that we need sunscreen and sunglasses, but what about our cameras? The effect of veiling glare is scene-dependent. When we compare RAW camera digits with spotmeter measurements we find significant differences. As well, these differences vary, depending on where we aim the camera. When we calibrate our camera at the beach we get data that is valid for only that part of that scene. Camera veiling glare is an issue in LDR scenes in uniform illumination with a shaded lens.
Everybody views and uses color from early childhood onwards. But this magnificent property of all objects around us turns out to be elusive when you try to specify it and communicate it to another person. Also, people often don’t know what effects color may have under different conditions. However, color is so important and omnipresent that people can hardly avoid relying on it – so they do, in particular on its predictability. Thus, there is a discrepancy between the seeming self-evidence of color and the difficulty of specifying it accurately for the prevailing circumstances. In order to analyze this situation, and possibly remedy it, a short historical perspective of the utilization and specification of color is given. The 'utilization' includes the emotional effects of color, which are important in, for instance, interior decorating but also play a role in literature and religion. 'Specification' begins with the early efforts by scientists, philosophers and artists to bring some order and understanding to what was observed with, and while using, color. Color has a number of basic functions: embellishment; attracting attention; coding; and bringing order to text by causing text parts presented in the same color to be judged as belonging together. People whose profession involves color choices for many others, such as designers and manufacturers of products, including electronic visual displays, should have a fairly thorough knowledge of colorimetry and color perception. Unfortunately, they often don’t, simply because for 'practitioners' whose work involves many different aspects, applying color being only one of them, the available tools for specifying and applying color turn out to be too difficult to use. Two consequences of an insufficient knowledge of the effects color may have are given here. The first of these consequences, concerning color blindness, relates to 8% of the population, but the second one, on reading colored text, bears on everyone. Practical guidance is given, especially on color and legibility. In any case, the available tools mentioned, such as chromaticity diagrams and color spaces, are mainly the responsibility of the CIE. It would therefore be a laudable initiative if the CIE would not only refine its present systems, but also devote some time and energy to the development of a simpler color specification and measurement system. In doing so, it would be worth applying ergonomics research principles, beginning with end-user involvement: investigating what the practitioners of color science really want and need.