KEYWORDS: RGB color model, Printing, Environmental monitoring, Color management, Color difference, Color reproduction, Image processing, Visualization, Time metrology, Data conversion
In current printing techniques, the Color Management System (CMS) uses the ICC profiles of the monitor and printer to perform color matching. Unfortunately, an ICC profile cannot capture all of a monitor's color reproduction characteristics, because these characteristics change when the user adjusts the color temperature, brightness, and contrast controls, and they also depend on the kind of backlighting and the age of the LCD monitor. As a result, there is usually an unwanted color difference between an image displayed on the user's monitor and its printed version. If an ICC profile matching the user's monitor characteristics could be produced by measurement, the CMS would be able to perform color matching correctly. However, this approach is difficult to apply in practice, because measuring equipment is generally unavailable and, even when it is, measurement is time-consuming and must be repeated whenever the monitor's color temperature, brightness, or contrast changes. In this paper, we propose a color matching technique that estimates the user's viewing environment through a simple visual test comparing an image shown on the monitor with its printed counterpart. The estimated monitor characteristics are stored in a new ICC profile and applied during the color conversion process. Consequently, the proposed method reduces the color difference between the image displayed on the user's monitor and its printed version.
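The visual-test idea can be illustrated with a minimal sketch. Assume a test in which the user picks the printed gray patch whose lightness matches a displayed mid-tone patch; from that single match a monitor gamma can be estimated and turned into a tone curve suitable for the TRC tag of a new ICC profile. The test protocol, function names, and single-gamma model are illustrative assumptions, not the paper's actual procedure:

```python
import numpy as np

def estimate_gamma(display_level, matched_print_level, max_level=255):
    """Estimate a monitor gamma from one visual-test match: the user
    picks the printed patch whose lightness matches a displayed
    mid-tone patch (hypothetical test protocol)."""
    d = display_level / max_level
    p = matched_print_level / max_level
    # Solve p = d ** gamma for gamma.
    return np.log(p) / np.log(d)

def build_tone_curve(gamma, size=256):
    """Per-channel 1-D tone curve that could be stored as a TRC tag
    in a new ICC profile."""
    x = np.linspace(0.0, 1.0, size)
    return (x ** gamma * 255.0).round().astype(np.uint8)

# The user matched a displayed level of 128 to a printed level of 100.
gamma = estimate_gamma(128, 100)
curve = build_tone_curve(gamma)
```

A real profile would hold one such curve per channel, estimated from several test patches rather than a single match.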
With the current trend of digital convergence in mobile phones, manufacturers are researching mobile beam-projectors to overcome the limitations of a small screen and to offer a better viewing experience for movies or satellite broadcasts. However, a mobile beam-projector may project an image onto arbitrary surfaces, such as a colored wall or paper, rather than onto the white screen typically used in an office environment. Thus, a color correction method for the projected image is proposed to achieve good image quality irrespective of the surface color. First, the luminance values of the original image, transformed into the YCbCr space, are adjusted to compensate for the spatially nonuniform luminance distribution of the arbitrary surface, based on the pixel values of a surface image captured by the mobile camera. Next, the chromaticity values for the surface image and the white-screen image are calculated as the ratio of each RGB channel to the sum of the three channels. Their chromaticity ratios are then multiplied into the converted original image through an inverse YCbCr transform to reduce the appearance modulation of the projected image caused by spatially varying reflectance on the surface. By projecting the corrected image onto a textured or single-color surface, the quality of the projected image approaches that of an image projected onto a white screen.
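A minimal numpy sketch of the two compensation steps follows, folding both the luminance gain and the chromaticity ratios directly into RGB per pixel. The paper operates through the YCbCr space; collapsing both corrections into RGB, and the function name, are simplifying assumptions:

```python
import numpy as np

def correct_for_surface(original, surface, white):
    """original, surface, white: float RGB arrays in [0, 1], same shape.
    'surface' and 'white' are camera captures of the projection surface
    and of a white screen (sketch of the two-step idea above)."""
    eps = 1e-6
    # Step 1: luminance compensation -- scale by the ratio of the
    # white-screen luminance to the surface luminance (Rec.601 weights).
    w = np.array([0.299, 0.587, 0.114])
    lum_gain = ((white @ w) / (surface @ w + eps))[..., None]
    # Step 2: chromaticity compensation -- ratio of each channel's
    # share of R+G+B on the white screen vs. on the surface.
    chrom_surf = surface / (surface.sum(-1, keepdims=True) + eps)
    chrom_white = white / (white.sum(-1, keepdims=True) + eps)
    gain = lum_gain * chrom_white / (chrom_surf + eps)
    return np.clip(original * gain, 0.0, 1.0)
```

On a surface identical to the white screen the correction is a no-op; on a reddish surface the red channel is projected more weakly so the reflected image looks neutral.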
This paper proposes a colorization method that uses wavelet packet sub-bands to embed color components. The proposed method first involves a color-to-gray process, in which an input RGB image is converted into Y, Cb, and Cr images, and a wavelet packet transform is applied to the Y image to divide it into 16 sub-bands. The Cb and Cr images are then embedded into the two sub-bands that carry the least information about the Y image. Once the inverse wavelet packet transform is carried out, a new gray image with texture is obtained, in which the color information appears as texture patterns that vary according to the Cb and Cr components. Second, a gray-to-color process is performed. The printed textured-gray image is scanned and divided into 16 sub-bands using a wavelet packet transform to extract the Cb and Cr components, and an inverse wavelet packet transform is used to reconstruct the Y image. Although some of the original information is lost in the color-to-gray process, the details of the reconstructed Y image are almost identical to those of the original Y image, because the sub-bands with minimum information are the ones used to embed the Cb and Cr components. The RGB image is then reconstructed by combining the Y image with the Cb and Cr images. In addition, to recover color saturation more accurately, gray patches are used to compensate for the characteristics of the printer and scanner. As a result, the proposed method improves both the boundary details and the color saturation in the recovered color images.
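The embed/extract cycle can be sketched with a hand-rolled one-level 2-D Haar transform. The paper uses a two-level wavelet packet transform giving 16 sub-bands; one level with 4 sub-bands, and the scaling factor `alpha`, are simplifying assumptions here, and the print/scan stage (with its gray-patch compensation) is omitted:

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,
            (a[:, 0::2] - a[:, 1::2]) / 2.0,
            (d[:, 0::2] + d[:, 1::2]) / 2.0,
            (d[:, 0::2] - d[:, 1::2]) / 2.0)

def ihaar2(ll, lh, hl, hh):
    """Inverse of haar2."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def embed(y, cb, cr, alpha=0.1):
    """Replace the two detail sub-bands (those carrying the least Y
    information in this toy setting) with scaled quarter-size Cb/Cr,
    producing a 'textured gray' image."""
    ll, lh, _, _ = haar2(y)
    return ihaar2(ll, lh, alpha * (cb - 0.5), alpha * (cr - 0.5))

def extract(gray, alpha=0.1):
    """Recover the chroma planes and an approximate Y image."""
    ll, lh, hl, hh = haar2(gray)
    y = ihaar2(ll, lh, np.zeros_like(hl), np.zeros_like(hh))
    return y, hl / alpha + 0.5, hh / alpha + 0.5
```

Without the lossy print/scan stage the embedded Cb and Cr planes round-trip exactly, which is the property the paper's sub-band choice aims to preserve as closely as possible after printing.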
Until now, many mobile display manufacturers have tried to improve the contrast ratio, viewing angle, and backlight luminance to enhance color fidelity and image quality. However, with multimedia convergence, various imaging devices have been miniaturized and integrated into mobile phones as independent modules, which creates a need for color consistency between the modules. In particular, with the popularity and rapid growth of mobile cameras, it is important for a mobile LCD to reproduce realistically and accurately the object colors of a moving picture captured by the mobile camera. Therefore, we developed a real-time color matching system between a mobile camera and a mobile LCD based on a 16-bit lookup table (LUT). By applying the proposed lookup table to the mobile display, moving pictures are reproduced realistically and accurately on the mobile LCD.
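How a 16-bit LUT might be addressed in real time can be sketched as follows, assuming the common RGB565 packing for 16-bit mobile pixels. The packing choice and the identity table are placeholders; a real system would fill the table from the measured camera and LCD characteristics:

```python
import numpy as np

def rgb888_to_rgb565(r, g, b):
    """Pack 8-bit RGB into a 16-bit (5-6-5) value, usable directly
    as the index into a 65536-entry matching LUT."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

# Placeholder identity LUT: each 16-bit camera color maps to itself.
# A real system would store a corrected 16-bit display color per entry,
# derived from camera and LCD colorimetric measurements.
lut = np.arange(65536, dtype=np.uint16)

def match_pixel(r, g, b):
    """One table lookup per pixel -- cheap enough for real-time video."""
    return int(lut[rgb888_to_rgb565(r, g, b)])
```

The appeal of the 16-bit design is that the whole mapping fits in 128 KB and every pixel costs a single indexed read, with no per-pixel matrix math.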
KEYWORDS: Printing, Reflectivity, CMYK color model, Color imaging, Color difference, RGB color model, Spectral models, Patents, Imaging systems, Graphic arts
This paper proposes a method of colorimetric characterization based on the color correlation between the distributions of colorant amounts in a CMYKGO printer. In colorimetric characterization with more than three colorants, many color patches with different combinations of colorant amounts can represent the same tri-stimulus value. Therefore, choosing the proper color patch corresponding to each tri-stimulus value is important in the characterization of a CMYKGO printer. Accordingly, the proposed method estimates the CIELAB value for many color patches, then selects certain patches while considering high fidelity and gamut extension. The selection is divided into two steps. First, color patches are selected based on their global correlation, i.e., their relation to seed patches on the gray axis, which serve as the reference for correlation. However, even though a selected color patch may have an overall distribution similar to the seed patch, if its correlation factor is smaller than the correlation factors of neighboring patches, the patch needs to be reselected. Therefore, in the second step, the color patch is reselected based on the local correlation with color patches that have a lower correlation factor with the seed patch. To reselect the color patch, the seed patch is replaced by the average distribution of the eight neighboring selected color patches, and a new color patch is selected according to the new correlation factor. Consequently, the selected color patches have distributions similar to those of their neighbors. The selected color patches are then measured for accuracy, and the relation between the digital values and the tri-stimulus values of the patches is stored in a lookup table. As a result of this characterization, the gamut is extended in the dark regions and the color difference is reduced compared with conventional characterization methods.
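The two-step selection can be sketched as follows, taking a Pearson-style correlation between six-element CMYKGO colorant vectors as one plausible reading of the paper's "correlation factor" (the exact definition, and the function names, are assumptions):

```python
import numpy as np

def correlation_factor(p, q):
    """Pearson-style correlation between two CMYKGO colorant vectors
    (one plausible choice of 'correlation factor')."""
    p = np.asarray(p, float) - np.mean(p)
    q = np.asarray(q, float) - np.mean(q)
    return float(p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12))

def select_patch(candidates, seed):
    """Global step: among candidate patches reproducing the same
    tri-stimulus value, pick the one most correlated with the seed."""
    return int(np.argmax([correlation_factor(c, seed) for c in candidates]))

def reselect_patch(candidates, neighbors):
    """Local step: the seed becomes the mean colorant distribution of
    the (up to eight) neighboring selected patches, and selection is
    repeated with the new correlation factor."""
    return select_patch(candidates, np.mean(neighbors, axis=0))
```

A candidate whose colorant amounts fall off in the same proportions as the seed scores near 1; a flat distribution scores near 0 and is passed over, which is the behavior the global step relies on.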
KEYWORDS: Color difference, Printing, RGB color model, Visualization, CMYK color model, Eye, Color imaging, Diffusion, Spectrophotometry, Image quality
This paper proposes an improved six-color separation method that reduces graininess in the middle-tone regions based on the standard deviation of lightness and chrominance in S-CIELAB space. Graininess is regarded as the visual perception of fluctuations in the lightness of light cyan and cyan, or light magenta and magenta. In conventional methods, granularity is extremely heuristic and inaccurate owing to the use of a visual examination score. Accordingly, this paper proposes an objective method for calculating granularity for six-color separation. First, the lightness, redness-greenness, and yellowness-blueness of S-CIELAB space are calculated, reflecting the spatial-color sensitivity of the human eye, and the sum of the three standard deviations is normalized. Finally, after storing the proposed granularity in a lookup table, the objective granularity is applied to six-color separation, thereby reducing graininess in the middle-tone regions.