KEYWORDS: Reflectivity, RGB color model, Error analysis, Cameras, Digital imaging, Statistical analysis, Image compression, Sensors, Data modeling, Algorithm development
The problem of illumination estimation for color constancy and automatic white balancing of digital color imagery can be viewed as the separation of the image into illumination and reflectance components. We propose using nonnegative matrix factorization with sparseness constraints to separate these components. Since illumination and reflectance are combined multiplicatively, the first step is to move to the logarithm domain so that the components are additive. The image data is then organized as a matrix to be factored into nonnegative components. Sparseness constraints imposed on the resulting factors help distinguish illumination from reflectance. The proposed approach provides a pixel-wise estimate of the illumination chromaticity throughout the entire image. This approach and its variations can also be used to provide an estimate of the overall scene illumination chromaticity.
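As a rough illustration of this pipeline (log transform, nonnegative factorization, sparsity penalty), the sketch below uses a simple multiplicative-update NMF with an L1 term. The function names, the rank-2 factorization, and the 3 x N arrangement of the image data are assumptions for illustration and do not reproduce the paper's exact sparseness constraints.

```python
# A minimal sketch, assuming numpy, of log-domain NMF for illumination/reflectance
# separation. `nmf_sparse`, the L1 weight `sparsity`, and the 3 x N data layout are
# illustrative assumptions, not the paper's exact formulation.
import numpy as np

def nmf_sparse(V, rank=2, sparsity=0.1, iters=500, seed=0):
    """Factor V ~= W @ H with multiplicative updates and an L1 penalty on H."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(iters):
        # Lee-Seung multiplicative updates; the constant `sparsity` in the
        # denominator implements a simple L1 (sparseness) penalty on H.
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + 1e-9)
        W *= (V @ H.T) / (W @ (H @ H.T) + 1e-9)
    return W, H

def separate(image_rgb):
    """image_rgb: H x W x 3 array of linear RGB values in (0, 1]."""
    log_img = np.log(np.clip(image_rgb, 1e-6, None))  # multiplicative -> additive
    V = log_img.reshape(-1, 3).T.copy()               # 3 x (H*W) data matrix
    V -= V.min()                                      # shift into the nonnegative range
    W, H = nmf_sparse(V, rank=2)
    # One factor is read as (log) illumination, the other as (log) reflectance;
    # in this sketch the assignment would be decided by which factor is sparser.
    return W, H
```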
Color-based object indexing and matching is attractive because color is an efficient visual cue for characterizing an object. However, different light sources cause color shifts in object appearance, which inevitably degrade recognition performance. To circumvent this confounding influence, we present three effective illumination-independent descriptors that are largely insensitive to variations in lighting conditions, object geometric transformations, and image blur. To this end, we define two new two-dimensional color coordinate systems, the central color coordinate system and the edge-based color coordinate system, based on the diagonal-offset reflectance model. By introducing normalized moment invariants, this paper then provides two color-constant descriptors, one in each system. Each descriptor is a feature vector consisting of several normalized moment invariants, of different orders, of the distribution of color pixels. Furthermore, their combination characterizes not only color content but also color edges in an image, so it serves as a third descriptor. Experiments on real image databases with object recognition and image retrieval show that our descriptors are robust and perform very well under various circumstances.
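The following sketch illustrates the general idea of a moment-based color descriptor: project pixels into a two-dimensional color coordinate system and collect normalized moments of different orders into a feature vector. The simple (r, g) chromaticity coordinates and the particular set of orders used here are illustrative assumptions; they stand in for the paper's central and edge-based coordinate systems derived from the diagonal-offset reflectance model.

```python
# A minimal sketch, assuming numpy, of a moment-based color descriptor. The (r, g)
# chromaticity coordinates and the chosen moment orders are illustrative stand-ins
# for the paper's central and edge-based coordinate systems.
import numpy as np

def normalized_moment(x, y, p, q):
    """Scale-normalized central moment eta_{pq} of a 2-D point distribution."""
    xc, yc = x.mean(), y.mean()
    mu_pq = np.sum((x - xc) ** p * (y - yc) ** q)
    mu_00 = x.size
    return mu_pq / mu_00 ** ((p + q) / 2.0 + 1.0)

def color_moment_descriptor(image_rgb,
                            orders=((2, 0), (0, 2), (1, 1), (3, 0), (0, 3), (2, 1), (1, 2))):
    """Feature vector of normalized moments of the pixels' 2-D color distribution."""
    rgb = image_rgb.reshape(-1, 3).astype(np.float64) + 1e-6
    s = rgb.sum(axis=1)
    r, g = rgb[:, 0] / s, rgb[:, 1] / s  # 2-D chromaticity coordinates (assumption)
    return np.array([normalized_moment(r, g, p, q) for p, q in orders])
```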
Why do the human cones have the spectral sensitivities they do? We hypothesize that they may have evolved to their present form because their sensitivities are optimal in terms of their ability to recover the spectrum of incident light. As evidence in favor of this hypothesis, we compare the accuracy with which the incoming spectrum can be approximated by a three-dimensional linear model based on the cone responses and compare this to the optimal approximations defined by models based on principal components analysis, independent component analysis, non-negative matrix factorization and non-negative independent component analysis. We introduce a new method of reconstructing spectra from the cone responses and show that the cones are almost as good as these optimal methods in estimating the spectrum.
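To make the idea of a three-dimensional linear spectral model concrete, here is a hedged sketch that fits a PCA basis to training spectra and inverts a 3 x 3 system to recover a spectrum from three sensor responses. This is a generic basis-inversion approach, not the new reconstruction method introduced in the paper; `cone_sens` and `train_spectra` are assumed inputs sampled on a common wavelength grid.

```python
# A hedged sketch, assuming numpy: approximate spectra with a three-dimensional
# linear model. The basis comes from ordinary PCA on training spectra, and recovery
# inverts a 3 x 3 system; this is not the paper's own reconstruction method.
# `train_spectra` is (N, n_wavelengths); `cone_sens` is (3, n_wavelengths).
import numpy as np

def fit_basis(train_spectra, dim=3):
    """Return a PCA basis (n_wavelengths x dim) and the mean training spectrum."""
    mean = train_spectra.mean(axis=0)
    _, _, vt = np.linalg.svd(train_spectra - mean, full_matrices=False)
    return vt[:dim].T, mean

def reconstruct(cone_responses, cone_sens, basis, mean):
    """Recover a spectrum from cone responses c = S @ spectrum."""
    A = cone_sens @ basis                                       # 3 x 3 system matrix
    a = np.linalg.solve(A, cone_responses - cone_sens @ mean)   # basis coefficients
    return mean + basis @ a                                      # reconstructed spectrum
```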
KEYWORDS: RGB color model, Calibration, LCDs, Digital Light Processing, Projection systems, Display technology, CRTs, Data modeling, Neural networks, Data analysis
The technique of support vector regression (SVR) is applied to the color display calibration problem. Given a set of training data, SVR estimates a continuous-valued function encoding the fundamental interrelation between a given input and its corresponding output. This mapping can then be used to find an output value for an input value not in the training data set. Here, SVR is applied directly to the display's non-linearized RGB digital input values to predict output CIELAB values. Several linear methods exist for calibrating particular display technologies (e.g., GOG, Masking, and Wyble). An advantage of using SVR for color calibration is that the end user does not need to apply a different calibration model for each display technology. We show that the same model can be used to calibrate CRT, LCD, and DLP displays accurately. We also show that the accuracy of the model is comparable to that of the optimal linear transformation introduced by Funt et al.
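A minimal sketch of the SVR calibration idea, assuming scikit-learn: learn a mapping from raw RGB digital counts to measured CIELAB values, then predict CIELAB for unseen inputs. The training arrays, RBF kernel, and hyperparameters below are illustrative assumptions, not the paper's exact setup.

```python
# A minimal sketch, assuming scikit-learn, of SVR-based display characterization:
# map raw RGB digital counts to measured CIELAB values. `rgb_train` and `lab_train`
# are assumed measurement arrays; kernel and hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def fit_display_model(rgb_train, lab_train):
    """rgb_train: (N, 3) digital input values; lab_train: (N, 3) measured CIELAB."""
    # One SVR per CIELAB channel, with inputs standardized before the RBF kernel.
    model = MultiOutputRegressor(
        make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=0.5))
    )
    return model.fit(rgb_train, lab_train)

# Usage with hypothetical measurements:
# model = fit_display_model(rgb_train, lab_train)
# lab_pred = model.predict(np.array([[128, 64, 200]]))
```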
Conference Committee Involvement (6)
Digital Photography and Mobile Imaging XI
9 February 2015 | San Francisco, California, United States
Digital Photography X
3 February 2014 | San Francisco, California, United States
Digital Photography IX
4 February 2013 | Burlingame, California, United States
Digital Photography VIII
23 January 2012 | Burlingame, California, United States
Digital Photography VII
24 January 2011 | San Francisco Airport, California, United States
Digital Photography VI
18 January 2010 | San Jose, California, United States