KEYWORDS: Data modeling, Colorimetry, Visualization, Visual process modeling, Spatial frequencies, Contrast sensitivity, Modulation, Calibration, Eye models, RGB color model
Inspired by the ModelFest and ColorFest data sets, a contrast sensitivity function (CSF) was measured for a wide range of adapting luminance levels. The measurements were motivated by the need to collect visual performance data for natural viewing of static images over a broad range of luminance levels, such as those produced by high dynamic range displays. Detection of sine gratings with a Gaussian envelope was measured for an achromatic axis (black to white), two chromatic axes (green to red and yellow-green to violet), and two mixed chromatic and achromatic axes (dark green to light pink, and dark yellow to light blue). The background luminance varied from 0.02 to 200 cd/m², and the spatial frequency of the gratings varied from 0.125 to 16 cycles per degree. More than four observers participated in the experiments, each independently determining the detection threshold for each stimulus using at least 20 trials of the QUEST method. Compared with popular CSF models, we observed a steeper drop in sensitivity at higher frequencies and significant differences in sensitivity in the luminance range between 0.02 and 2 cd/m². Our measurements of the chromatic CSF show a significant drop in sensitivity with luminance, but little change in the shape of the CSF. The drop in sensitivity at high frequencies is significantly weaker than reported in other studies and assumed in most chromatic CSF models.
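The stimuli described above, sine gratings modulated by a Gaussian envelope, are commonly called Gabor patches. A minimal sketch of generating such a patch is shown below; the parameter names and default values (patch size, cycles per image, envelope width) are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def gabor_patch(size=128, cycles_per_image=4.0, sigma_frac=0.2, contrast=0.5):
    """Sine grating with a Gaussian envelope (Gabor patch).

    size            -- patch width/height in pixels (illustrative)
    cycles_per_image -- spatial frequency in cycles per patch (illustrative)
    sigma_frac      -- Gaussian sigma as a fraction of the patch size
    contrast        -- peak modulation around the mid-grey background
    """
    half = size / 2.0
    # Pixel coordinate grids centred on the patch
    y, x = np.mgrid[-half:half, -half:half]
    grating = np.sin(2.0 * np.pi * cycles_per_image * x / size)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * (sigma_frac * size) ** 2))
    # Modulate around a mid-grey background of 0.5 (normalized luminance)
    return 0.5 + 0.5 * contrast * grating * envelope
```

In an actual experiment the normalized values would be mapped through the display calibration to reach the intended adapting luminance and contrast.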
Many visual difference predictors (VDPs) have used basic psychophysical data (such as ModelFest) to calibrate their algorithm parameters and to validate their performance. However, basic psychophysical data sets often do not contain a sufficient number and variety of stimuli to test the more complex components of a VDP. In this paper we calibrate the Visual Difference Predictor for High Dynamic Range images (HDR-VDP) using radiologists' experimental data for JPEG2000-compressed CT images, which contain complex structures, and then validate the HDR-VDP in predicting the presence of perceptible compression artifacts. 240 CT-scan images were encoded and decoded using JPEG2000 compression at four compression ratios (CRs). Five radiologists independently determined whether each image pair (original and compressed image) was indistinguishable or distinguishable. A threshold CR for each image, at which 50% of the radiologists would detect compression artifacts, was estimated by fitting a psychometric function. The CT images compressed at the threshold CRs were used to calibrate the HDR-VDP parameters and to validate its prediction accuracy. Our results showed that the HDR-VDP calibrated to the CT image data gave much better predictions than the HDR-VDP calibrated to the basic psychophysical data (ModelFest plus contrast-masking data for sine gratings).
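The threshold-CR estimation described above can be sketched as fitting a psychometric function to the proportion of radiologists detecting artifacts at each compression ratio, then reading off the 50% point. The sketch below assumes a logistic psychometric function and least-squares fitting via SciPy; the abstract does not specify which functional form or fitting procedure was used, so both are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(cr, cr50, slope):
    """Assumed psychometric function: proportion of observers
    detecting artifacts at compression ratio `cr`."""
    return 1.0 / (1.0 + np.exp(-slope * (cr - cr50)))

def threshold_cr(crs, p_detect):
    """Fit the logistic function to observed detection proportions and
    return the CR at which 50% of observers detect artifacts."""
    popt, _ = curve_fit(logistic, crs, p_detect,
                        p0=[np.mean(crs), 0.5])
    return popt[0]  # cr50 is the 50%-detection threshold
```

For example, with hypothetical detection proportions of 0.0, 0.2, 0.8, and 1.0 at CRs of 4, 8, 16, and 32, the fitted threshold falls between 8 and 16.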