A novel codebook-based framework for blind image quality assessment is developed. The code words are designed according to the image patterns of joint conditional histograms among neighboring divisive normalization transform coefficients in degraded images. By extracting high-dimensional perceptual features at different subjective score levels in the sample database, and by clustering those features to their centroids, the conditional-histogram-based codebook is constructed. The objective image quality score is calculated by comparing the distances between the extracted features and the code words. Experiments are performed on current public databases, and the results confirm the effectiveness and feasibility of the proposed approach.
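As an illustration of the scoring pipeline this abstract describes, here is a minimal Python sketch, assuming the conditional-histogram features have already been extracted and grouped by subjective score level; the helper names and the choice of k-means clustering are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch of codebook construction and scoring, assuming features
# (conditional-histogram statistics) are precomputed per subjective score level.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(features_by_level, k=8):
    """Cluster each score level's features into k centroids; the
    centroids serve as code words labeled with that level."""
    codebook = []
    for level, feats in features_by_level.items():
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(np.asarray(feats))
        codebook.append((level, km.cluster_centers_))
    return codebook

def predict_score(feature, codebook):
    """Score an image by the subjective level of its nearest code word."""
    best_level, best_dist = None, np.inf
    for level, centroids in codebook:
        d = np.linalg.norm(centroids - feature, axis=1).min()
        if d < best_dist:
            best_level, best_dist = level, d
    return best_level
```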
We present a general-purpose blind image quality assessment (IQA) method using the statistical independence hidden in the joint distributions of divisive normalization transform (DNT) representations of natural images. The DNT simulates the redundancy-reduction process of the human visual system and exhibits good statistical independence for natural, undistorted images; this independence changes as images suffer from distortion. Inspired by this, we investigate the changes in statistical independence between neighboring DNT outputs across space and scale for distorted images and propose an independence uncertainty index as a blind IQA (BIQA) feature to measure these changes. The extracted features are then fed into a regression model to predict image quality. The proposed BIQA metric is called statistical independence (STAIND). We evaluated STAIND on five public databases: LIVE, CSIQ, TID2013, IRCCyN/IVC Art IQA, and intentionally blurred background images. Performance is relatively high in both single- and cross-database experiments. Compared with state-of-the-art BIQA algorithms, as well as representative full-reference IQA metrics such as SSIM, STAIND shows fairly good performance in terms of quality prediction accuracy, stability, robustness, and computational cost.
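For intuition about the DNT step, a toy divisive normalization might look like the sketch below. Note this is an assumption-laden simplification: the actual DNT operates on multi-scale, multi-orientation subbands and derives the divisive signal from a Gaussian scale mixture model, whereas this version simply normalizes each coefficient by local RMS energy.

```python
# Toy divisive normalization: divide each coefficient by the local RMS
# energy of its neighborhood. A stand-in for the full subband-domain DNT.
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_normalization(band, size=3, c=0.01):
    local_energy = uniform_filter(band.astype(float) ** 2, size=size)  # mean of squares
    return band / np.sqrt(c + local_energy)  # c stabilizes near-zero regions
```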
State-of-the-art blind image quality assessment (IQA) methods generally extract perceptual features from training images and feed them into a support vector machine (SVM) to learn a regression model, which is then used to predict the quality scores of test images. However, these methods require complicated training and learning, and their evaluation results are sensitive to image content and learning strategy. In this paper, two novel blind IQA metrics that require neither training nor learning are proposed.
The new methods extract perceptual features, i.e., the shape consistency of conditional histograms, from the joint histograms of neighboring divisive normalization transform coefficients of distorted images, and then compare the length attribute of the extracted features with that of the reference and degraded images in the LIVE database. In the first method, a cluster center is found in the feature attribute space of the natural reference images, and the distance between the feature attribute of a distorted image and this cluster center is adopted as the quality label. The second method uses the feature attributes and subjective scores of all images in the LIVE database to construct a dictionary, and the final quality score is calculated by interpolating the subjective scores of nearby words in the dictionary.
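Both scoring rules have simple closed forms. The sketch below is a hedged reading of them, assuming the length attributes of the features (`attr`, `ref_attrs`, `dict_attrs`) and the LIVE subjective scores (`dict_scores`) are precomputed as NumPy arrays; using the mean as the cluster center and inverse-distance weighting for the interpolation are assumptions where the abstract leaves details open.

```python
import numpy as np

def method1_score(attr, ref_attrs):
    """Method 1: distance from a distorted image's feature attribute to the
    cluster center of the natural reference images (larger = worse)."""
    center = np.mean(ref_attrs, axis=0)  # cluster center in attribute space
    return float(np.linalg.norm(attr - center))

def method2_score(attr, dict_attrs, dict_scores, k=5, eps=1e-8):
    """Method 2: interpolate the subjective scores of the k nearest
    dictionary words, weighted by inverse distance."""
    dists = np.linalg.norm(dict_attrs - attr, axis=1)
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + eps)  # eps guards against exact matches
    return float(np.dot(weights, dict_scores[idx]) / weights.sum())
```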
Unlike traditional SVM-based blind IQA methods, the proposed metrics have explicit expressions that directly reflect the relationship between the perceptual features and image quality. Experimental results on publicly available databases such as LIVE, CSIQ, and TID2008 show the effectiveness of the proposed methods, and their performance is fairly acceptable.
We recently proposed a natural scene statistics based image quality assessment (IQA) metric named STAIND, which extracts nearly independent components from natural images, i.e., the divisive normalization transform (DNT) coefficients, and evaluates the perceptual quality of a distorted image by measuring the degree of dependency between neighboring DNT coefficients. To improve the performance of STAIND, this paper thoroughly analyzes its feature selection strategy.
The basic neighbor relationships in STAIND span scale, orientation, and space. By analyzing the joint histograms of the different neighborhoods and comparing the performance of diverse feature combination schemes on publicly available databases such as LIVE, CSIQ, and TID2008, we draw the following conclusions: 1) the spatial neighbor relationship contributes most to the model design, the scale neighborhood takes second place, and orientation neighbors may introduce negative effects; 2) in the spatial domain, second-order spatial neighbors are a beneficial supplement to first-order spatial neighbors; 3) the combined neighborhood across scales, space, and the introduced spatial parents is very efficient for blind IQA metric design.
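To make the neighbor definitions concrete, the following sketch gathers spatial and scale neighbor pairs of DNT coefficients and forms their joint histogram; the specific offsets, the nearest-neighbor upsampling used for the scale parent, and the bin count are illustrative assumptions rather than the paper's exact choices.

```python
# Illustrative extraction of neighbor pairs and their joint histogram.
import numpy as np

def spatial_pairs(band, dy=0, dx=1):
    """Pair each coefficient with its spatial neighbor at offset (dy, dx);
    (0, 1) is a first-order neighbor, (0, 2) a second-order one."""
    h, w = band.shape
    return band[:h - dy, :w - dx].ravel(), band[dy:, dx:].ravel()

def scale_parent_pairs(fine, coarse):
    """Pair each fine-scale coefficient with its coarse-scale parent,
    upsampled here by simple pixel repetition (an assumption)."""
    parent = np.repeat(np.repeat(coarse, 2, axis=0), 2, axis=1)
    parent = parent[:fine.shape[0], :fine.shape[1]]
    return fine.ravel(), parent.ravel()

def joint_histogram(x, y, bins=64):
    """Normalized joint histogram, from which conditional histograms
    (and dependency measures) can be read off."""
    hist, _, _ = np.histogram2d(x, y, bins=bins, density=True)
    return hist
```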