This work explores how to efficiently incorporate both multi-scale features and an attention mechanism into blind image quality assessment (BIQA) and proposes an end-to-end multi-scale attention guided deep neural network (MSANet) for perceptual quality assessment. Our method is built on a hierarchical learning framework with two stages, coarse learning on single-scale features and quality refinement on multi-scale features, through which quality-aware features are effectively extracted and aggregated into quality prediction scores. The proposed MSANet is motivated by the observation that multi-scale features provide more flexible and robust representations for BIQA, while the attention mechanism benefits quality-aware feature aggregation. Performance comparisons with state-of-the-art approaches show that the proposed method has promising potential for blindly measuring the perceptual quality of distorted images.
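To make the idea of attention-guided multi-scale aggregation concrete, the sketch below shows one minimal way such a module could be wired up in PyTorch: per-scale descriptors are pooled from successive backbone stages, a learned attention weight is assigned to each scale, and the weighted combination is regressed to a scalar quality score. This is an illustrative assumption only; the class name `MultiScaleAttentionBIQA`, the stage widths, the softmax-over-scales attention, and the regression head are not taken from the paper's MSANet.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAttentionBIQA(nn.Module):
    """Illustrative sketch of attention-weighted multi-scale aggregation
    for blind IQA. Architecture details are assumptions, not MSANet."""

    def __init__(self, widths=(32, 64, 128)):
        super().__init__()
        # Simple convolutional stages standing in for a backbone;
        # each stage halves the spatial resolution (one "scale").
        stages, in_ch = [], 3
        for w in widths:
            stages.append(nn.Sequential(
                nn.Conv2d(in_ch, w, 3, stride=2, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
            ))
            in_ch = w
        self.stages = nn.ModuleList(stages)
        # Project each scale's pooled descriptor to a common dimension.
        self.proj = nn.ModuleList([nn.Linear(w, 128) for w in widths])
        # One attention logit per scale, so aggregation becomes a
        # convex combination of the per-scale descriptors.
        self.attn = nn.Linear(128, 1)
        self.head = nn.Linear(128, 1)  # scalar quality prediction

    def forward(self, x):
        descriptors = []
        for stage, proj in zip(self.stages, self.proj):
            x = stage(x)
            # Global average pooling -> per-scale descriptor.
            d = proj(F.adaptive_avg_pool2d(x, 1).flatten(1))
            descriptors.append(d)
        feats = torch.stack(descriptors, dim=1)            # (B, S, 128)
        weights = torch.softmax(self.attn(feats), dim=1)   # (B, S, 1)
        fused = (weights * feats).sum(dim=1)               # (B, 128)
        return self.head(fused).squeeze(-1)                # (B,)


if __name__ == "__main__":
    model = MultiScaleAttentionBIQA()
    scores = model(torch.randn(2, 3, 224, 224))
    print(scores.shape)  # torch.Size([2])
```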