KEYWORDS: RGB color model, 3D displays, Visualization, 3D modeling, Image processing, Eye models, Visual process modeling, Feature extraction, Data modeling, 3D visualizations
The human visual attention mechanism enables humans to extract the most important cues from large amounts of information. However, most existing methods for simulating visual attention focus on 2D displays, and little is known about how humans allocate their visual attention on a 3D display. This paper first presents a saliency dataset of human eye-fixation data collected during quick glances at different 3D scenes, which captures the distribution of human visual attention. Based on an analysis of this dataset, an approach using a convolutional neural network is proposed to predict human visual attention on a 3D light field display. The network consists of three parts: two-way feature extraction, feature fusion, and prediction output. Compared with saliency prediction models for 2D display devices, the proposed model predicts the distribution of human visual attention under a 3D light field more accurately. This research promotes further investigation of 3D applications such as 3D device evaluation and 3D content production.
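The three-part structure named in the abstract (two-way feature extraction, feature fusion, prediction output) can be sketched at a very high level. The paper's actual layer configuration, inputs, and fusion scheme are not given here, so everything below is an illustrative assumption: the two "ways" are taken to be a color image and a depth map, the extractor is a toy average-pooling stack standing in for convolutional layers, and fusion is simple channel stacking followed by a channel mean as the prediction head.

```python
import numpy as np

def extract_features(img, n_downsamples=2):
    """Toy one-way feature extractor: repeated 2x2 average pooling
    (a stand-in for the paper's convolutional feature branch)."""
    feat = img
    for _ in range(n_downsamples):
        h, w = (feat.shape[0] // 2) * 2, (feat.shape[1] // 2) * 2
        feat = feat[:h, :w]
        feat = (feat[0::2, 0::2] + feat[1::2, 0::2] +
                feat[0::2, 1::2] + feat[1::2, 1::2]) / 4.0
    return feat

def predict_saliency(rgb, depth):
    """Sketch of the three-part pipeline: two-way feature extraction,
    feature fusion (channel stacking), and prediction output (channel
    mean, normalized to [0, 1] as a saliency map)."""
    f_rgb = extract_features(rgb)        # way 1: color branch (assumed)
    f_depth = extract_features(depth)    # way 2: depth branch (assumed)
    fused = np.stack([f_rgb, f_depth], axis=-1)   # feature fusion
    saliency = fused.mean(axis=-1)                # prediction output
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / (rng + 1e-8)

rgb = np.random.rand(64, 64)
depth = np.random.rand(64, 64)
sal = predict_saliency(rgb, depth)
print(sal.shape)  # (16, 16): downsampled saliency map
```

In a real implementation each branch would be a trained convolutional stack and the fusion/prediction stages learned layers; this sketch only mirrors the data flow the abstract describes.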