KEYWORDS: Visualization, Principal component analysis, Diagnostics, Visual process modeling, Visual analytics, Alzheimer's disease, Machine learning, Data processing, Data modeling
Visual attention and its modeling have received increasing attention over the past decades and have been applied for several years in various fields such as the automotive industry, robotics, and diagnostic medicine. So far, research has focused mainly on generalizing the collected data, while the identification of unique features of an individual's visual attention remains an open research topic. The aim of this paper is to propose a methodology able to cluster people into groups based on the individualities in their visual attention patterns. Unlike previous research approaches focused on the classification problem, where the class of each subject must be known, we address the open problem of unsupervised machine learning based solely on the measured data about subjects' visual attention. Our methodology is based on a clustering method that utilizes individual feature vectors created from the measured visual attention data. The proposed feature vectors, which form the fingerprint of an individual's attention, are based on the directions of that individual's saccades. The proposed methodology is designed to work with a limited set of measured eye-tracking data without any additional information.
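The minimal Python sketch below illustrates one way such a saccade-direction fingerprint and unsupervised grouping could be built; the bin count, the k-means clusterer, and all variable names are illustrative assumptions, not the exact pipeline of the paper.

```python
# Sketch (assumed, not the authors' exact method): per-subject fingerprint from
# saccade directions, followed by unsupervised clustering of the subjects.
import numpy as np
from sklearn.cluster import KMeans

N_BINS = 8  # assumed number of direction bins (45-degree sectors)

def saccade_direction_histogram(fixations):
    """fixations: (n, 2) array of consecutive fixation centres in screen coordinates.
    Returns a normalized histogram of saccade directions as the subject's feature vector."""
    vectors = np.diff(fixations, axis=0)                # saccade displacement vectors
    angles = np.arctan2(vectors[:, 1], vectors[:, 0])   # direction of each saccade, radians
    hist, _ = np.histogram(angles, bins=N_BINS, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)                    # normalize to make subjects comparable

# Placeholder data: one array of fixation centres per participant
fixations_per_subject = [np.random.rand(100, 2) for _ in range(25)]
features = np.vstack([saccade_direction_histogram(f) for f in fixations_per_subject])

# Unsupervised grouping of subjects based on their attention fingerprints
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
print(labels)
```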
There are numerous cues that influence human visual attention. Some of these cues cannot be explored by conventional eye-tracking studies, which use pictorial data presented to observers on common displays. Depth perception occurs naturally in a real three-dimensional environment, and depth cues are therefore among them. However, eye-tracking studies in real environments and their evaluation are difficult to carry out with a relevant number of participants while maintaining laboratory conditions. We propose an experimental study methodology for exploring depth perception tendencies during a free-viewing task on a widescreen display in a laboratory. This setup is beyond the current hardware capabilities of static eye-trackers mounted on displays; therefore, eye-tracking glasses were used to measure the attention data. We carried out the proposed study on a sample of 25 participants and created a novel dataset suitable for further visual attention research. The depth perception tendencies on a widescreen display were evaluated from the acquired data, and the results were discussed in the context of previous similar studies. Our results revealed some differences in depth perception tendencies compared with previous studies using two-dimensional pictorial data, and resembled some depth perception tendencies observed in the real environment.
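As a rough illustration of how depth-perception tendencies might be evaluated from such data, the hedged sketch below samples an assumed per-stimulus depth map at fixation positions and compares the fixation-depth distribution with the depth distribution of the whole scene; the function name, inputs, and binning are illustrative, not the study's actual evaluation code.

```python
# Sketch (assumed evaluation, for illustration only): which depths attract fixations
# relative to how much of the scene lies at each depth.
import numpy as np

def fixation_depth_distribution(depth_map, fixations_px, n_bins=10):
    """depth_map: (H, W) array of relative depths of the displayed stimulus.
    fixations_px: (n, 2) array of fixation positions in pixel coordinates (x, y)."""
    xs = np.clip(fixations_px[:, 0].astype(int), 0, depth_map.shape[1] - 1)
    ys = np.clip(fixations_px[:, 1].astype(int), 0, depth_map.shape[0] - 1)
    fixated = depth_map[ys, xs]                          # depth under each fixation
    edges = np.linspace(depth_map.min(), depth_map.max(), n_bins + 1)
    fix_hist, _ = np.histogram(fixated, bins=edges, density=True)
    scene_hist, _ = np.histogram(depth_map.ravel(), bins=edges, density=True)
    return fix_hist, scene_hist                          # compare the two distributions

# Placeholder data: a synthetic depth map and random fixations on a 1920x1080 display
depth_map = np.linspace(0, 1, 1080 * 1920).reshape(1080, 1920)
fixations = np.column_stack([np.random.randint(0, 1920, 200),
                             np.random.randint(0, 1080, 200)]).astype(float)
fix_hist, scene_hist = fixation_depth_distribution(depth_map, fixations)
```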
Extensive research has been conducted in the field of visual attention modelling over the past years. However, egocentric visual attention in real environments has still not been thoroughly studied. We introduce a method proposal for conducting automated user studies of egocentric visual attention in a laboratory. The goal of our method is to study the distance of objects from the observer (their depth) and its influence on egocentric visual attention. User studies based on the proposed method were conducted on a sample of 37 participants, and our own egocentric dataset was created. The whole experimental and evaluation process was designed and realized using advanced methods of computer vision. The results of our research are ground-truth values of egocentric visual attention and their relation to the depth of the scene, approximated as a depth-weighting saliency function. The depth-weighting function was applied to state-of-the-art models and evaluated. Our enhanced models provided better results than the current depth-weighting saliency models.
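The sketch below shows the general idea of a depth-weighting saliency function, re-weighting a 2D saliency map by a function of scene depth and renormalizing; the Gaussian weighting and its parameters are assumptions for illustration and do not reproduce the function approximated in the paper.

```python
# Sketch (assumed weighting shape): enhance a 2D saliency map with a depth-dependent weight.
import numpy as np

def depth_weighted_saliency(saliency, depth, preferred_depth=1.5, sigma=1.0):
    """saliency: (H, W) map from any 2D saliency model.
    depth: (H, W) egocentric depth of the scene in metres.
    preferred_depth, sigma: illustrative parameters of an assumed Gaussian weighting."""
    weight = np.exp(-((depth - preferred_depth) ** 2) / (2 * sigma ** 2))  # assumed shape
    enhanced = saliency * weight
    return enhanced / (enhanced.max() + 1e-8)            # renormalize to [0, 1]

# Placeholder inputs
saliency = np.random.rand(480, 640)
depth = np.random.uniform(0.5, 5.0, size=(480, 640))
enhanced = depth_weighted_saliency(saliency, depth)
```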