KEYWORDS: Eye, Biometrics, Received signal strength, Optical proximity correction, Information fusion, Visual system, Detection and tracking algorithms, Control systems, Iris recognition, Behavioral biometrics
This paper presents a template aging study of eye movement biometrics, considering three distinct biometric techniques on multiple stimuli and eye tracking systems. Short-to-midterm aging effects are examined over two weeks, on a high-resolution eye tracking system, and seven months, on a low-resolution eye tracking system. We find that, in all cases, aging effects are evident as early as two weeks after initial template collection, with an average 28% (±19%) increase in equal error rates and 34% (±12%) reduction in rank-1 identification rates. At seven months, we observe an average 18% (±8%) increase in equal error rates and 44% (±20%) reduction in rank-1 identification rates. The comparative results at two weeks and seven months suggest that there is little difference in aging effects between the two intervals; however, whether the rate of decay increases more drastically in the long term remains to be seen.
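The aging effects above are reported in terms of equal error rate (EER) and rank-1 identification rate. As a point of reference, the following minimal sketch (with hypothetical score distributions, not the study's actual matcher or data) shows how an EER is typically estimated by sweeping a decision threshold over genuine and impostor match scores.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate the EER by sweeping a threshold over all observed scores
    and finding where false accept and false reject rates are closest."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)   # false accept rate
        frr = np.mean(genuine_scores < t)     # false reject rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

# Hypothetical score distributions, for illustration only
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)
impostor = rng.normal(0.5, 0.1, 5000)
print(f"EER ~ {equal_error_rate(genuine, impostor):.3f}")
```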
This work introduces and evaluates a novel eye movement-driven biometric approach that employs eye fixation density maps for person identification. The proposed feature offers a dynamic representation of the biometric identity, storing rich information regarding the behavioral and physical eye movement characteristics of individuals. The innate ability of fixation density maps to capture the spatial layout of the eye movements, in conjunction with their probabilistic nature, makes them a particularly suitable option as an eye movement biometric trait when free-viewing stimuli are presented. To demonstrate the effectiveness of the proposed approach, the method is evaluated on three different datasets containing a wide gamut of stimulus types, such as static images, video, and text segments. The obtained results indicate a minimum EER (equal error rate) of 18.3%, demonstrating the potential of fixation density maps as an enhancing biometric cue in identification scenarios involving dynamic visual environments.
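The abstract does not specify the map resolution, smoothing bandwidth, or comparison metric, so the sketch below is only illustrative: it builds a Gaussian-smoothed fixation density map from (x, y) fixation coordinates and compares two maps with a histogram-intersection similarity, one of several plausible choices.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_density_map(fixations, screen=(768, 1024), sigma=25.0):
    """Accumulate (x, y) fixations on a 2-D grid, smooth with a Gaussian,
    and normalize to a probability distribution over screen locations."""
    grid = np.zeros(screen)
    for x, y in fixations:
        row, col = int(round(y)), int(round(x))
        if 0 <= row < screen[0] and 0 <= col < screen[1]:
            grid[row, col] += 1
    density = gaussian_filter(grid, sigma)
    return density / (density.sum() + 1e-12)

def similarity(map_a, map_b):
    """Histogram-intersection similarity between two density maps."""
    return np.minimum(map_a, map_b).sum()

# Hypothetical enrollment and probe fixations, for illustration only
enroll = [(512, 300), (520, 310), (200, 600)]
probe  = [(515, 305), (210, 590), (700, 100)]
print(similarity(fixation_density_map(enroll), fixation_density_map(probe)))
```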
The widespread use of computers throughout modern society introduces the necessity for usable and counterfeit-resistant authentication methods to ensure secure access to personal resources such as bank accounts, e-mail, and social media. Current authentication methods require tedious memorization of lengthy pass phrases, are often prone to shoulder-surfing, and may be easily replicated (either by counterfeiting parts of the human body or by guessing an authentication token based on readily available information). This paper describes preliminary work toward a counterfeit-resistant usable eye movement-based (CUE) authentication method. CUE does not require any passwords (improving the memorability aspect of the authentication system), and aims to provide high resistance to spoofing and shoulder-surfing by employing the combined biometric capabilities of two behavioral biometric traits: 1) oculomotor plant characteristics (OPC), which represent the internal, non-visible, anatomical structure of the eye; 2) complex eye movement patterns (CEM), which represent the strategies employed by the brain to guide visual attention. Both OPC and CEM are extracted from the eye movement signal provided by an eye tracking system. Preliminary results indicate that the fusion of OPC and CEM traits is capable of providing a 30% reduction in authentication error when compared to the authentication accuracy of the individual traits.
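The abstract reports fusing the OPC and CEM traits without specifying the fusion rule. A weighted-sum, score-level fusion over min-max normalized match scores, sketched below, is one common way two such traits are combined; the weights and score ranges here are assumptions for illustration only.

```python
def min_max_normalize(score, lo, hi):
    """Map a raw match score into [0, 1] given the observed score range."""
    return (score - lo) / (hi - lo) if hi > lo else 0.0

def fuse_scores(opc_score, cem_score, w_opc=0.5, w_cem=0.5):
    """Weighted-sum score-level fusion of two normalized match scores."""
    return w_opc * opc_score + w_cem * cem_score

# Hypothetical raw scores and score ranges, for illustration only
opc = min_max_normalize(0.62, lo=0.1, hi=0.9)
cem = min_max_normalize(0.48, lo=0.0, hi=0.8)
fused = fuse_scores(opc, cem, w_opc=0.6, w_cem=0.4)
print(f"fused match score = {fused:.3f}")
```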
A delay compensation algorithm is presented for a gaze-contingent video compression system (GCS) with robust targeted gaze containment (TGC) performance. The TGC parameter allows varying compression levels of a gaze-contingent video stream by controlling its perceptual quality. The delay compensation model is based on a Kalman filter framework that models the human visual system with eye position and velocity data. The model predicts future eye position and constructs a high-quality coded region of interest (ROI) designed to contain a targeted number of gaze samples while reducing perceptual quality in the periphery of that region. Several model parameterization schemes were tested with 21 subjects using a delay range of 0.02 to 2 s and a TGC of 60 to 90%. The results indicate that the model was able to achieve the TGC levels with compression of 1.4 to 2.3 times for TGC=90% and 1.8 to 2.5 times for TGC=60%. The lowest compression values were recorded for high delays, while the highest compression values were reported for small delays.
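The abstract describes the predictor only at a high level; the sketch below shows one plausible realization: a constant-velocity Kalman filter over a single gaze coordinate whose state is extrapolated across the transmission delay to place the high-quality ROI. The sampling rate, noise parameters, and ROI handling are assumptions, not the paper's actual parameterization.

```python
import numpy as np

def make_model(dt, q=50.0, r=1.0):
    """Constant-velocity model: state = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # only position is observed
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])              # process noise
    R = np.array([[r]])                              # measurement noise
    return F, H, Q, R

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle given a new gaze sample z."""
    x = F @ x
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

def predict_ahead(x, F, steps):
    """Extrapolate the state across the delay (delay = steps * dt);
    the predicted position would center the high-quality ROI."""
    for _ in range(steps):
        x = F @ x
    return x[0]

# Example: 1 kHz tracker, 100 ms delay, hypothetical gaze samples
F, H, Q, R = make_model(dt=0.001)
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in [100.0, 100.5, 101.2, 102.0]:
    x, P = kalman_step(x, P, z, F, H, Q, R)
print("predicted gaze x after delay:", predict_ahead(x, F, steps=100))
```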
In this paper we propose an algorithm for predicting a person's perceptual attention focus (PAtF) through the use of a Kalman filter model of the human visual system. The concept of the PAtF allows a significant reduction of the bandwidth of a video stream, and of the computational burden in the case of 3D media creation and transmission. This is possible because the human visual system has limited perception capabilities: only about 2 degrees out of the total 180-degree field of view provide the highest quality of perception, so peripheral image quality can be decreased without the viewer noticing the reduction. Multimedia transmission through a network introduces a delay, which reduces the benefits of using a PAtF because the person's attention area can change drastically during the delay period, increasing the probability that the peripheral image quality reduction is detected. We have created a framework that uses a Kalman filter to predict future PAtFs in order to compensate for the delay/lag and to reduce the bandwidth/creation burden of any visual multimedia.
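To illustrate why PAtF prediction saves bandwidth, the sketch below computes a per-pixel quality weighting that keeps full quality within roughly the 2-degree foveal region around the predicted PAtF and lets quality fall off with eccentricity. The pixels-per-degree value and the hyperbolic fall-off are assumptions made for illustration, not the paper's actual encoding scheme.

```python
import numpy as np

def quality_map(width, height, gaze_x, gaze_y, ppd=30.0, fovea_deg=2.0):
    """Per-pixel quality weight: 1.0 inside the ~2 degree foveal region
    around the predicted PAtF, decreasing with eccentricity outside it.
    `ppd` (pixels per degree) is an assumed display/viewing parameter."""
    ys, xs = np.mgrid[0:height, 0:width]
    ecc_deg = np.hypot(xs - gaze_x, ys - gaze_y) / ppd   # eccentricity in degrees
    q = np.ones_like(ecc_deg)
    outside = ecc_deg > fovea_deg
    q[outside] = fovea_deg / ecc_deg[outside]            # hyperbolic fall-off
    return q   # could drive, e.g., per-macroblock quantization in an encoder

q = quality_map(640, 480, gaze_x=320, gaze_y=240)
print(q[240, 320], q[240, 0])   # foveal center vs. far periphery
```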
The possibility of perceptual compression using live eye tracking has been anticipated for some time by many researchers. Among the challenges of real-time eye-gaze-based perceptual video compression are how to handle the fast nature of eye movements with the relative complexity of video transcoding and also take into account the delay associated with transmission in the network. Such a delay requires additional consideration in perceptual encoding because it increases the size of the area that requires high-quality coding. We present a hybrid scheme, one of the first to our knowledge, that combines eye tracking with fast in-line scene analysis to drastically narrow the high acuity area without the loss of eye-gaze containment.