This study considers face recognition using multiple imaging
modalities. Face recognition is performed using a PCA-based
algorithm on each of three individual modalities: normal 2D
intensity images, range images representing 3D shape, and infra-red
images representing the pattern of heat emission. The algorithm is
separately tuned for each modality. For each modality, the gallery
consists of one image of each of the same 127 persons, and the probe
set consists of 297 images of these subjects, acquired with a time
lapse of one or more weeks. In this experiment, we find a rank-one
recognition rate of 71% for infra-red, 91% for 2D, 92% for 3D.
We also consider the multi-modal combination of each pair of
modalities, and find a rank-one recognition rate of 97% for 2D plus
infra-red, 98% for 3D plus infra-red, and 99% for 3D plus 2D. The
combination of all three modalities yields a rank-one recognition
rate of 100%. We conclude that multi-modal face recognition
appears to offer great potential for improved accuracy over using a
single 2D image. Larger and more challenging experiments are needed
in order to explore this potential.
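A minimal sketch of the multi-modal combination step, assuming a simple score-level fusion (z-score normalization of each modality's PCA distances followed by summation; the abstract does not state the authors' exact fusion rule):

```python
import numpy as np

def fuse_and_rank(distance_matrices):
    """Score-level fusion of per-modality PCA distances.

    distance_matrices: list of (num_probes, num_gallery) arrays, one per
    modality (e.g. 2D, 3D, infra-red).  Each row holds the distances from
    one probe to every gallery subject.  Distances are z-score normalized
    per modality and summed; the rank-one match is the gallery index with
    the smallest fused distance.
    """
    fused = np.zeros_like(distance_matrices[0], dtype=float)
    for d in distance_matrices:
        fused += (d - d.mean()) / d.std()      # simple z-score normalization
    return fused.argmin(axis=1)                # rank-one gallery index per probe

# toy usage: 3 probes, 4 gallery subjects, two modalities
rng = np.random.default_rng(0)
d2d, d3d = rng.random((3, 4)), rng.random((3, 4))
print(fuse_and_rank([d2d, d3d]))
```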
Biometric verification has attracted attention recently because it is more secure than knowledge- or token-based verification techniques. Multi-modal biometric verification can provide even greater accuracy by combining several forms of biometrics. However, there are problems with the availability, usability and acceptability of the technique. In this paper, we take a new approach, proposing a multi-modal biometric system that enables users to select which biometrics they prefer to have matched at the time of verification. This system also reduces the number of inputs required by adopting a sequential test based on statistical methods. In addition, the accuracy of the system can be controlled according to the required security level. We demonstrate the effectiveness of the proposed system experimentally.
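The abstract does not spell out which sequential statistical test is used; Wald's sequential probability ratio test (SPRT) is one standard choice that minimizes the number of inputs and lets the thresholds be derived from the required security level. A hedged sketch:

```python
import math

def sprt_verify(scores, genuine_pdf, impostor_pdf, far=0.001, frr=0.01):
    """Wald's sequential probability ratio test over successive biometric
    match scores.  It stops as soon as enough evidence has accumulated, so
    the number of biometric inputs the user must present is reduced.

    genuine_pdf / impostor_pdf: callables giving the score densities for
    genuine users and impostors (assumed known from training data).
    far / frr: target false-accept / false-reject rates; they set the
    acceptance and rejection thresholds, i.e. the security level.
    """
    upper = math.log((1 - frr) / far)      # accept-as-genuine threshold
    lower = math.log(frr / (1 - far))      # reject-as-impostor threshold
    llr = 0.0
    for i, s in enumerate(scores, start=1):
        llr += math.log(genuine_pdf(s) / impostor_pdf(s))
        if llr >= upper:
            return "accept", i             # verified after i inputs
        if llr <= lower:
            return "reject", i
    return "undecided", len(scores)
```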
This paper investigates the performance improvement for palmprint authentication using multiple classifiers. The proposed methods on personal authentication using palmprints can be divided into three categories: appearance-, line-, and texture-based. A combination of these approaches can be used to achieve higher performance. We propose to simultaneously extract palmprint features from PCA, line detectors, and Gabor filters and combine their corresponding matching scores. This paper also investigates the comparative performance of simple combination rules and the hybrid fusion strategy to achieve performance improvement. Our experimental results on a database of 100 users demonstrate the usefulness of such an approach over those based on individual classifiers.
Fingerprint verification has been deployed in a variety of
security applications. Traditional minutiae detection based
verification algorithms do not utilize the rich discriminatory
texture structure of fingerprint images. Furthermore, minutiae
detection requires substantial improvement of image quality and is
thus error-prone. In this paper, we propose an algorithm for
fingerprint verification using the statistics of subbands from
wavelet analysis. One important feature for each frequency subband
is the distribution of the wavelet coefficients, which can be
modeled with a Generalized Gaussian Density (GGD) function. A
fingerprint verification algorithm that combines the GGD
parameters from different subbands is proposed to match two
fingerprints. The verification algorithm in this paper is tested
on a set of 1,200 fingerprint images. Experimental results
indicate that wavelet analysis provides useful features for the
task of fingerprint verification.
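A minimal sketch of the subband feature extraction, assuming the common moment-matching estimator for the GGD parameters and the PyWavelets package for the wavelet decomposition (both are assumptions, not necessarily the authors' implementation):

```python
import numpy as np
import pywt                      # PyWavelets, assumed available
from scipy.special import gamma

def ggd_params(coeffs):
    """Moment-matching estimate of Generalized Gaussian Density parameters
    (scale alpha, shape beta) for one wavelet subband."""
    c = np.asarray(coeffs, dtype=float).ravel()
    m1, m2 = np.mean(np.abs(c)), np.mean(c ** 2)
    ratio = m1 ** 2 / m2
    # invert r(beta) = Gamma(2/b)^2 / (Gamma(1/b) Gamma(3/b)) by grid search
    betas = np.linspace(0.1, 3.0, 2901)
    r = gamma(2 / betas) ** 2 / (gamma(1 / betas) * gamma(3 / betas))
    beta = betas[np.argmin(np.abs(r - ratio))]
    alpha = m1 * gamma(1 / beta) / gamma(2 / beta)
    return alpha, beta

def subband_features(image, wavelet="db4", levels=3):
    """Return (alpha, beta) pairs for every detail subband of a fingerprint image."""
    coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
    return [ggd_params(band) for level in coeffs[1:] for band in level]
```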
Despite advances in fingerprint identification techniques, matching incomplete or partial fingerprints still poses a difficult challenge. While the introduction of compact silicon chip-based sensors that capture only a part of the fingerprint area has made this problem important from a commercial perspective, there is also considerable interest in the topic for processing partial and latent fingerprints obtained at crime scenes. Attempts to match partial fingerprints using alignment techniques based on singular ridge structures fail when the partial print does not include such structures (e.g., core or delta). We present a multi-path fingerprint matching approach that utilizes localized secondary features derived using only the relative information of minutiae. Since the minutia-based fingerprint representation is an ANSI-NIST standard, our approach has the advantage of being directly applicable to already existing databases. We also analyze the vulnerability of partial fingerprint identification systems to brute force attacks. The described matching approach has been tested on the FVC2002 DB1 database. The experimental results show that our approach achieves an equal error rate of 1.25% and a total error rate of 1.8% (with FAR at 0.2% and FRR at 1.6%).
Solid-state fingerprint sensors are small and can be easily installed on mobile devices. However, the small contact area limits the number of collected minutiae, making fingerprint matching less reliable. Recently, template synthesis of fingerprints has been proposed to augment the available minutiae set during registration. However, this approach is not feasible when two fingerprints are severely distorted. In this paper, we propose a novel way of fingerprint template normalization for distortion removal. Instead of performing expensive processing of the fingerprint images, we suggest that the normalization can be applied to the extracted minutiae using the ridge structure gathered during direct gray-scale minutiae extraction.
Registration between two point sets is very important in image recognition. The Hough transform has been used to solve this problem in the past. In this paper, we use a statistical method to estimate the rotation and translation parameters between the two sets. We show its effectiveness in our experiments.
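For reference, a standard least-squares (Procrustes/Kabsch) estimate of the rotation and translation between two corresponding point sets is sketched below; the paper's own statistical estimator may differ, so this is only an illustration of the estimation problem:

```python
import numpy as np

def estimate_rigid_transform(P, Q):
    """Least-squares estimate of rotation R and translation t such that
    Q ~ R @ P + t, for two corresponding 2-D point sets of equal size
    (a standard Procrustes/Kabsch solution shown as a reference)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)          # centroids
    H = (P - cp).T @ (Q - cq)                        # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep a proper rotation
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# toy check: rotate a point set by 30 degrees and shift it
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
P = np.random.rand(20, 2)
Q = P @ R_true.T + np.array([2.0, -1.0])
R_est, t_est = estimate_rigid_transform(P, Q)
print(np.allclose(R_est, R_true), t_est)
```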
Global face recognition methods based on projection (such as principal component analysis) exhibit a well-known sensitivity to face pitch and yaw variations. This paper introduces and tests a new approach to the normalization of "off-frontal" face images and applies it in a principal component analysis (PCA) framework. Our proposed normalization employs two affine transformations of triangular face regions to register selected facial features. Experiments with the technique demonstrate performance improvements over a traditional normalization method when the proposed normalization step is employed. This experiment reports the results based on an image data set containing 665 images of 269 subjects photographed with noticeable face yaw, plus up to 1100 additional training images of subjects with frontal face pose.
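A minimal sketch of the kind of affine normalization described above: solving the 2x3 affine transform that maps a triangle of detected facial landmarks onto a frontal reference triangle (the landmark choices and coordinates below are hypothetical):

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """Solve the 2x3 affine transform A that maps three source landmark
    points (e.g. the eyes and a mouth corner of an off-frontal face region)
    onto the three destination points of a frontal reference template."""
    src = np.hstack([np.asarray(src_tri, float), np.ones((3, 1))])  # 3x3 [x y 1]
    dst = np.asarray(dst_tri, float)                                # 3x2
    # each output coordinate is a linear function of [x, y, 1]
    A = np.linalg.solve(src, dst).T                                 # 2x3
    return A

def apply_affine(A, points):
    pts = np.hstack([np.asarray(points, float), np.ones((len(points), 1))])
    return pts @ A.T

src = [(120, 90), (180, 92), (150, 160)]   # hypothetical eye/eye/mouth landmarks
dst = [(100, 80), (160, 80), (130, 150)]   # reference positions
A = affine_from_triangles(src, dst)
print(apply_affine(A, src))                # reproduces dst
```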
In this paper we present an integrated system for face detection, tracking and recognition in complex scenes. The face detector is based on colour skin models, with adaptation to cope with non-stationary colour distributions over time. The face model is tracked along the sequence with a particle filter by comparing its colour histogram with the colour histogram at the sample position by means of the Bhattacharyya coefficient. Face identification is based on statistical deformable models, such as Active Shape Models (ASM) and Active Appearance Models (AAM), for feature extraction, and on a multiclass Support Vector Machine as classifier. We have tested both models with a database of 100 faces, verifying the better performance of the AAM model compared with the ASM model.
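A short sketch of the colour-histogram comparison that drives the particle filter, using the Bhattacharyya coefficient and a commonly used Gaussian weighting of the Bhattacharyya distance (the weighting form and sigma are assumptions, not the authors' exact choices):

```python
import numpy as np

def bhattacharyya_coefficient(hist_p, hist_q):
    """Bhattacharyya coefficient between two normalized colour histograms;
    1.0 means identical distributions, 0.0 means no overlap."""
    p = np.asarray(hist_p, float); p = p / p.sum()
    q = np.asarray(hist_q, float); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

def particle_weight(hist_target, hist_candidate, sigma=0.1):
    """Particle-filter likelihood commonly derived from the Bhattacharyya
    distance d^2 = 1 - BC; a small sigma makes the weight peak sharply
    around candidates whose histogram matches the tracked face model."""
    bc = bhattacharyya_coefficient(hist_target, hist_candidate)
    d2 = 1.0 - bc
    return np.exp(-d2 / (2.0 * sigma ** 2))
```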
Face images are an attractive biometric for person verification and identification because they can be obtained non-intrusively, even without the knowledge of the subject in some cases. But when the subject is non-cooperative (in the sense of providing a predefined view for testing), a still-to-still face verification method may have difficulty matching face images from two different view angles. In this paper we propose a still-to-video face verification method based on the optimal tradeoff synthetic discriminant function (OTSDF) filter technology for the scenario where a video sequence of the subject is available for testing. The system consists of a face detection and tracking component, a frame-level face matching component, and an evidence accumulation component. We also investigate a part-based correlation method.
Human facial images provide demographic information, such as ethnicity and gender. Conversely, ethnicity and gender also play an important role in face-related applications. The image-based ethnicity identification problem is addressed in a machine learning framework. A Linear Discriminant Analysis (LDA) based scheme is presented for the two-class (Asian vs. non-Asian) ethnicity classification task. Multiscale analysis is applied to the input facial images. An ensemble framework, which integrates the LDA analysis of the input face images at different scales, is proposed to further improve the classification performance. The product rule is used as the combination strategy in the ensemble. Experimental results based on a face database containing 263 subjects (2,630 face images, with equal balance between the two classes) are promising, indicating that LDA and the proposed ensemble framework have sufficient discriminative power for the ethnicity classification problem. The normalized ethnicity classification scores can be helpful in facial identity recognition. Used as a "soft" biometric, face matching scores can be updated based on the output of the ethnicity classification module. In other words, the ethnicity classifier does not have to be perfect to be useful in practice.
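A minimal sketch of the product-rule combination over analysis scales, assuming each scale's LDA stage outputs class posteriors:

```python
import numpy as np

def product_rule(posteriors):
    """Combine per-scale class posteriors with the product rule.

    posteriors: array of shape (num_scales, num_classes); row k holds the
    LDA-based P(class | image at scale k) for one input face.  Returns the
    fused class index and the normalized fused posterior.
    """
    p = np.clip(np.asarray(posteriors, float), 1e-12, 1.0)  # avoid zeros
    fused = np.prod(p, axis=0)
    fused /= fused.sum()
    return int(np.argmax(fused)), fused

# e.g. three analysis scales, two classes (Asian vs. non-Asian)
scores = [[0.70, 0.30],
          [0.55, 0.45],
          [0.80, 0.20]]
label, fused = product_rule(scores)
print(label, fused)
```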
Face identification must address many practical challenges, including illumination variations (not seen during the testing phase), facial expressions, and pose variations. In most face recognition systems, the recognition process is performed after a face has been located and segmented in the input scene. This face detection and segmentation process, however, is prone to errors, which can lead to partial faces being segmented (sometimes also due to occlusion) for the recognition process. There are also cases where the segmented face includes parts of the scene background as well as the face, affecting the recognition performance. In this paper, we address how these issues can be dealt with efficiently using advanced correlation filter designs. We report an extensive set of results on the CMU pose, illumination and expressions (PIE) dataset where training filters are designed in two experiments: (1) the training gallery has 3 images from extreme illumination; (2) the training gallery has 3 images from near-frontal illumination. In the testing phase, however, we test both filters with the full range of illumination variations while simultaneously cropping the test images to various sizes. The results show that the advanced correlation filter designs perform very well even with partial face images under unseen illumination variations, including reduced-complexity correlation filters such as the Quad-Phase Minimum Average Correlation Energy (QP-MACE) filter that requires only 2 bits/frequency of storage.
This paper presents a kernel Fisher Linear Discriminant (FLD) method for face recognition. The kernel FLD method is extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semi-definite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semi-definite Gram matrix, either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. The feasibility of the kernel FLD method with fractional power polynomial models has been successfully tested on face recognition using a FERET data set that contains 600 frontal face images corresponding to 200 subjects. These images are acquired under variable illumination and facial expression. Experimental results show that the kernel FLD method with fractional power polynomial models achieves better face recognition performance than the Principal Component Analysis (PCA) method using various similarity measures, the FLD method, and the kernel FLD method with polynomial kernels.
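A short sketch of the fractional power polynomial model as a Gram matrix, assuming non-negative image features so the dot products are non-negative; as the abstract notes, such a matrix is not guaranteed to be positive semi-definite:

```python
import numpy as np

def frac_poly_gram(X, Y, d=0.8):
    """Gram matrix of the fractional power polynomial model k(x, y) = (x.y)^d
    with 0 < d < 1.  Assumes non-negative feature vectors (e.g. pixel
    intensities), so the dot products are non-negative.  For fractional d
    this is a 'model' rather than a true Mercer kernel: the resulting Gram
    matrix need not be positive semi-definite."""
    G = np.asarray(X, float) @ np.asarray(Y, float).T
    return np.power(np.maximum(G, 0.0), d)

def centre_gram(K):
    """Centre a square training Gram matrix in feature space, as required
    before kernel-FLD / kernel-PCA style projections."""
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    return K - one @ K - K @ one + one @ K @ one
```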
Confidence intervals are an important way to assess and estimate a
parameter. In the case of biometric identification devices,
several approaches to confidence intervals for an error rate have
been proposed. Here we evaluate six of these methods. To complete
this evaluation, we simulate data from a wide variety of parameter
values. These data are simulated via a correlated binary
distribution. We then determine how well these methods do at what
they say they do: capturing the parameter inside the confidence
interval. In addition, the average widths of the various
confidence intervals are recorded for each set of parameters. The
complete results of this simulation are presented graphically for
easy comparison. We conclude by making a recommendation
regarding which method performs best.
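A hedged sketch of this kind of coverage study, using a beta-binomial model to induce within-subject correlation and the Wilson score interval as one representative interval method (the paper's six methods and its simulation design are not reproduced here):

```python
import numpy as np

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (one representative
    interval type; the paper compares six methods)."""
    p_hat = successes / n
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

def coverage(p=0.05, rho=0.2, n_subjects=200, m_attempts=5, trials=2000, seed=0):
    """Estimate coverage probability and average width when the errors are
    correlated within subject.  Correlation is induced with a beta-binomial
    model: each subject draws an individual error rate from a Beta
    distribution with mean p and intra-class correlation rho, then makes
    m attempts."""
    rng = np.random.default_rng(seed)
    a = p * (1 - rho) / rho
    b = (1 - p) * (1 - rho) / rho
    hits, widths, n = 0, 0.0, n_subjects * m_attempts
    for _ in range(trials):
        subject_rates = rng.beta(a, b, size=n_subjects)
        errors = rng.binomial(m_attempts, subject_rates).sum()
        lo, hi = wilson_interval(errors, n)
        hits += (lo <= p <= hi)
        widths += hi - lo
    return hits / trials, widths / trials

print(coverage())   # (estimated coverage, average interval width)
```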
In this paper we address the issue of producing cancelable biometric templates, a necessary feature in the deployment of any biometric authentication system. We propose a novel scheme that encrypts the training images used to synthesize the single (minimum average correlation energy) filter for biometric authentication. We show theoretically that convolving the training images with any random convolution kernel prior to building the filter does not change the resulting correlation output peak-to-sidelobe ratios, thus preserving the authentication performance. However, different templates can be obtained from the same biometric by varying the convolution kernels, thus enabling the cancelability of the templates. We evaluate the proposed method using the illumination subset of the CMU pose, illumination, expressions (PIE) face dataset and show that we are still able to achieve 100% face verification using the proposed encryption scheme. Our proposed method is interesting from a pattern recognition theory point of view, as we are able to 'encrypt' the data and perform recognition in the encrypted domain as accurately as in the unencrypted case, for a variety of encryption kernels; we show analytically that the recognition performance remains invariant to the proposed encryption scheme, while retaining the desired shift-invariance property of correlation filters.
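A numerical sketch of the idea, assuming a basic MACE filter built in the frequency domain and a random circular-convolution kernel applied to all images; the peak-to-sidelobe ratios of the plain and 'encrypted' correlations come out essentially the same. This is an illustration only, not the authors' implementation:

```python
import numpy as np

def mace_filter(train_imgs):
    """Minimum Average Correlation Energy (MACE) filter in the frequency
    domain: H = D^-1 X (X^+ D^-1 X)^-1 u, with X the matrix of vectorized
    2-D FFTs of the training images, D the diagonal of their average power
    spectrum and u a vector of ones (unit correlation peaks)."""
    F = np.stack([np.fft.fft2(img) for img in train_imgs])     # N x H x W
    n, h, w = F.shape
    X = F.reshape(n, -1).T                                     # d x N
    Dinv = 1.0 / (np.mean(np.abs(X) ** 2, axis=1) + 1e-12)     # d
    A = X.conj().T @ (Dinv[:, None] * X)                       # N x N
    u = np.ones(n, dtype=complex)
    Hf = (Dinv[:, None] * X) @ np.linalg.solve(A, u)
    return Hf.reshape(h, w)

def correlate(Hf, img):
    """Correlation plane of a probe image with the filter."""
    return np.fft.ifft2(np.fft.fft2(img) * np.conj(Hf))

def psr(corr_plane, exclude=5):
    """Peak-to-sidelobe ratio of a correlation output."""
    c = np.abs(corr_plane)
    r, s = np.unravel_index(np.argmax(c), c.shape)
    mask = np.ones_like(c, dtype=bool)
    mask[max(0, r - exclude):r + exclude + 1, max(0, s - exclude):s + exclude + 1] = False
    side = c[mask]
    return (c[r, s] - side.mean()) / side.std()

# random-kernel 'encryption': convolve every image with the same kernel
rng = np.random.default_rng(1)
train = [rng.random((64, 64)) for _ in range(3)]
probe = train[0] + 0.05 * rng.random((64, 64))
Kf = np.fft.fft2(rng.random((64, 64)))                          # random convolution kernel
enc = lambda im: np.real(np.fft.ifft2(np.fft.fft2(im) * Kf))    # circular convolution

H_plain = mace_filter(train)
H_enc = mace_filter([enc(im) for im in train])
# the two PSRs are nearly identical, illustrating the invariance claim
print(psr(correlate(H_plain, probe)), psr(correlate(H_enc, enc(probe))))
```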
Biometrics is a powerful technology for identifying humans both locally and at a distance. In order to perform identification or verification, biometric systems capture an image of some biometric of a user or subject. The image is then converted mathematically to a representation of the person called a template. Since every human in the world is different, each human will have different biometric images (different fingerprints, faces, etc.). This is what makes biometrics useful for identification. However, unlike a credit card number or a password, which can be given to a person and later revoked if it is compromised, a biometric stays with the person for life. The problem then is to develop biometric templates which can be easily revoked and reissued, and which are also unique to the user and can be easily used for identification and verification. In this paper we develop and present a method to generate a set of templates which are fully unique to the individual and also revocable. By using basis-set compression algorithms in an n-dimensional orthogonal space, we can represent a given biometric image in an infinite number of equally valid and unique ways. The verification and biometric matching system would be presented with a given template and a revocation code. The code then represents where in the sequence of n-dimensional vectors to start the recognition.
Unique biometric identifiers offer a very convenient way for human identification and authentication. In contrast to passwords, they have the advantage that they cannot be forgotten or lost.
In order to set-up a biometric identification/authentication system, reference data have to be stored in a central database. As biometric identifiers are unique for a human being, the derived templates comprise unique, sensitive and therefore private information about
a person. This is why many people are reluctant to accept a system based on biometric identification. Consequently, the stored templates have to be handled with care and protected against misuse [1, 2, 3, 4, 5, 6]. It is clear that techniques from cryptography can be
used to achieve privacy. However, as biometric data are noisy and cryptographic functions are by construction very sensitive to small changes in their input, one cannot apply those cryptographic techniques straightforwardly. In this paper we show the feasibility of the techniques developed in [5], [6] by applying them to experimental biometric data. As the biometric identifier we have chosen the shape of the inner ear canal, which is obtained by measuring the headphone-to-ear-canal Transfer Functions (HpTFs), which are known to be person dependent [7].
The increasing demand on enhanced security has led to an unprecedented interest in automated personal identification based on biometrics. Among the various biometric identification methods, iris recognition is widely regarded as the most reliable and is one of the most active research topics in biometrics. Significant progress has been made since the concept of automated iris recognition was first proposed in 1987, not only in research and algorithm development but also in commercial exploitation and practical applications. This paper provides an overview on recent progress in iris recognition and discusses some of the remaining challenges and possible future work in this exciting field.
Most state of the art video-based gait recognition algorithms start from binary silhouettes. These silhouettes, defined as foreground regions, are usually detected by background subtraction methods, which results in holes or missed parts due to similarity of foreground and background color, and boundary errors due to video compression artifacts. Errors in low-level representation make it hard to understand the effect of certain conditions, such as surface and time, on gait recognition. In this paper, we present a part-level, manual silhouette database consisting of 71 subjects, over one gait cycle, with differences in surface, shoe-type, carrying condition, and time. We have a total of about 11,000 manual silhouette frames. The purpose of this manual silhouette database is twofold. First, this is a resource that we make available at http://www.GaitChallenge.org for use by the gait community to test and design better silhouette detection algorithms. These silhouettes can also be used to learn gait dynamics. Second, using the baseline gait recognition algorithm, which was specified along with the HumanID Gait Challenge problem, we show that performance from manual silhouettes is similar and only sometimes better than that from automated silhouettes detected by statistical background subtraction. Low performances when comparing sequences with differences in walking surfaces and time-variation are not fully explained by silhouette quality. We also study the recognition power in each body part and show that recognition based on just the legs is equal to that from the whole silhouette. There is also significant recognition power in the head and torso shape.
The dramatic rise in identity theft, the ever-pressing need to provide convenience in checkout services to attract and retain loyal customers, and the growing use of multi-function signature capture devices in the retail sector provide favorable conditions for the deployment of dynamic signature verification (DSV) in retail settings. We report on the development of a DSV system to meet the needs of the retail sector. We currently have a database of approximately 10,000 signatures collected from 600 subjects and forgers. Previous work at IBM on DSV has been merged and extended to achieve robust performance on pen position data available from commercial point-of-sale hardware, achieving equal error rates on skilled forgeries and authentic signatures of 1.5% to 4%.
In this paper we present work on off-line signature verification using Hidden Markov Models (HMM). HMMs are a well-known technique used with other biometric modalities, for instance in speaker recognition and dynamic or on-line signature verification. Our goal here is to extend the Left-to-Right (LR)-HMM to the field of static or off-line signature processing using results provided by image connectivity analysis. The chain encoding of perimeter points for each blob obtained by this analysis is an ordered set of points in space, running clockwise around the perimeter of the blob. We discuss two different ways of generating the models depending on how the blobs obtained from the connectivity analysis are ordered. In the first proposed method, blobs are ordered according to their perimeter length. In the second proposal, blobs are ordered in their natural reading order, i.e., from top to bottom and left to right. Finally, two LR-HMM models are trained using the parameters obtained by the mentioned techniques. Verification results of the two techniques are compared and some improvements are proposed.
In today's security-conscious society, biometric systems show an increasing tendency to replace the classic but less effective keys and passwords. Hand geometry readers are popular biometric devices used for access control and time-and-attendance applications. One of their weaknesses is vulnerability to spoofing using fake hands (latex, Play-Doh or dead hands).
The objective of this paper is to design a feature to be added to the hand geometry scanner in order to detect vitality in the hand, reducing the possibilities for spoofing.
This paper demonstrates how the hand reader was successfully spoofed and shows the implementation of the vitality detection feature through an inexpensive but efficient electronic design.
The method used for detection is photo-plethysmography. The reflectance sensor built is of original design. After amplifying, filtering and processing the sensor's signal, a message concerning the liveness of the hand and the pulse rate is shown on an LCD display.
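A minimal sketch of extracting a pulse rate from a photo-plethysmography signal by locating the dominant spectral peak in the physiological band (the sampling rate, band limits and synthetic signal below are assumptions):

```python
import numpy as np

def pulse_rate_bpm(ppg, fs, lo=0.7, hi=3.0):
    """Estimate pulse rate (beats per minute) from a photo-plethysmography
    signal by locating the dominant spectral peak in the physiological band
    (0.7-3 Hz, i.e. roughly 42-180 bpm).  A flat spectrum in that band would
    suggest the 'hand' on the reader is not live."""
    x = np.asarray(ppg, float)
    x = x - x.mean()                              # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(x))
    band = (freqs >= lo) & (freqs <= hi)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_freq

# synthetic 75 bpm pulse sampled at 100 Hz with noise
fs = 100
t = np.arange(0, 10, 1.0 / fs)
ppg = np.sin(2 * np.pi * 1.25 * t) + 0.3 * np.random.randn(t.size)
print(pulse_rate_bpm(ppg, fs))                    # approximately 75
```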
Previous work in our laboratory and others have demonstrated that
spoof fingers made of a variety of materials including silicon,
Play-Doh, clay, and gelatin (gummy finger) can be scanned and
verified against a live enrolled finger. Liveness detection, i.e.,
determining whether the presented biometric comes from a live
source, has been suggested as a means to circumvent attacks
using spoof fingers. We developed a new liveness method based on
perspiration changes in the fingerprint image. Recent results
showed approximately 90% classification rate using different
classification methods for various technologies including optical,
electro-optical, and capacitive DC, a shorter time window and a
diverse dataset. This paper focuses on improvement of the live
classification rate by using a weight decay method during the
training phase in order to improve the generalization and reduce
the variance of the neural network based classifier. The dataset
included fingerprint images from 33 live subjects, 33 spoofs
created with dental impression material and Play-Doh, and fourteen
cadaver fingers. 100% live classification was achieved with 81.8
to 100% spoof classification, depending on the device technology.
The weight-decay method improves upon past reports by increasing
the live and spoof classification rate.
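A short sketch of weight decay during training, shown here on a simple logistic 'live vs. spoof' classifier rather than the paper's neural network; the penalty term shrinks the weights at every update, which is the mechanism that reduces variance and improves generalization:

```python
import numpy as np

def train_logistic(X, y, epochs=500, lr=0.1, weight_decay=1e-3):
    """Gradient descent for a logistic live-vs-spoof classifier with a
    weight-decay penalty: each step adds weight_decay * w to the gradient,
    which continually shrinks the weights toward zero."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))              # predicted P(live)
        grad_w = X.T @ (p - y) / n + weight_decay * w
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```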
In this work, a method to provide fingerprint vitality
authentication, in order to reduce the vulnerability of fingerprint
identification systems to spoofing, is introduced. The method aims
at detecting 'liveness' in fingerprint scanners by using the
physiological phenomenon of perspiration. A wavelet based approach
is used which concentrates on the changing coefficients using the
zoom-in property of the wavelets. Multiresolution analysis and
wavelet packet analysis are used to extract information from low
frequency and high frequency content of the images respectively.
A Daubechies wavelet is designed and implemented to perform the
wavelet analysis. A threshold is applied to the first difference
of the information in all the sub-bands. The energy content of the
changing coefficients is used as a quantified measure to perform
the desired classification, as they reflect a perspiration
pattern. A data set of approximately 30 live, 30 spoof, and 14
cadaver fingerprint images was divided, with the first half used as
training data and the other half as testing data. The
proposed algorithm was applied to the training data set and was
able to completely classify 'live' fingers from 'not live'
fingers, thus providing a method for enhanced security and
improved spoof protection.
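A hedged sketch of the perspiration measure, assuming PyWavelets for the wavelet packet decomposition and an illustrative threshold; the wavelet, level and threshold values are not the paper's:

```python
import numpy as np
import pywt                       # PyWavelets, assumed available

def perspiration_energy(img_t0, img_t1, wavelet="db2", level=2, thresh=5.0):
    """Quantify the perspiration-driven change between two captures of the
    same finger.  Both images are decomposed with a 2-D wavelet packet, the
    first difference of corresponding sub-band coefficients is thresholded,
    and the energy of the coefficients that actually changed is accumulated.
    A live finger sweats over time, so its energy measure tends to be much
    larger than that of a spoof or cadaver finger."""
    wp0 = pywt.WaveletPacket2D(np.asarray(img_t0, float), wavelet, maxlevel=level)
    wp1 = pywt.WaveletPacket2D(np.asarray(img_t1, float), wavelet, maxlevel=level)
    energy = 0.0
    for node0, node1 in zip(wp0.get_level(level), wp1.get_level(level)):
        diff = node1.data - node0.data            # first difference per sub-band
        changed = diff[np.abs(diff) > thresh]     # keep only changing coefficients
        energy += float(np.sum(changed ** 2))
    return energy
```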
This paper describes a new biometric technology based on the optical properties of skin. The new technology can perform both identity verification and sample authenticity based on the optical properties of human skin. When multiple wavelengths of light are used to illuminate skin, the resulting spectrum of the diffusely reflected light represents a complex interaction between the structural and chemical properties of the skin tissue. Research has shown that these spectral characteristics are distinct traits of human skin as compared to other materials. Furthermore, there are also distinct spectral differences from person to person. Personnel at Lumidigm have developed a small and rugged spectral sensor using solid-state optical components operating in the visible and very near infrared spectral region (400-940nm) that accurately measures diffusely reflected skin spectra. The sensors are used for both biometric determinations of identity as well as the determination of sample authenticity. This paper will discuss both applications of the technology with emphasis on the use of optical spectra to assure sample authenticity.
Biometrics is a rapidly developing technology that identifies a person based on his or her physiological or behavioral characteristics. To ensure the correctness of authentication, the biometric system must be able to detect and reject the use of a copy of a biometric instead of the live biometric. This function is usually termed "liveness detection". This paper describes a new method for live face detection. Using the structure and movement information of a live face, an effective live face detection algorithm is presented. Compared to existing approaches, which concentrate on the measurement of 3D depth information, this method is based on the analysis of Fourier spectra of a single face image or face image sequences. Experimental results show that the proposed method has an encouraging performance.
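A minimal sketch of one Fourier-domain cue in the spirit of the paper: the fraction of spectral energy outside a low-frequency disc (the radius and the decision use of this ratio are assumptions):

```python
import numpy as np

def high_frequency_ratio(face_img, radius_frac=0.25):
    """Ratio of spectral energy outside a low-frequency disc to the total
    energy of the 2-D Fourier spectrum.  A photograph of a photograph tends
    to be blurrier and to lose fine structure, which shows up as a smaller
    high-frequency ratio than a live face captured directly."""
    f = np.fft.fftshift(np.fft.fft2(np.asarray(face_img, float)))
    power = np.abs(f) ** 2
    h, w = power.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= (radius_frac * min(h, w)) ** 2
    total = power.sum()
    return float(power[~low_mask].sum() / total) if total > 0 else 0.0
```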
This paper proposes an automatic face recognition method based on a face representation with a 3D mesh, which precisely reflects the geometric features of the specific subject. The mesh model is generated by using a nonlinear subdivision scheme and fitting to the 3D point cloud, and accurately describes the depth information of human faces. An effective method for matching two mesh models is developed to decide whether they come from the same person. We test our proposed algorithm on the 3D_RMA database, and experimental results and comparisons with others' work show the effectiveness and competitive performance of the proposed method.
Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes).
Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.
In this paper, we describe and evaluate an approach that uses implicit
models of facial features to cope with the problem of recognizing
faces under varying pose. The underlying recognition process attaches
a parameterized model to every enrolled image that allows the parameter-controlled transformation of the stored biometric template into a wide range of poses. We also propose a method for accurate automatic landmark localization in conjunction with pose estimation, which is required by this approach. The approach is extensible to other problems in the domain of face recognition, for instance facial expression. In the experimental section we present an analysis with respect to accuracy and compare the computational effort with that of a standard approach.
An energy minimizing snake algorithm that runs over a grid is designed and used to reconstruct high resolution 3D human faces from pairs of stereo images. The accuracy of reconstructed 3D data from stereo depends highly on how well stereo correspondences are established during the feature matching step. Establishing stereo correspondences on human faces is often ill posed and hard to achieve because of uniform texture, slow changes in depth, occlusion, and lack of gradient. We designed an energy minimizing algorithm that accurately finds correspondences on face images despite the aforementioned characteristics. The algorithm helps establish stereo correspondences unambiguously by applying a coarse-to-fine energy minimizing snake in grid format and yields a high resolution reconstruction at nearly every point of the image. Initially, the grid is stabilized using matches at a few selected high confidence edge points. The grid then gradually and consistently spreads over the low gradient regions of the image to reveal the accurate depths of object points. The grid applies its internal energy to approximate mismatches in occluded and noisy regions and to maintain smoothness of the reconstructed surfaces. The grid works in such a way that with every increment in reconstruction resolution, less time is required to establish correspondences. The snake used the curvature of the grid and gradient of image regions to automatically select its energy parameters and approximate the unmatched points using matched points from previous iterations, which also accelerates the overall matching process. The algorithm has been applied for the reconstruction of 3D human faces, and experimental results demonstrate the effectiveness and accuracy of the reconstruction.
In this paper, we investigate the use of 3D surface geometry for face recognition and compare it to one based on color map information. The 3D surface and color map data are from the CAESAR anthropometric database. We find that the recognition performance is not very different between 3D surface and color map information using a principal component analysis algorithm. We also discuss the different techniques for the combination of the 3D surface and color map information for multi-modal recognition by using different fusion approaches and show that there is significant improvement in results. The effectiveness of various techniques is compared and evaluated on a dataset with 200 subjects in two different positions.
The Enhanced Border Security and Visa Entry Reform Act of 2002 requires that the Visa Waiver Program be available only to countries that have a program to issue to their nationals machine-readable passports incorporating biometric identifiers complying with applicable standards established by the International Civil Aviation Organization (ICAO). In June 2002, the New Technologies Working Group of ICAO unanimously endorsed the use of face recognition (FR) as the globally interoperable biometric for machine-assisted identity confirmation with machine-readable travel documents (MRTDs), although Member States may elect to use fingerprint and/or iris recognition as additional biometric technologies. The means and formats are still being developed through which biometric information might be stored in the constrained space of integrated circuit chips embedded within travel documents. Such information will be stored in an open, yet unalterable and very compact format, probably as digitally signed and efficiently compressed images.
The objective of this research is to characterize the many factors that affect FR system performance with respect to the legislated mandates concerning FR. A photograph acquisition environment and a commercial face recognition system have been installed at Mitretek, and over 1,400 images have been collected of volunteers.
The image database and FR system are being used to analyze the effects of lossy image compression, individual differences, such as eyeglasses and facial hair, and the acquisition environment on FR system performance. Images are compressed by varying ratios using JPEG2000 to determine the trade-off points between recognition accuracy and compression ratio. The various acquisition factors that contribute to differences in FR system performance among individuals are also being measured. The results of this study will be used to refine and test efficient face image interchange standards that ensure highly accurate recognition, both for automated FR systems and human inspectors. Working within the M1-Biometrics Technical Committee of the InterNational Committee for Information Technology Standards (INCITS) organization, a standard face image format will be tested and submitted to organizations such as ICAO.
This paper presents an automated system for human identification using dental radiographs. The goal of the system is to automatically archive dental images and enable identification based on shapes of the teeth in bitewing images. During archiving, the system builds the antemortem (AM) database, where it segments the teeth and the bones, separates each tooth into crown and root, and stores the contours of the teeth in the database. During retrieval, given a dental image of a postmortem (PM), the proposed system identifies the person from the AM database by automatically matching extracted teeth contours from the PM image to the teeth contours in the AM database. Experiments on a small database show that our method is effective for teeth segmentation, separation of teeth into crowns and roots, and matching.
In this paper we describe a new behavioural biometric technique
based on human computer interaction. We developed a system that
captures the user interaction via a pointing device, and uses this
behavioural information to verify the identity of an individual.
Using statistical pattern recognition techniques, we developed a sequential classifier that processes user interaction, according to which the user identity is considered genuine if a predefined accuracy level is achieved, and the user is classified as an impostor otherwise. Two statistical models for the features were
tested, namely Parzen density estimation and a unimodal distribution. The system was tested with different numbers of users in order to evaluate the scalability of the proposal. Experimental results show that the normal user interaction with the computer via a pointing device entails behavioural information with discriminating power, that can be explored for identity authentication.
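A simplified sketch of the Parzen-density model, using SciPy's Gaussian kernel density estimate and a plain (non-sequential) threshold decision; the sequential classifier described in the paper is not reproduced here:

```python
import numpy as np
from scipy.stats import gaussian_kde   # Parzen window with Gaussian kernels

def build_user_model(train_features):
    """Parzen (kernel) density estimate of a user's pointing-device
    interaction features.  train_features: (d, n) array of d-dimensional
    feature vectors, one column per observed interaction."""
    return gaussian_kde(train_features)

def verify(model, session_features, threshold):
    """Average log-density of the features observed in a new session under
    the claimed user's model; the identity is accepted when the value
    exceeds a threshold chosen on validation data."""
    log_density = np.log(model(session_features) + 1e-300)
    return float(log_density.mean()) >= threshold
```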
This paper on multi-modal continuous identity authentication contains four main sections. The first section describes the security issue we are addressing with the use of continuous identity authentication and describes the research our laboratory has been doing with different types of passive physiological sensors and how the multi-modal sensor data can be applied to continuous identity authentication. The second section describes a pilot study measuring temperature, GSR, eye movement, blood flow and click pressure of thirteen subjects performing a computer task. The third section gives preliminary results that show continuous authentication of identity above 80 percent was possible using discriminant analysis with a limited set of all of the measures for all but two subjects. The fourth section discusses the results and the potential of continuous identity authentication.
With the development of the current networked society, personal identification based on biometrics has received more and more attention. Iris recognition delivers satisfying performance due to its high reliability and non-invasiveness. In an iris recognition system, preprocessing, especially iris localization, plays a very important role. The speed and performance of an iris recognition system are crucial, and they are limited to a great extent by the results of iris localization. Iris localization includes finding the iris boundaries (inner and outer) and the eyelids (lower and upper). In this paper, we propose an iris localization algorithm based on texture segmentation. First, we use the low-frequency information of the wavelet transform of the iris image for pupil segmentation and localize the iris with a differential integral operator. Then the upper eyelid edge is detected after the eyelashes are segmented. Finally, the lower eyelid is localized using parabolic curve fitting based on gray-value segmentation. Extensive experimental results show that the algorithm has satisfying performance and good robustness.
Passengers with immigrant visas are a major concern at international airports due to the various fraud operations identified. To curb tampering with genuine visas, visas should contain human identification information. Biometric characteristics are a common and reliable way to authenticate the identity of an individual [1]. A Multimodal Biometric Human Identification System (MBHIS) that integrates the iris code, DNA fingerprint, and the passport number into the visa photograph using a digital watermarking scheme is presented. Digital watermarking is well suited for any system requiring high security [2]. Ophthalmologists [3], [4], [5] have suggested that an iris scan is an accurate and nonintrusive optical fingerprint. A DNA sequence can be used as a genetic barcode [6], [7]. While issuing a visa at the US consulates, the DNA sequence isolated from saliva, the iris code, and the passport number shall be digitally watermarked into the visa photograph. This information is also recorded in the 'immigrant database'. A 'forward watermarking phase' combines a 2-D DWT-transformed digital photograph with the personal identification information. A 'detection phase' extracts the watermarked information from this visa photograph at the port of entry, from which the iris code can be used for identification and the DNA biometric for authentication, if an anomaly arises.
Human identification technologies are important threat countermeasures in minimizing select infrastructure vulnerabilities. Properly targeted countermeasures should be selected and integrated into an overall security solution based on disciplined analysis and modeling. Available data on infrastructure value, threat intelligence, and system vulnerabilities are carefully organized, analyzed and modeled. Prior to design and deployment of an effective
countermeasure, the proper role and appropriateness of technology in addressing the overall set of vulnerabilities is established. Deployment of biometric systems, as with other countermeasures, introduces potentially heightened vulnerabilities into the system. Heightened vulnerabilities may arise from both the newly introduced system complexities and an unfocused understanding of the set of vulnerabilities impacted by the new countermeasure. The countermeasure's own inherent vulnerabilities and those introduced by its integration with the existing system are analyzed and modeled to determine the overall vulnerability impact. The United States infrastructure is composed of government and private assets. Infrastructure assets are valued by their potential impact on several components: human physical safety, physical/information replacement/repair cost, potential contribution to future loss (criticality in weapons production), direct productivity output, national macro-economic output/productivity, and information integrity. These components must be considered in determining the overall impact of an infrastructure security breach. Cost/benefit
analysis is then incorporated in the security technology deployment decision process. Overall security risks based on system vulnerabilities and threat intelligence determines areas of potential benefit. Biometric countermeasures are often considered when additional security at intended points of entry would minimize vulnerabilities.
The quality of a fingerprint is essential to the performance of an Automatic Fingerprint Identification System (AFIS). Quality may be classified by the clarity and regularity of the ridge-valley structures [1,2]. One may calculate the thickness of ridges and valleys to measure clarity and regularity. However, calculating thickness is not feasible in a poor-quality image, especially a severely damaged image containing broken ridges (or valleys). To overcome this difficulty, the proposed approach employs statistical properties of a local block, namely the mean and spread of the thickness of both ridges and valleys. The mean value is used to determine whether a fingerprint is wet or dry: if a fingerprint is wet, black pixels are dominant and the average ridge thickness is larger than the average valley thickness, and vice versa for a dry fingerprint. In addition, the standard deviation is used to determine the severity of damage. In this study, quality is divided into three categories based on the two statistical properties mentioned above: wet, good, and dry. The number of low-quality blocks is used to measure the global quality of a fingerprint, and the distribution of poor blocks is also measured using Euclidean distances between groups of poor blocks; under this scheme, locally concentrated poor blocks decrease the overall quality of an image. Experimental results on fingerprint images captured by optical devices as well as by a rolling method show that the wet and dry parts of the images were successfully detected. Enhancement of an image using morphological techniques that modify the detected poor-quality blocks is illustrated in Section 3. However, more work needs to be done on designing a scheme that incorporates both the number of poor blocks and their distribution into a global quality measure.
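A minimal sketch of the local-block statistics described above, assuming a binarized block in which ridge pixels are 0 and valley pixels are 1; the run-length thresholds are illustrative, not the paper's values.

    import numpy as np

    def block_quality(block_bin, t_ratio=1.5, t_spread=6.0):
        """Classify a binarized block (0 = ridge pixel, 1 = valley pixel) as 'wet',
        'good', or 'dry' from ridge/valley run-length statistics, and flag severe
        damage from the spread of the run lengths.  Thresholds are illustrative."""
        runs_ridge, runs_valley = [], []
        for row in block_bin:
            cuts = np.flatnonzero(np.diff(row)) + 1            # run boundaries in this scan line
            for seg in np.split(row, cuts):
                (runs_ridge if seg[0] == 0 else runs_valley).append(len(seg))
        if not runs_ridge or not runs_valley:                  # block is all ridge or all valley
            return ('wet' if runs_ridge else 'dry'), True
        mean_r, mean_v = np.mean(runs_ridge), np.mean(runs_valley)
        damaged = np.std(runs_ridge + runs_valley) > t_spread  # large spread -> broken structure
        if mean_r > t_ratio * mean_v:
            return 'wet', damaged                              # thick ridges dominate
        if mean_v > t_ratio * mean_r:
            return 'dry', damaged                              # thick valleys dominate
        return 'good', damaged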
With the development of biometric technology, human-face recognition has become one of the most widely accepted forms of identification, and over the past thirty years it has received more and more attention. Unfortunately, most face recognition systems with a large-scale facial image database cannot be put into practice because they lack sufficient recognition speed and precision; the recognition time increases drastically as the number of faces increases. To improve recognition rates, we can first classify the large-scale facial image database into several comparatively small classes using a specific criterion, and then perform recognition within a class. If the resulting class is still too large, another classification can be applied with a different criterion until the class is suitable for recognition. This method is named the Multi-Layer Classification Method (MLCM) in our paper. To classify an unclassified face into a small class, a multiclass classifier must be set up. Because the Mahalanobis distance classifier assumes normally distributed features, it is employed in our study. The results show that the overall recognition rate increases drastically for a large-scale facial image database.
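A minimal sketch of the Mahalanobis-distance classifier used to route an unclassified face into one of the smaller classes; the feature representation and the pooled-covariance estimate are simplifying assumptions.

    import numpy as np

    def fit_mahalanobis(features, labels):
        """Estimate per-class means and a pooled covariance from training feature vectors."""
        features, labels = np.asarray(features, dtype=float), np.asarray(labels)
        classes = np.unique(labels)
        means = {c: features[labels == c].mean(axis=0) for c in classes}
        centered = np.vstack([features[labels == c] - means[c] for c in classes])
        cov = np.cov(centered, rowvar=False)
        return means, np.linalg.pinv(cov)          # pseudo-inverse guards against singularity

    def classify(x, means, cov_inv):
        """Assign x to the class with the smallest squared Mahalanobis distance."""
        def d2(mu):
            diff = x - mu
            return float(diff @ cov_inv @ diff)
        return min(means, key=lambda c: d2(means[c]))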
Law enforcement agencies have been exploiting biometric identifiers for decades as key tools in forensic identification. With the evolution in information technology and the huge volume of cases that need to be investigated by forensic specialists, it has become important to automate forensic identification systems.
While ante mortem (AM) identification, i.e., identification prior to death, is usually possible through comparison of many biometric identifiers, postmortem (PM) identification, i.e., identification after death, is impossible using behavioral biometrics (e.g., speech, gait). Moreover, under severe circumstances such as those encountered in mass disasters (e.g., airplane crashes), or when identification is attempted more than a couple of weeks postmortem, most physiological biometrics cannot be employed for identification because the soft tissues of the body decay to unidentifiable states. A postmortem biometric identifier therefore has to resist the early decay that affects body tissues. Because of their survivability and diversity, the best candidates for postmortem biometric identification are dental features.
In this paper we present an overview of an automated dental identification system for missing and unidentified persons. This dental identification system can be used by law enforcement and security agencies in both forensic and biometric identification. We also present techniques for dental segmentation of X-ray images; these techniques address the problems of isolating each individual tooth and extracting its contour.
Singular value (SV) feature vectors of face images have recently been used as features for face recognition. Although SVs have important algebraic and geometric invariance properties and are insensitive to noise, they represent a face image only in its own eigen-space, spanned by the two orthogonal matrices of its singular value decomposition (SVD), and thus contain little information useful for recognition. This study concentrates on extracting more informative features from a frontal, upright view image based on SVD and proposes an improved method for face recognition. After being standardized by intensity normalization, all training and testing face images are projected onto a uniform eigen-space obtained from the SVD of a standard face image. For greater computational efficiency, the dimension of the uniform eigen-space is reduced by discarding the eigenvectors whose corresponding eigenvalues are close to zero. A Euclidean distance classifier is adopted for recognition. Two standard databases, from Yale University and the Olivetti Research Laboratory, are selected to evaluate the recognition accuracy of the proposed method; these databases include face images with different expressions, small occlusions, different illumination conditions, and different poses. Experimental results on the two face databases show the effectiveness of the method and its insensitivity to facial expression, illumination, and pose.
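A sketch of the projection-and-matching pipeline described above; the choice of the 'standard' face, the intensity normalization, and the truncation threshold for near-zero singular values are assumptions, and database loading is omitted.

    import numpy as np

    def build_uniform_eigenspace(standard_face, eps=1e-3):
        """SVD of the standard face image; keep only the left singular vectors whose
        singular values are not close to zero."""
        U, s, Vt = np.linalg.svd(standard_face, full_matrices=False)
        k = int(np.sum(s > eps * s.max()))
        return U[:, :k]                              # reduced uniform eigen-space basis

    def project(face, basis):
        """Intensity-normalize a face image and project it onto the uniform eigen-space."""
        face = (face - face.mean()) / (face.std() + 1e-8)
        return basis.T @ face                        # coefficient matrix used as the feature

    def recognize(probe_feat, gallery_feats, gallery_ids):
        """Nearest neighbour under Euclidean (Frobenius) distance."""
        d = [np.linalg.norm(probe_feat - g) for g in gallery_feats]
        return gallery_ids[int(np.argmin(d))]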
This paper presents an on-line handwriting authentication system for text-independent Chinese handwriting. The proposed strategy operates at the stroke level, and the writing strokes and interstrokes are separated stepwise. The writing features are extracted from the dynamics of substrokes and interstrokes, including the writing velocity, the pressure, and the angle between the pen and the writing surface. To alleviate the effect of the number of written characters on the performance of the algorithm, we adopt feature vectors of selected dimensions. In live experiments the authentication results are promising.
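A sketch of the kind of per-substroke dynamic features described above; the tablet sampling format, units, and the particular selected dimensions are assumptions.

    import numpy as np

    def substroke_features(t, x, y, pressure, pen_angle):
        """Dynamic features of one substroke from pen-tablet samples (equal-length arrays)."""
        dt = np.maximum(np.diff(t), 1e-6)
        v = np.hypot(np.diff(x), np.diff(y)) / dt            # writing velocity
        return np.array([v.mean(), v.std(),
                         np.mean(pressure), np.std(pressure),
                         np.mean(pen_angle), np.std(pen_angle)])

    def writer_template(substrokes, dims=(0, 2, 4)):
        """Average selected feature dimensions over substrokes so the template depends
        less on how many characters were written."""
        feats = np.vstack([substroke_features(*s) for s in substrokes])
        return feats[:, list(dims)].mean(axis=0)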
The application of image processing as a pre-processing step for face recognition can significantly improve recognition accuracy. However, different image processing techniques provide different advantages, enhancing specific features or normalising certain capture conditions. We introduce a new method of isolating these useful qualities from a range of image subspaces using Fisher's linear discriminant and combining them to create a more effective image subspace, exploiting the advantages offered by numerous image processing techniques and ultimately reducing recognition error. Systems are evaluated by performing up to 258,840 verification operations on a large test set of images presenting typical difficulties for recognition. Results are presented in the form of error rate curves, showing false acceptance rate (FAR) vs. false rejection rate (FRR), generated by varying a decision threshold applied to the Euclidean distance metric computed in the combined face space.
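The error-rate curves can be generated as sketched below from two sets of Euclidean distances measured in the combined face space, one for genuine (client) attempts and one for impostor attempts; the threshold grid is an implementation detail, not taken from the paper.

    import numpy as np

    def far_frr_curve(genuine_d, impostor_d, n_points=200):
        """Sweep a decision threshold over distances; accept when distance <= threshold."""
        genuine_d, impostor_d = np.asarray(genuine_d), np.asarray(impostor_d)
        thresholds = np.linspace(0.0, max(genuine_d.max(), impostor_d.max()), n_points)
        far = np.array([(impostor_d <= t).mean() for t in thresholds])  # impostors wrongly accepted
        frr = np.array([(genuine_d > t).mean() for t in thresholds])    # clients wrongly rejected
        return thresholds, far, frr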
This paper proposes a scheme for designing a digital-certificate apparatus to replace the traditional password-based digital certificate. The apparatus uses fingerprint identification to validate the user and enhance the security of the digital certificate. The secure microprocessor and SRAM in the apparatus prevent the digital certificate from being duplicated. In addition, the software/hardware cryptography and the interaction with external computers through a USB interface make the digital certificate more secure than traditional password-based digital certificate techniques.
The performance of any fingerprint recognizer depends highly on the fingerprint image quality, and different types of noise in fingerprint images pose great difficulty for recognizers. Most Automatic Fingerprint Identification Systems (AFIS) use some form of image enhancement. Although several methods have been described in the literature, there is still scope for improvement; in particular, an effective methodology for cleaning the valleys between the ridge contours is lacking. We observe that noisy valley pixels and the pixels in interrupted ridge-flow gaps are "impulse noise". This paper therefore describes a new approach to fingerprint image enhancement based on the integration of an anisotropic filter and a directional median filter (DMF): Gaussian-distributed noise is reduced effectively by the anisotropic filter, while impulse noise is reduced efficiently by the DMF. Whereas the traditional median filter is an effective method for removing salt-and-pepper noise and other small artifacts, the proposed DMF not only performs these tasks but can also join broken fingerprint ridges, fill holes in fingerprint images, smooth irregular ridges, and remove small artifacts between ridges. The enhancement algorithm has been implemented and tested on fingerprint images from FVC2002, using images of varying quality to evaluate the performance of our approach. We have compared our method with other methods described in the literature in terms of matched minutiae, missed minutiae, spurious minutiae, and flipped minutiae (end points and bifurcation points interchanged). Experimental results show our method to be superior to those described in the literature.
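A simplified sketch of a directional median filter: for each pixel, the median is taken along the locally estimated ridge orientation, so valley noise is cleaned and small ridge breaks are bridged without blurring across ridges. The gradient-based orientation estimate and the window length are generic assumptions rather than the paper's exact DMF.

    import numpy as np

    def ridge_orientation(img):
        """Per-pixel ridge orientation from image gradients (block averaging of the
        gradient products is usual; omitted here for brevity)."""
        gy, gx = np.gradient(img.astype(float))
        gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
        return 0.5 * np.arctan2(2 * gxy, gxx - gyy) + np.pi / 2   # ridge is normal to gradient

    def directional_median(img, theta, half_len=4):
        """Median of samples taken along the local ridge direction at every pixel."""
        h, w = img.shape
        out = np.empty_like(img, dtype=float)
        for i in range(h):
            for j in range(w):
                c, s = np.cos(theta[i, j]), np.sin(theta[i, j])
                samples = []
                for k in range(-half_len, half_len + 1):
                    r, q = int(round(i + k * s)), int(round(j + k * c))
                    if 0 <= r < h and 0 <= q < w:
                        samples.append(img[r, q])
                out[i, j] = np.median(samples)
        return out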
This paper introduces a MAP-MRF (maximum a posteriori probability estimation of a Markov random field) model that uses the relational structures among features to estimate the geometric transformation in fingerprint identification. Additionally, relational projected distances, a novel feature derived from pairs of minutiae, are used for transformation-invariant description. Promising experimental results show that the method is effective.
This paper presents a novel algorithm to quickly detect faces in still gray-level images. The algorithm first detects upper-eyelid pixels using the eye region's gray-level and gradient information, which can be computed very quickly using the integral image. Then, according to anthropometric characteristics of the human face, rules are applied to select eye candidates and eye-pair candidates from the detected eyelid pixels. Finally, a hierarchical method is proposed to verify whether an eye-pair candidate corresponds to a true face. Experiments on the BioID face database show that the proposed method is very robust in detecting upper-eyelid pixels and that the face detection rate is satisfactory. In terms of eye detection accuracy, the proposed method outperforms other methods.
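The integral image that makes the gray-level and gradient sums cheap to evaluate can be computed as follows (a standard construction; the eyelid-specific rules and the hierarchical verification are not reproduced here).

    import numpy as np

    def integral_image(img):
        """ii[i, j] = sum of img over the rectangle [0..i-1, 0..j-1] (zero-padded on top/left)."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
        ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
        return ii

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] in O(1) using four table lookups."""
        return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]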
This paper utilizes the intra-difference in still images to segment a face from its background and then combines the intra-difference detection result with the eigenface/eigenfeature methods to identify the face. This diverse scheme addresses the accuracy problem in practical applications, broadening the use of face recognition to more versatile situations such as secure building entrances, customs, and mugshot spotting. The organic combination of the intra-difference detection method and the eigenface/eigenfeature methods into one system is shown to be more robust and to have a better identification rate than either method alone. The paper first addresses the real-time accuracy issue and the need for pre-processing (mainly normalization), and then proposes to use the intra-difference to effectively segment a human face. The segmented face is further processed by both the intra-difference detection method and the eigenface/eigenfeature methods to determine its identity. Correspondingly, the proposed algorithm consists of three parts: segmentation, pre-processing, and multi-phase face identification that fuses the results from the intra-difference detection method and the eigenface/eigenfeature methods.
This paper describes the results of a collaborative effort between the University of New Hampshire (UNH) and the Mitretek Systems (MTS) Center for Criminal Justice Technology (CCJT). Mitretek conducted an investigation into the impact of anticipated biometrically encoded driver licenses (DLs) on law enforcement. As part of this activity, Mitretek teamed with UNH to leverage the results of UNH's Project54 and develop a pilot Driver License Interoperability Test Bed to explore both implementation and operational aspects associated with reading and authenticating biometrically encoded DLs in law enforcement scenarios. The test bed enables the exploration of new methods, techniques (both hardware and software), and standards in a structured fashion. Spearheaded by the American Association of Motor Vehicle Administrators (AAMVA) and the International Committee for Information Technology Standards Technical Group M1 (INCITS-M1) initiatives, standards involving both DLs and biometrics, respectively, are evolving at a rapid pace. In order to ensure that the proposed standards will provide for interstate interoperability and proper functionality for the law enforcement community, it is critical to investigate the implementation and deployment issues surrounding biometrically encoded DLs. The test bed described in this paper addresses this and will provide valuable feedback to the standards organizations, the states, and law enforcement officials with respect to implementation and functional issues that are exposed through exploration of actual test systems. The knowledge gained was incorporated into a report prepared by MTS to describe the anticipated impact of biometrically encoded DLs on law enforcement practice.
A novel approach to iris recognition is proposed in this paper. It differs from traditional iris recognition systems in that it generates a one-dimensional iris signature that is translation, rotation, illumination, and scale invariant. The Du Measurement is used as the matching mechanism, and the approach generates the most probable matches instead of only the best match. One merit of this method is that it allows users to enroll with, or be identified from, poor-quality iris images that would be rejected by other methods; in this way, an iris image could potentially be identified through another level of analysis. Another merit is that the approach could improve iris identification efficiency: the system only needs to store a one-dimensional signal, and no circular rotation is needed in the matching process, so matching could be much faster.
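One simple way to obtain a rotation-invariant one-dimensional signature is sketched below: averaging a polar-unwrapped iris over the angular coordinate leaves a radial profile that needs no circular shifting at match time. This construction, and the plain Euclidean ranking standing in for the Du Measurement, are illustrative assumptions rather than the paper's exact method.

    import numpy as np

    def radial_signature(norm_iris, n_bins=64):
        """norm_iris: unwrapped iris in polar coordinates (rows = radius, cols = angle).
        Averaging over angle removes in-plane rotation; z-scoring removes global
        illumination and scale effects."""
        sig = norm_iris.mean(axis=1)
        sig = np.interp(np.linspace(0, len(sig) - 1, n_bins), np.arange(len(sig)), sig)
        return (sig - sig.mean()) / (sig.std() + 1e-8)

    def rank_candidates(probe_sig, gallery):
        """Return gallery identities ordered from most to least probable match
        (a distance-based stand-in for the Du Measurement)."""
        scored = sorted(gallery.items(), key=lambda kv: np.linalg.norm(probe_sig - kv[1]))
        return [identity for identity, _ in scored]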
Biometrics has become an increasingly important part of the overall set of tools for securing a wide range of facilities, areas, information, and environments. At the core of any biometric verification/identification technique lies the development of the algorithm itself. Much research has been performed in this area with varying degrees of success, and it is well recognized within the biometrics community that substantial room for improvement exists. The focus of this paper is to describe ongoing biometrics algorithm development efforts by the authors. An overview of the data collection, algorithm development, and testing efforts is given. The focus of the research is to develop core algorithmic concepts that serve as the basis for robust techniques in both the face and speech modalities. A broad overview of the methodology is provided with some sample results. The end goal is a robust, modular set of tools that can balance complexity against the need for accuracy and robustness across a wide variety of applications.
Effortless and robust recognition of human faces in video sequences is a practically important but technically very challenging problem, especially in the presence of pose and lighting variability. Here we study the statistical structure of one such sequence and observe that images of both facial features and the full head lie on low-dimensional manifolds embedded in very high-dimensional spaces. We apply Independent Manifold Analysis (IMA) to learn these manifolds and use them to track local features to sub-pixel accuracy. We utilize sub-pixel resampling, which allows a very smooth estimate of head pose. In the process, we learn a manifold model of the head and use it to partially compensate for pose. Finally, in experiments on the standard FERET database, we report that this pose compensation results in more than an order-of-magnitude reduction of the equal error rate.
A novel polarization-encoded fingerprint verification system for high-security applications is presented. During enrolment, the fingerprint pattern is encoded with certain parameters of the polarization ellipse. We show through experimental and simulation results that polarization encoding of fingerprint images can play a significant role in fingerprint verification, especially in applications where high security is of great concern.
Face authentication involves capturing the face images and representing the image suitable for matching with the reference template. In this paper, we discuss a new representation for matching that involves processing the image using 1-D processing that offers potential speed improvements over more conventional 2-D processing methods. Although the test application that is being considered here is to access a computer using face verification, this method can be used in other face verification applications. In this application, the subject is assumed to be cooperative, and the environment for capturing the face images is somewhat controlled. The proposed 1-D processing helps to locate the eyes, which in turn helps to normalize the face image for representation and matching. 1-D eigenanalysis is performed on the normalized face image to derive the eigenvectors. The face image is represented using components projected onto these eigenvectors. The 1-D PCA provides advantages over the conventional 2-D PCA in terms of providing a better model of the face in practical situations and providing robustness to local changes in the authentic images. We show that matching a test image with a reference image using the eigencomponents improves the discrimination between genuine and impostor face images. Our studies show good performance and it seems possible to obtain in practice an equal error rate (EER) close to zero.
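A sketch of the kind of 1-D processing involved: a horizontal projection profile to locate the eye row for normalization, and a row-wise eigenanalysis as a stand-in for the 1-D PCA. Both are illustrative assumptions; the paper's specific operators are not reproduced.

    import numpy as np

    def eye_row_from_profile(face_gray):
        """Horizontal integral projection: each row is reduced to its mean intensity,
        giving a 1-D signal; the eyes/eyebrows form a dark band, so the minimum of
        this profile in the upper half of the face estimates the eye row."""
        profile = face_gray.mean(axis=1)
        return int(np.argmin(profile[: face_gray.shape[0] // 2]))

    def one_d_pca_basis(training_rows, k=20):
        """1-D eigenanalysis stand-in: treat every image row (from many normalized
        training faces) as a sample vector and keep the top-k eigenvectors of their
        covariance; each row of a face is then represented by k projection coefficients."""
        X = training_rows - training_rows.mean(axis=0)
        w, V = np.linalg.eigh(np.cov(X, rowvar=False))
        return V[:, np.argsort(w)[::-1][:k]]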
We propose the definition and analysis of an optimal integral similarity score criterion for large-scale multimodal civil ID systems. First, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated; the empirical statistics were taken from real biometric tests. We then analyze the joint score distributions for a number of combined biometric tests, primarily for multiple-fingerprint solutions. Explicit and approximate relations for the optimal integral score, which provides the lowest FRR for a predefined FAR, have been obtained. The results of a real multiple-fingerprint test show good correspondence with the theoretical results over a wide range of false acceptance and false rejection rates.
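For reference, the textbook Neyman-Pearson form of this optimum, assuming independent per-modality scores s_1, ..., s_n with genuine and impostor densities p_i^gen and p_i^imp (quoted as standard background, not the paper's exact derivation): the FRR is minimized at a predefined FAR alpha by thresholding the likelihood ratio

    \Lambda(s_1,\dots,s_n) = \prod_{i=1}^{n} \frac{p_i^{\mathrm{gen}}(s_i)}{p_i^{\mathrm{imp}}(s_i)},
    \qquad \text{accept if } \Lambda \ge \tau, \qquad
    \Pr\{\Lambda \ge \tau \mid \text{impostor}\} = \alpha ,

so an equivalent integral score is \sum_i [\log p_i^{\mathrm{gen}}(s_i) - \log p_i^{\mathrm{imp}}(s_i)] compared against \log\tau.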
A novel kernel-based fusion strategy is presented. It is based on SVM classifiers, trade-off coefficients introduced into the standard SVM training and testing procedures, and quality measures of the input biometric signals. Experimental results on a prototype application based on voice and fingerprint traits are reported. The benefits of using the two modalities, compared with using only one of them, are revealed by a novel experimental procedure in which multi-modal verification performance tests are compared with multi-probe tests of the individual subsystems. Appropriate selection of the parameters of the proposed quality-based scheme leads to a quality-based fusion scheme that outperforms the raw fusion strategy, which does not consider quality signals. In particular, a relative improvement of 18% is obtained for a small SVM training set size by using only fingerprint quality labels.
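A rough sketch of quality-aware SVM score fusion using scikit-learn; representing each verification attempt by its pair of matcher scores and using the fingerprint quality labels as per-sample training weights is a simplification, not the paper's trade-off-coefficient scheme.

    import numpy as np
    from sklearn.svm import SVC

    def train_fusion(voice_scores, fp_scores, fp_quality, labels):
        """labels: 1 = genuine attempt, 0 = impostor attempt.
        The fused input is the pair of matcher scores; fingerprint quality labels
        down-weight low-quality training samples."""
        X = np.column_stack([voice_scores, fp_scores])
        clf = SVC(kernel='rbf', probability=True)
        clf.fit(X, labels, sample_weight=np.asarray(fp_quality, dtype=float))
        return clf

    def fused_decision(clf, voice_score, fp_score, threshold=0.5):
        """Accept if the fused genuine probability exceeds the operating threshold."""
        p = clf.predict_proba([[voice_score, fp_score]])[0, 1]
        return p >= threshold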
This paper describes an application of fractal dimensions to speech processing and speaker identification. Several dimensions can be used to characterize speech signals, such as the box dimension and the correlation dimension. We are mainly concerned with the generalized dimensions of speech signals, as they provide more information than any individual dimension; generalized dimensions of arbitrary order are used for speaker identification in this work. Based on the experimental data, an artificial phase space is generated and smooth behavior of the correlation integral is obtained in a straightforward and accurate analysis. Using the dimension D(2) derived from the correlation integral, the generalized dimension D(q) of an arbitrary order q is calculated. Moreover, experiments applying the generalized dimension to speaker identification have been carried out. A Chinese-language speech corpus dedicated to speaker recognition, PKU-SRSC, recorded by Peking University, was used in the experiments, and the results are compared with a baseline speaker identification system that uses MFCC features. The experimental results indicate the usefulness of fractal dimensions in characterizing a speaker's identity.
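For concreteness, the correlation integral and the generalized dimensions referred to above take the standard Grassberger-Procaccia form (quoted as background, with x_i denoting the reconstructed phase-space points and \Theta the Heaviside step function):

    C(r) = \frac{2}{N(N-1)} \sum_{i<j} \Theta\bigl(r - \lVert x_i - x_j \rVert\bigr),
    \qquad D(2) = \lim_{r\to 0} \frac{\log C(r)}{\log r},

    D(q) = \lim_{r\to 0} \frac{1}{q-1}\,
    \frac{\log \frac{1}{N}\sum_i \Bigl[\frac{1}{N-1}\sum_{j\ne i} \Theta\bigl(r - \lVert x_i - x_j \rVert\bigr)\Bigr]^{q-1}}{\log r},

which reduces to D(2) at q = 2; in practice the limits are estimated from the slope of the log-log plot over a scaling region.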
Biometrics is rapidly gaining acceptance as the technology that can meet the ever-increasing need for security in critical applications. Biometric systems automatically recognize individuals based on their physiological and behavioral characteristics. Hence, the fundamental requirement of any biometric recognition system is a human trait having several desirable properties such as universality, distinctiveness, permanence, collectability, acceptability, and resistance to circumvention. However, a human characteristic that possesses all these properties has not yet been identified, so none of the existing biometric systems provides perfect recognition and there is scope for improving their performance. Although characteristics like gender, ethnicity, age, height, weight, and eye color are not unique and reliable, they provide some information about the user. We refer to these characteristics as "soft" biometric traits and argue that they can complement the identity information provided by primary biometric identifiers such as fingerprint and face. This paper presents the motivation for utilizing soft biometric information and analyzes how soft biometric traits can be automatically extracted and incorporated into the decision-making process of the primary biometric system. Preliminary experiments were conducted on a fingerprint database of 160 users by synthetically generating soft biometric traits like gender, ethnicity, and height based on known statistics. The results show that the use of additional soft biometric user information significantly improves (by approximately 6%) the recognition performance of the fingerprint biometric system.
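A rough sketch of folding soft traits into the primary matcher's decision as a weighted log-likelihood combination; the trait models, weights, and score scaling are illustrative assumptions, not the paper's tuned framework.

    import numpy as np

    def combined_score(primary_score, soft_observations, soft_models, weights):
        """primary_score: fingerprint match score for the claimed identity (larger = better).
        soft_observations: e.g. {'gender': 'F', 'height_cm': 168}.
        soft_models: per-trait likelihoods P(observation | claimed identity).
        weights: small weights so unreliable soft traits cannot overrule the fingerprint."""
        total = np.log(primary_score + 1e-12)
        for trait, obs in soft_observations.items():
            total += weights[trait] * np.log(soft_models[trait](obs) + 1e-12)
        return total

    # illustrative usage with hypothetical trait models for one enrolled user
    models = {'gender': lambda g: 0.9 if g == 'F' else 0.1,
              'height_cm': lambda h: np.exp(-0.5 * ((h - 170.0) / 6.0) ** 2)}
    s = combined_score(0.83, {'gender': 'F', 'height_cm': 168},
                       models, {'gender': 0.2, 'height_cm': 0.1})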
In this paper, we propose an off-axis hologram watermarking technology that increases robustness against copying, falsification, and alteration of personal information by making it possible to embed information of various types and by providing a strong privacy function. To extract the embedded information, the technique requires the security keys, the hologram's depth information, and the information proportional to that depth at the same time; it can embed a large amount of information and can insert the owner's or provider's own information into digital content in the form of a three-dimensional hologram watermark, and therefore offers high security. The proposed technique is robust to various kinds of image processing and information damage owing to the hologram's redundancy, and it makes the embedded information hard to perceive visually. In particular, unlike existing watermarking, it can embed a large amount of information of various types with a strong privacy function by inserting the information into several frequency-transform planes using the Fresnel transform. We demonstrate the practical applicability and robustness of this Fresnel hologram watermarking technology through computer simulation.
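For reference, the Fresnel transform used to place the watermark in a depth-dependent plane is the standard Fresnel diffraction integral (quoted as background; the embedding rule applied on top of it is not reproduced here):

    U_z(x,y) = \frac{e^{ikz}}{i\lambda z} \iint U_0(\xi,\eta)\,
    \exp\!\Bigl\{\frac{ik}{2z}\bigl[(x-\xi)^2 + (y-\eta)^2\bigr]\Bigr\}\, d\xi\, d\eta,
    \qquad k = \frac{2\pi}{\lambda},

where the propagation depth z selects the transform plane; using several values of z gives the multiple frequency-transform planes into which information is inserted.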
Considerable progress has been made in face recognition research over the last decade, especially with the development of powerful models of face appearance (i.e., eigenfaces). Despite the variety of approaches and tools studied, however, face recognition is not accurate or robust enough to be deployed in uncontrolled environments. Recently, a number of studies have shown that infrared (IR) imagery offers a promising alternative to visible imagery due to its relative insensitivity to illumination changes. However, IR has other limitations, including that glass is opaque in the IR band; as a result, IR imagery is very sensitive to facial occlusion caused by eyeglasses. In this paper, we propose fusing IR with visible images, exploiting the relatively lower sensitivity of visible imagery to occlusions caused by eyeglasses. Two different fusion schemes have been investigated in this study: (1) image-based fusion performed in the wavelet domain and (2) feature-based fusion performed in the eigenspace domain. In both cases, we employ Genetic Algorithms (GAs) to find an optimum fusion strategy. To evaluate and compare the proposed fusion schemes, we have performed extensive recognition experiments using the Equinox face dataset and the popular method of eigenfaces. Our results show substantial improvements in recognition performance overall, suggesting that the idea of fusing IR with visible images for face recognition deserves further consideration.
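A minimal sketch of the image-based fusion scheme (1) using PyWavelets; the simple average-approximation/maximum-magnitude-detail rule below is a common default that stands in for the GA-optimized fusion strategy of the paper, and the images are assumed to be co-registered and equal-sized.

    import numpy as np
    import pywt

    def fuse_wavelet(visible, ir, wavelet='db2', level=3):
        """Fuse co-registered visible and IR face images in the wavelet domain:
        average the approximation coefficients, keep the detail coefficient with
        the larger magnitude at each location, then reconstruct."""
        cv = pywt.wavedec2(visible.astype(float), wavelet, level=level)
        ci = pywt.wavedec2(ir.astype(float), wavelet, level=level)
        fused = [0.5 * (cv[0] + ci[0])]                        # approximation band
        for dv, di in zip(cv[1:], ci[1:]):                     # (cH, cV, cD) per level
            fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                               for a, b in zip(dv, di)))
        return pywt.waverec2(fused, wavelet)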