This PDF file contains the front matter associated with SPIE Proceedings Volume 8712, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In large-scale biometric authentication systems such as US-VISIT (USA), ten-print scanners that capture four fingerprints simultaneously are used. In traditional systems the hand type (left or right) is specified explicitly, but detecting it automatically is difficult because of hand rotation and the opening and closing of the fingers. In this paper, we evaluate features extracted from hand images captured by a general optical scanner that are considered effective for detecting hand type. We then extend this knowledge to real fingerprint images and evaluate the accuracy of hand-type detection, obtaining an accuracy of about 80% using only three fingers (index, middle, and ring).
In this work we place some of the traditional biometrics work on fingerprint verification via the fuzzy vault scheme within a cryptographic framework. We show that the breaking of a fuzzy vault leads to decoding of Reed-Solomon codes from random errors, which has been proposed as a hard problem in the cryptography community. We provide a security parameter for the fuzzy vault in terms of the decoding problem, which gives context for the breaking of the fuzzy vault, whereas most of the existing literature measures the strength of the fuzzy vault in terms of its resistance to pre-defined attacks or by the entropy of the vault. We keep track of our security parameter, and provide it alongside ROC statistics. We also aim to be more aware of the nature of the fingerprints when placing them in the fuzzy vault, noting that the distribution of minutiae is far from uniformly random. The results we show provide additional support that the fuzzy vault can be a viable scheme for secure fingerprint verification.
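The vault-locking step at the heart of the scheme can be sketched in a few lines: genuine feature points are placed on a secret polynomial and hidden among chaff, so that unlocking without enough genuine points amounts to Reed-Solomon decoding from random errors. The field size, polynomial degree, and chaff count below are toy values for illustration, not the paper's parameters.

```python
# Toy fuzzy vault over GF(p): hide a secret polynomial by evaluating it at
# the minutiae values and burying the genuine points among random chaff.
import random

P = 97  # small prime field for the sketch

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial (lowest-degree coefficient first) at x mod p."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def lock(secret_coeffs, minutiae, n_chaff, rng):
    """Genuine points lie on the secret polynomial; chaff points do not."""
    vault = [(m, poly_eval(secret_coeffs, m)) for m in minutiae]
    used = set(minutiae)
    while len(vault) < len(minutiae) + n_chaff:
        x = rng.randrange(P)
        if x in used:
            continue
        y = rng.randrange(P)
        if y == poly_eval(secret_coeffs, x):
            continue  # chaff must not accidentally lie on the polynomial
        vault.append((x, y))
        used.add(x)
    rng.shuffle(vault)
    return vault

rng = random.Random(0)
secret = [13, 7, 3]            # degree-2 secret polynomial (toy key)
genuine = [5, 11, 23, 42, 68]  # stand-in for quantized minutiae
vault = lock(secret, genuine, n_chaff=20, rng=rng)

# A matching query (overlapping minutiae) recovers points on the polynomial;
# an attacker faces decoding Reed-Solomon codes from random errors.
on_poly = [(x, y) for (x, y) in vault if y == poly_eval(secret, x)]
assert len(on_poly) == len(genuine)
```

In a real system the secret polynomial encodes a cryptographic key and the unlocking side runs an actual Reed-Solomon decoder over the candidate points.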
Our group believes that the evolution of fingerprint capture technology is in transition to include 3-D non-contact fingerprint capture. More specifically we believe that systems based on structured light illumination provide the highest level of depth measurement accuracy. However, for these new technologies to be fully accepted by the biometric community, they must be compliant with federal standards of performance. At present these standards do not exist for this new biometric technology. We propose and define a set of test procedures to be used to verify compliance with the Federal Bureau of Investigation’s image quality specification for Personal Identity Verification single fingerprint capture devices. The proposed test procedures include: geometric accuracy, lateral resolution based on intensity or depth, gray level uniformity and flattened fingerprint image quality. Several 2-D contact analogies, performance tradeoffs and optimization dilemmas are evaluated and proposed solutions are presented.
The widespread deployment of surveillance cameras has raised serious privacy concerns. Many privacy-enhancing schemes have been proposed to automatically redact images of trusted individuals in the surveillance video. To identify these individuals for protection, the most reliable approach is to use biometric signals such as iris patterns as they are immutable and highly discriminative. In this paper, we propose a privacy data management system to be used in a privacy-aware video surveillance system. The privacy status of a subject is anonymously determined based on her iris pattern. For a trusted subject, the surveillance video is redacted and the original imagery is considered to be the privacy information. Our proposed system allows a subject to access her privacy information via the same biometric signal for privacy status determination. Two secure protocols, one for privacy information encryption and the other for privacy information retrieval are proposed. Error control coding is used to cope with the variability in iris patterns and efficient implementation is achieved using surrogate data records. Experimental results on a public iris biometric database demonstrate the validity of our framework.
Prior research has shown that manually-segmented eyebrows can be used for recognition purposes. However, eyebrow recognition is not as useful without an automated segmentation algorithm. We propose a method to automatically outline the eyebrows in a face using active shape models. We train several models using the images from the Face Recognition Grand Challenge and find that including more landmark points around the eyebrows and including the eyes in the model are beneficial. Our eyebrow active shape model gives a 38.6% improvement over eyebrow segmentation obtained using an open-source face active shape model. When comparing the automatically segmented regions with manual segmentation, we achieve 87% true overlap score with a 12% false overlap score.
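The true/false overlap scores quoted above compare the automatically segmented region against the manual one. The paper's exact definitions are not given here, so the sketch below uses one common convention: true overlap is the fraction of the manual mask covered by the automatic mask, and false overlap is the fraction of the automatic mask falling outside the manual one.

```python
# Illustrative overlap scoring of an automatic segmentation mask against a
# manual ground-truth mask, both represented as sets of (row, col) pixels.
def overlap_scores(auto_pixels, manual_pixels):
    auto, manual = set(auto_pixels), set(manual_pixels)
    true_overlap = len(auto & manual) / len(manual)   # coverage of ground truth
    false_overlap = len(auto - manual) / len(auto)    # spill outside ground truth
    return true_overlap, false_overlap

manual = {(r, c) for r in range(10) for c in range(20)}      # manual eyebrow mask
auto = {(r, c) for r in range(1, 11) for c in range(2, 22)}  # shifted automatic mask
t, f = overlap_scores(auto, manual)
print(round(t, 2), round(f, 2))  # 0.81 0.19
```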
In this paper, we propose a methodology for cross matching color face images and Short Wave Infrared (SWIR) face images reliably and accurately. We first adopt a recently designed Boosted and Improved Local Gabor Pattern (ILGP) encoding and matching technique to encode face images in both visible and SWIR spectral bands. We then apply newly developed feature selection methods to prune irrelevant information in encoded data and to improve performance of the Boosted ILGP. The two newly developed feature selection methods are: (1) genuine segment score-based thresholding and (2) AdaBoost-inspired methods. We further compare the performance of the original Boosted ILGP face recognition method with the performance of the modified method that involves one of the proposed feature selection approaches. Under a general parameter setup, significant performance improvement is observed.
Iris recognition is one of the most reliable biometric technologies for identity recognition and verification, but it has not been used in a forensic context because the representation and matching of iris features are not straightforward for traditional iris recognition techniques. In this paper we concentrate on the iris crypt as a visible feature used to represent the characteristics of irises in a similar way to fingerprint minutiae. The matching of crypts is based on their appearances and locations. The number of matching crypt pairs found between two irises can be used for identity verification and the convenience of manual inspection makes iris crypts a potential candidate for forensic applications.
Recent research in iris recognition has established the impact of non-cosmetic soft contact lenses on the recognition performance of iris matchers. Researchers at Notre Dame demonstrated an increase in False Reject Rate (FRR) when an iris without a contact lens was compared against the same iris with a transparent soft contact lens. Detecting the presence of a contact lens in ocular images can, therefore, be beneficial to iris recognition systems. This study proposes a method to automatically detect the presence of non-cosmetic soft contact lenses in ocular images acquired in the Near Infrared (NIR) spectrum. While cosmetic lenses are more easily discernible, detecting non-cosmetic lenses is substantially more difficult and poses a significant challenge to iris researchers. In this work, the lens boundary is detected by traversing a small annular region in the vicinity of the outer boundary of the segmented iris and locating candidate points corresponding to the lens perimeter. Candidate points are identified by examining intensity profiles in the radial direction within the annular region. The proposed detection method is evaluated on two databases: ICE 2005 and MBGC Iris. In the ICE 2005 database, a correct lens detection rate of 72% is achieved with an overall classification accuracy of 76%. In the MBGC Iris database, a correct lens detection rate of 70% is obtained with an overall classification accuracy of 66.8%. To the best of our knowledge, this is one of the earliest works attempting to detect the presence of non-cosmetic soft contact lenses in NIR ocular images. The results of this research suggest the possibility of detecting soft contact lenses in ocular images but highlight the need for further research in this area.
A novel two-stage protection scheme for automatic iris recognition systems against masquerade attacks carried out with synthetically reconstructed iris images is presented. The method uses different characteristics of real iris images to differentiate them from the synthetic ones, thereby addressing important security flaws detected in state-of-the-art commercial systems. Experiments are carried out on the publicly available Biosecure Database and demonstrate the efficacy of the proposed security enhancing approach.
Iris recognition is among the highest accuracy biometrics. However, its accuracy relies on controlled high quality capture data and is negatively affected by several factors such as angle, occlusion, and dilation. Non-ideal iris recognition is a new research focus in biometrics. In this paper, we present a gaze estimation method designed for use in an off-angle iris recognition framework based on the ORNL biometric eye model. Gaze estimation is an important prerequisite step to correct an off-angle iris image. To achieve the accurate frontal reconstruction of an off-angle iris image, we first need to estimate the eye gaze direction from elliptical features of an iris image. Typically, additional information such as well-controlled light sources, head-mounted equipment, and multiple cameras is not available. Our approach utilizes only the iris and pupil boundary segmentation, allowing it to be applicable to all iris capture hardware. We compare the boundaries with a look-up table generated using our biologically inspired biometric eye model and find the closest feature point in the look-up table to estimate the gaze. Based on the results from real images, the proposed method shows effectiveness in gaze estimation accuracy for our biometric eye model with an average error of approximately 3.5 degrees over a 50 degree range.
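The look-up-table matching step reduces to a nearest-neighbor search: boundary-derived elliptical features are compared against features precomputed from the eye model at known gaze angles. The feature choice (axis ratios) and all table values below are fabricated for illustration, not taken from the ORNL model.

```python
# Nearest-neighbor gaze lookup: match observed elliptical boundary features
# against a table of (gaze angle, model features) entries.
def estimate_gaze(features, lut):
    """Return the gaze angle of the closest LUT entry (Euclidean distance)."""
    best_angle, best_d = None, float("inf")
    for angle, ref in lut:
        d = sum((f - r) ** 2 for f, r in zip(features, ref)) ** 0.5
        if d < best_d:
            best_angle, best_d = angle, d
    return best_angle

# (gaze angle in degrees, [iris minor/major axis ratio, pupil axis ratio]);
# a real table would be densely sampled from the rendered eye model.
lut = [(0, [1.00, 1.00]), (10, [0.98, 0.97]), (20, [0.94, 0.92]),
       (30, [0.87, 0.84]), (40, [0.77, 0.72]), (50, [0.64, 0.58])]

observed = [0.86, 0.85]  # features from a segmented off-angle image
angle = estimate_gaze(observed, lut)
print(angle)  # 30
```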
In most gait recognition techniques, both static and dynamic features are used to define a subject's gait signature. In this study, the existence of a relationship between static and dynamic features was investigated. The correlation coefficient was used to analyse the relationship between the features extracted from the "University of Bradford Multi-Modal Gait Database". This study includes two-dimensional dynamic and static features from 19 subjects. The dynamic features comprised Phase-Weighted Magnitudes derived from a Fourier Transform of the temporal rotational data of a subject's joints (knee, thigh, shoulder, and elbow). The results concluded that eleven pairs of features are significantly correlated (p < 0.05). This result indicates the existence of a statistical relationship between static and dynamic features, which challenges the results of several similar studies. These results bear great potential for further research in the area, and could contribute to the creation of a gait signature using latent data.
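The correlation test used above is standard: compute the Pearson coefficient between a static and a dynamic feature across subjects, then assess significance with the usual t statistic on n-2 degrees of freedom. The feature values below are made up for illustration, not drawn from the Bradford database.

```python
# Pearson correlation between one static and one dynamic gait feature,
# with the t statistic used to test significance (p < 0.05 when |t| exceeds
# the critical value of the t distribution with n-2 degrees of freedom).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    return r * math.sqrt((n - 2) / (1 - r * r))

static  = [1.62, 1.70, 1.58, 1.75, 1.68, 1.80, 1.65, 1.72]  # e.g. a limb length
dynamic = [0.91, 0.98, 0.88, 1.02, 0.96, 1.05, 0.93, 1.00]  # e.g. a phase-weighted magnitude

r = pearson_r(static, dynamic)
t = t_statistic(r, len(static))
# With 6 degrees of freedom, |t| > 2.447 implies significance at p < 0.05.
```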
In the biometrics community, challenge datasets are often released to determine the robustness of state-of-the-art algorithms to conditions that can confound recognition accuracy. In the context of automated human gait recognition, evaluation has predominantly been conducted on video data acquired in the active visible spectral band, although recent literature has explored recognition in the passive thermal band. The advent of sophisticated sensors has piqued interest in performing gait recognition in other spectral bands such as short-wave infrared (SWIR), due to their use in military-based tactical applications and the possibility of operating in nighttime environments. Further, in many operational scenarios, the environmental variables are not controlled, thereby posing several challenges to traditional recognition schemes. In this work, we discuss the possibility of performing gait recognition in the SWIR spectrum by first assembling a dataset, referred to as the WVU Outdoor SWIR Gait (WOSG) Dataset, and then evaluate the performance of three gait recognition algorithms on the dataset. The dataset consists of 155 subjects and represents gait information acquired under multiple walking paths in an uncontrolled, outdoor environment. Detailed experimental analysis suggests the benefits of distributing this new challenging dataset to the broader research community. In particular, the following observations were made: (a) the importance of SWIR imagery in acquiring data covertly for surveillance applications; (b) the difficulty in extracting human silhouettes in low-contrast SWIR imagery; (c) the impact of silhouette quality on overall recognition accuracy; (d) the possibility of matching gait sequences pertaining to different walking trajectories; and (e) the need for developing sophisticated gait recognition algorithms to handle data acquired in unconstrained environments.
Palm vein recognition is a relatively new method in biometrics. This paper presents an effective palm vein feature extraction approach for improving the efficiency of palm vein identification. Relevant preprocessing steps, such as rotation correction and extraction of the Region of Interest, are presented. In feature extraction, multiple 2D Gabor filters with four orientations are employed to extract the phase information of a palm vein image, which is then merged into a unique feature according to an encoding rule. The Hamming distance is used for vein recognition. Experiments are carried out on a self-made palm vein database. Experimental results show that the proposed method achieves a high correct recognition rate at fast speed.
Iris biometrics systems rely on analysis of a visual presentation of the human iris, which must be extracted from the periocular region. Topical cosmetics can greatly alter the appearance of the periocular region, and can occlude portions of the iris texture. In this paper, the presence of topical cosmetics is shown to negatively impact the authentic distribution of iris match scores, causing an increase in the false non-match rate at a fixed false match rate.
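The reported effect (a higher false non-match rate at a fixed false match rate) is measured by choosing the decision threshold from the impostor score distribution and then evaluating the genuine distribution at that threshold. The scores below are fabricated for illustration.

```python
# Compute FNMR at a fixed FMR: pick the lowest threshold whose impostor
# acceptance rate does not exceed the target, then count genuine misses.
def fnmr_at_fmr(genuine, impostor, target_fmr):
    for t in sorted(set(genuine + impostor)):  # candidate thresholds, ascending
        fmr = sum(s >= t for s in impostor) / len(impostor)
        if fmr <= target_fmr:
            return t, sum(s < t for s in genuine) / len(genuine)
    return None, 1.0

# Higher score = better match; made-up score samples.
genuine  = [0.90, 0.85, 0.80, 0.75, 0.60, 0.45]  # e.g. no-makeup vs makeup pairs
impostor = [0.40, 0.35, 0.30, 0.50, 0.45, 0.20]
t, fnmr = fnmr_at_fmr(genuine, impostor, target_fmr=0.0)
# A degraded authentic distribution (lower genuine scores) raises this FNMR.
```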
In iris recognition systems, it is essential to accurately locate the pupil and the iris. Among segmentation algorithms for systems utilizing near-infrared light, some make the assumption that the pupil is darker than the rest of the image. For this class of algorithms, the red eye effect, which makes the pupil region brighter than the iris, could damage their performance. Other segmentation algorithms use edge information to fit circles, yet noisy images make them inaccurate. Therefore, it is desirable to use different segmentation algorithms for images with and without the red eye effect. In this paper, we introduce a novel method which distinguishes iris images exhibiting the red eye effect from those with a dark pupil. Our detector starts with a 2D darkness map of the iris image, and generates a customized shape context descriptor from the estimated pupil region. The descriptor is then compared with the reference descriptor, generated from a number of training images with dark pupils. The distance to the reference descriptor is used to define how close the estimated pupil region is to a dark pupil. Tests with images captured with our own acquisition system show that the proposed pupil detector is highly effective.
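A shape context descriptor is a log-polar histogram of point positions relative to a reference point; the customized variant in the paper is not specified, so the sketch below shows the generic construction over candidate pupil-region points, with illustrative bin counts and radii.

```python
# Generic shape context: log-spaced radial bins x uniform angular bins of
# point positions relative to the region centroid, normalized to sum to 1.
import math

def shape_context(points, n_r=3, n_theta=8, r_max=50.0):
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    hist = [0] * (n_r * n_theta)
    for x, y in points:
        dx, dy = x - cx, y - cy
        r = math.hypot(dx, dy)
        if r == 0 or r > r_max:
            continue
        r_bin = min(int(n_r * math.log(1 + r) / math.log(1 + r_max)), n_r - 1)
        t_bin = int(((math.atan2(dy, dx) + math.pi) / (2 * math.pi)) * n_theta) % n_theta
        hist[r_bin * n_theta + t_bin] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist

def chi2_distance(h1, h2):
    """Common histogram distance for comparing against a reference descriptor."""
    return 0.5 * sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

# Points on a circle stand in for an estimated dark-pupil region boundary.
circle = [(25 + 20 * math.cos(a / 10), 25 + 20 * math.sin(a / 10)) for a in range(63)]
h = shape_context(circle)
assert chi2_distance(h, h) == 0.0
```

In the described system, a large chi-square-style distance from the dark-pupil reference descriptor would flag a probable red-eye image.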
Ear recognition has recently received significant attention in the literature. Even though current ear recognition systems have reached a certain level of maturity, their success is still limited. This paper presents an efficient, complete ear-based biometric system that can process five frames/sec and can therefore be used for surveillance applications. Ear detection is achieved using Haar features arranged in a cascaded AdaBoost classifier. Feature extraction is based on dividing the ear image into several blocks from which Local Binary Pattern (LBP) feature distributions are extracted. These feature distributions are then fused at the feature level to represent the original ear texture in the classification stage. The contribution of this paper is threefold: (i) applying a new technique for ear feature extraction and studying various optimization parameters for that technique; (ii) presenting a practical ear recognition system and a detailed analysis of error propagation in that system; (iii) studying the occlusion effect of several ear parts. Detailed experiments show that the proposed ear recognition system achieves better performance (94.34%) than other shape-based systems such as the Scale-Invariant Feature Transform (67.92%). The proposed approach can also efficiently handle hair occlusion. Experimental results show that the proposed system can achieve about 78% rank-1 identification, even in the presence of 60% occlusion.
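The block-wise LBP feature described above is easy to sketch: each interior pixel gets an 8-bit code from the signs of its 3x3 neighbors relative to the center, and the per-block code histograms are concatenated into the final descriptor. The block size and toy image below are illustrative.

```python
# Block-wise LBP sketch: 8-bit codes from 3x3 neighborhoods, histogrammed.
def lbp_code(img, r, c):
    """8-bit LBP code: one bit per neighbor, set when neighbor >= center."""
    center = img[r][c]
    neighbors = [img[r-1][c-1], img[r-1][c], img[r-1][c+1], img[r][c+1],
                 img[r+1][c+1], img[r+1][c], img[r+1][c-1], img[r][c-1]]
    code = 0
    for i, v in enumerate(neighbors):
        if v >= center:
            code |= 1 << i
    return code

def lbp_histogram(img):
    """256-bin histogram of LBP codes over the interior pixels of one block."""
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            hist[lbp_code(img, r, c)] += 1
    return hist

# Toy 6x6 "ear block"; in the full system, each block's histogram is
# concatenated with the others to form the texture descriptor.
img = [[(r * 6 + c) % 9 for c in range(6)] for r in range(6)]
hist = lbp_histogram(img)
assert sum(hist) == 16  # (6-2) * (6-2) interior pixels
```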
We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates review of features by a factor of up to 100 times. Qualitative results and cost reduction are demonstrated using efficient parallel visual review for quality control. Our process automatically sorts and filters features for examination, and packs these into a condensed view. An analyst can then rapidly page through screens of features and flag and annotate outliers as necessary.
As the use of biometrics becomes more wide-spread, the privacy concerns that stem from the use of biometrics are becoming more apparent. As the usage of mobile devices grows, so does the desire to implement biometric identification into such devices. A large majority of mobile devices being used are mobile phones. While work is being done to implement different types of biometrics into mobile phones, such as photo based biometrics, voice is a more natural choice. The idea of voice as a biometric identifier has been around a long time. One of the major concerns with using voice as an identifier is the instability of voice. We have developed a protocol that addresses those instabilities and preserves privacy. This paper describes a novel protocol that allows a user to authenticate using voice on a mobile/remote device without compromising their privacy. We first discuss the Vaulted Verification protocol, which has recently been introduced in research literature, and then describe its limitations. We then introduce a novel adaptation and extension of the Vaulted Verification protocol to voice, dubbed Vaulted Voice Verification (V3). Following that we show a performance evaluation and then conclude with a discussion of security and future work.
Describable visual attributes are a powerful way to label aspects of an image, and taken together, build a detailed representation of a scene's appearance. Attributes enable highly accurate approaches to a variety of tasks, including object recognition, face recognition and image retrieval. An important consideration not previously addressed in the literature is the reliability of attribute classifiers as the quality of an image degrades. In this paper, we introduce a general framework for conducting reliability studies that assesses attribute classifier accuracy as a function of image degradation. This framework allows us to bound, in a probabilistic manner, the input imagery that is deemed acceptable for consideration by the attribute system without requiring ground truth attribute labels. We introduce a novel differential probabilistic model for accuracy assessment that leverages a strong normalization procedure based on the statistical extreme value theory. To demonstrate the utility of our framework, we present an extensive case study using 64 unique facial attributes, computed on data derived from the Labeled Faces in the Wild (LFW) data set. We also show that such reliability studies can result in significant compression benefits for mobile applications.
Researchers in face recognition have been using Gabor filters for image representation due to their robustness to complex variations in expression and illumination. Numerous methods have been proposed to model the output of filter responses by employing either local or global descriptors. In this work, we propose a novel but simple approach for encoding gradient information on Gabor-transformed images to represent the face, which can be used for identity, gender, and ethnicity assessment. Extensive experiments on the standard face benchmark FERET (visible versus visible), as well as the heterogeneous face dataset HFB (near-infrared versus visible), suggest that the matching performance of the proposed descriptor is comparable to that of state-of-the-art descriptor-based approaches in face recognition applications. Furthermore, the same feature set is used in the framework of a Collaborative Representation Classification (CRC) scheme for deducing soft biometric traits such as gender and ethnicity from face images in the AR, Morph, and CAS-PEAL databases.
Face is one of the most popular biometric modalities. However, up to now, color is rarely actively used in face recognition. Yet, it is well-known that when a person recognizes a face, color cues can become as important as shape, especially when combined with the ability of people to identify the color of objects independent of illuminant color variations. In this paper, we examine the feasibility and effect of explicitly embedding illuminant color information in face recognition systems. We empirically examine the theoretical maximum gain of including known illuminant color to a 3D-2D face recognition system. We also investigate the impact of using computational color constancy methods for estimating the illuminant color, which is then incorporated into the face recognition framework. Our experiments show that under close-to-ideal illumination estimates, one can improve face recognition rates by 16%. When the illuminant color is algorithmically estimated, the improvement is approximately 5%. These results suggest that color constancy has a positive impact on face recognition, but the accuracy of the illuminant color estimate has a considerable effect on its benefits.
One of the most popular forms of biometrics is face recognition. Face recognition techniques typically assume that a face exhibits Lambertian reflectance. However, a face often exhibits prominent specularities, especially in outdoor environments. These specular highlights can compromise identity authentication. In this work, we analyze the impact of such highlights on a 3D-2D face recognition system. First, we investigate three different specularity removal methods as preprocessing steps for face recognition. Then, we explicitly model facial specularities within the face recognition system with the Cook-Torrance reflectance model. In our experiments, specularity removal increases the recognition rate on an outdoor face database by about 5% at a false alarm rate of 10^-3. The integration of the Cook-Torrance model further improves these results, increasing the verification rate by 19% at a FAR of 10^-3.
In this paper, we propose a novel method to enhance low quality images. Specifically, we focus on facial images. Low quality images are often degraded by motion artifacts, sensor limitations, and noise contamination leading to loss of higher order information that is essential for face recognition. First, we demonstrate that conventional denoising and deblurring methods are not able to fully recover the latent image resulting in residual artifacts in the image. Then, we present a novel approach for image enhancement that removes these residual artifacts using sparse encoding methods. The potential of the method is demonstrated through promising results on facial images for face recognition application.