In this paper we demonstrate the subspace generalization power of the kernel correlation feature analysis (KCFA)
method for extracting a low dimensional subspace that has the ability to represent new unseen datasets. Examining the
portability of this algorithm across different datasets is an important practical aspect of real-world face recognition
applications, where the technology cannot be dataset-dependent. In most of the face recognition literature, algorithms are demonstrated on datasets by training on one portion of the dataset and testing on the remainder. Generally, the set of testing subjects partially or totally overlaps the set of training subjects, albeit with disjoint images captured in different sessions. Thus, some of the expected facial variations, and the subjects' faces themselves, are already modeled in the training set. In
this paper we describe how we efficiently build a compact feature subspace using kernel correlation feature analysis on the generic training set of the FRGC dataset and use that basis for recognition on a different dataset. The KCFA feature
subspace has a total dimension that corresponds to the number of training subjects; we vary this number up to all 222 subjects available in the FRGC generic dataset. We test the subspace produced by KCFA by
projecting other well-known face datasets onto it. We show that this feature subspace has good representation and discrimination power for unseen datasets and produces good verification and identification rates compared to other subspace and dimensionality reduction methods such as PCA (when trained on the same FRGC generic dataset). Its efficiency, lower dimensionality, and discriminative power make it more practical and powerful than PCA as a robust dimensionality reduction method for modeling faces and facial variations.
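To make the projection step concrete, the following is a minimal sketch in Python/NumPy of extracting a CFA-style feature vector whose dimension equals the number of training subjects. The filter bank, image size, and cosine match score here are illustrative assumptions, not the exact correlation filters synthesized in the paper.

    import numpy as np

    # Minimal sketch (illustrative, not the paper's exact filters): each of the
    # N training subjects contributes one correlation filter, so a probe image
    # projects to an N-dimensional feature vector regardless of image size.

    def cfa_features(image_vec, subject_filters):
        # One correlation output (inner product at zero shift) per subject filter.
        # image_vec:       vectorized, normalized face image, shape (d,)
        # subject_filters: per-subject filters, shape (num_subjects, d)
        return subject_filters @ image_vec

    def match_score(a, b):
        # Cosine similarity between two feature vectors, used for verification.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # With 222 training subjects the feature space is 222-dimensional.
    rng = np.random.default_rng(0)
    filters = rng.standard_normal((222, 64 * 64))  # placeholder filter bank
    probe, gallery = rng.standard_normal((2, 64 * 64))
    score = match_score(cfa_features(probe, filters),
                        cfa_features(gallery, filters))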
The Face Recognition Grand Challenge (FRGC) dataset is one of the most challenging datasets in the face recognition community; within it, we focus on the hardest experiment, conducted under harsh uncontrolled conditions. In this paper we examine how popular face recognition algorithms such as Direct Linear Discriminant Analysis (D-LDA) and Gram-Schmidt LDA compare to traditional eigenfaces and Fisherfaces. We also show that all of these linear subspace methods cannot discriminate faces well due to the large nonlinear distortions in the face images. We therefore present our proposed Class-dependence Feature Analysis (CFA) method, which we demonstrate to produce superior performance over the other methods by representing nonlinear features well. We do this by extending the traditional CFA framework with kernel methods, and we propose a proper choice of kernel parameters that improves the overall face recognition performance significantly over the competing algorithms. We present results of this proposed approach on a large-scale database from the Face Recognition Grand Challenge (FRGC) v2, which contains over 36,000 images, focusing on Experiment 4, the harshest scenario, whose images were captured under uncontrolled indoor and outdoor conditions yielding significant illumination variations.
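As a rough illustration of the kernel extension, the sketch below replaces the explicit per-subject filters with coefficients over the training images, so each feature becomes a kernel correlation output. The Gaussian kernel, its width sigma, and the coefficient matrix alphas are assumptions for illustration; the filter synthesis that produces the coefficients is not derived here.

    import numpy as np

    # Illustrative sketch of kernelized feature extraction: in kernel CFA each
    # subject's filter lives in the kernel-induced feature space, so it is
    # represented by coefficients over the training images rather than an
    # explicit filter. Kernel choice and parameters here are assumptions.

    def gaussian_kernel(x, Y, sigma=1.0):
        # k(x, y_i) for every training image y_i (rows of Y).
        d2 = np.sum((Y - x) ** 2, axis=1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def kcfa_features(image_vec, train_images, alphas, sigma=1.0):
        # alphas: (num_subjects, num_train) coefficients learned per subject
        # during filter synthesis (training itself is not shown here).
        k = gaussian_kernel(image_vec, train_images, sigma)  # (num_train,)
        return alphas @ k  # nonlinear feature vector, one entry per subject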