The periocular region is a relatively new biometric modality and serves as a substitute for face recognition under occlusion. Moreover, many application scenarios, such as surveillance, occur at nighttime. To address this problem, we study periocular recognition at nighttime using the infrared spectrum. Using a simplified version of DeepFace, a convolutional neural network designed for face recognition, we investigate nighttime periocular recognition at both short and long standoffs, namely 1.5 m, 50 m, and 106 m. A subband of the active infrared spectrum, near-infrared (NIR), is involved. To generate the periocular dataset, the original face images are preprocessed by alignment, cropping, and intensity conversion. The verification results of the periocular region using DeepFace are compared with those of two conventional methods, LBP and PCA. Experiments show that the DeepFace algorithm performs well (GAR over 90% at FAR = 0.1%) using the periocular region as a modality even at nighttime, and that the framework outperforms both LBP and PCA at all light wavelengths and standoffs.
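The verification metric quoted above, GAR at a fixed FAR, can be computed from genuine and impostor similarity scores. The sketch below is illustrative only; the function name and the threshold-from-impostor-quantile approach are assumptions, not details from the paper.

```python
import numpy as np

def gar_at_far(genuine, impostor, far=0.001):
    """Genuine accept rate at the score threshold that yields
    approximately the requested false accept rate.

    genuine  -- similarity scores for same-subject comparisons
    impostor -- similarity scores for different-subject comparisons
    """
    # Pick the threshold so that roughly a fraction `far` of impostor
    # scores fall above it.
    t = np.quantile(np.asarray(impostor), 1.0 - far)
    # GAR is the fraction of genuine scores accepted at that threshold.
    return float(np.mean(np.asarray(genuine) > t))
```

For example, with 95% of genuine scores clustered well above all impostor scores, `gar_at_far` at FAR = 1% reports a GAR of 0.95.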
Cross-spectral matching of active infrared (IR) facial probes against a visible-light facial gallery is a new and challenging problem, raised by real-world surveillance tasks such as recognizing subjects at night or under severe atmospheric conditions. At long distance the problem becomes even harder: the quality of the IR data deteriorates, creating a quality disparity between the visible-light and IR imagery. To address this quality disparity in the heterogeneous images, caused by atmospheric and camera effects typical of long-range IR data, we propose an image fusion-based method that fuses multiple IR facial images into a single higher-quality IR facial image. A wavelet decomposition using the Haar basis is performed first; the coefficients are then merged according to a rule that treats the high and low frequencies differently, and an inverse wavelet transform reconstructs the final higher-quality IR facial image. Two sub-bands of the IR spectrum, short-wave infrared (SWIR) and near-infrared (NIR), and two long standoffs, 50 m and 106 m, are considered. Experiments show that across all sub-bands and standoffs our image fusion-based method outperforms matching without fusion, with GARs significantly increased by 3.51% and 1.09% for SWIR 50 m and NIR 50 m at FAR = 10%, respectively. The equal error rates are reduced by 2.61% and 0.90% for SWIR 50 m and NIR 50 m, respectively.
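The fusion pipeline described above (Haar decomposition, frequency-dependent merging, inverse transform) can be sketched as follows. This is a minimal single-level sketch under assumptions the abstract does not specify: the low-frequency approximation coefficients are averaged and the high-frequency detail coefficients are merged by maximum absolute value, a common choice for wavelet image fusion; the actual merging rule of the paper may differ.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 2.0   # low-frequency approximation
    lh = (a + b - c - d) / 2.0   # horizontal detail
    hl = (a - b + c - d) / 2.0   # vertical detail
    hh = (a - b - c + d) / 2.0   # diagonal detail
    return ll, (lh, hl, hh)

def ihaar2d(ll, details):
    """Inverse of haar2d: reconstructs the image from the coefficients."""
    lh, hl, hh = details
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2.0
    out[0::2, 1::2] = (ll + lh - hl - hh) / 2.0
    out[1::2, 0::2] = (ll - lh + hl - hh) / 2.0
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2.0
    return out

def fuse(images):
    """Fuse multiple same-size IR frames into one higher-quality frame."""
    decomps = [haar2d(np.asarray(im, dtype=float)) for im in images]
    # Low frequencies: average the approximation coefficients.
    ll = np.mean([d[0] for d in decomps], axis=0)
    # High frequencies: keep the coefficient with the largest magnitude,
    # which tends to preserve the sharpest edges among the frames.
    fused_details = []
    for k in range(3):
        stack = np.stack([d[1][k] for d in decomps])
        idx = np.abs(stack).argmax(axis=0)
        fused_details.append(np.take_along_axis(stack, idx[None], axis=0)[0])
    return ihaar2d(ll, tuple(fused_details))
```

A sanity check on the sketch: fusing an image with a copy of itself reproduces the image, since averaging and max-abs selection both reduce to the identity when all inputs agree.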