With the improving quality of spoofed faces, face recognition systems face significant security challenges. Multi-modal methods can better protect such systems from spoofing, but they generally involve a large number of parameters and heavy computation. To overcome these problems, a lightweight face anti-spoofing method based on cross-fused multi-modal features is proposed. First, the lightweight network ShuffleNetV2 is improved, raising classification accuracy while reducing the number of model parameters. In addition, a cross-attention-based feature fusion module is designed, which enriches the feature representation by cross-computing with the depth modality to obtain the attention maps of the RGB and infrared modalities. The proposed method was tested on several datasets and achieved good accuracy with only 0.0892 M parameters, indicating that it is well suited for deployment on resource-constrained mobile or embedded devices. The code is available at https://github.com/HeDan-11/LFAS-CFMMF.
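The cross-attention fusion described above can be illustrated with a minimal sketch, assuming (as is common in cross-attention designs) that depth features supply the queries while the RGB and infrared features supply keys and values; the function name, shapes, and summation-based fusion here are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_fuse(rgb, ir, depth):
    """Hypothetical cross-modal attention sketch: depth tokens act as
    queries against RGB and IR tokens, producing an attention map per
    modality; each modality is re-weighted by its map and the results
    are summed. Inputs are (tokens, channels) matrices."""
    d = rgb.shape[-1]
    attn_rgb = softmax(depth @ rgb.T / np.sqrt(d))  # (tokens, tokens) attention map for RGB
    attn_ir = softmax(depth @ ir.T / np.sqrt(d))    # attention map for infrared
    return attn_rgb @ rgb + attn_ir @ ir            # fused (tokens, channels) features

rng = np.random.default_rng(0)
rgb, ir, depth = (rng.standard_normal((16, 32)) for _ in range(3))
fused = cross_attention_fuse(rgb, ir, depth)
```

Conditioning both attention maps on the same depth features is what lets the fusion "cross-compute" between modalities rather than attending to each stream independently.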
Keywords: RGB color model; feature fusion; education and training; data modeling; facial recognition systems; infrared imaging; image fusion