Lightweight face anti-spoofing method based on cross-fused multi-modal features
Xiping He, Yi Li, Dan He, Rui Yuan, Ling Huang
Abstract

With improvements in the quality of spoofed faces, the security of face recognition systems faces significant challenges. Multi-modal methods can further protect such systems from spoofing, but they generally involve a large number of parameters and heavy computation. To overcome these problems, a lightweight face anti-spoofing method based on cross-fused multi-modal features is proposed. First, the lightweight network ShuffleNetV2 is improved, raising classification accuracy while reducing the number of model parameters. In addition, a cross-attention-based feature fusion module is designed, which enriches the feature representation by cross-computing with the depth modality to obtain attention maps for the RGB and infrared modalities. The proposed method was tested on several datasets and achieved good accuracy with only 0.0892 M parameters, indicating that it is well suited for deployment on resource-constrained mobile or embedded devices. The code is available at https://github.com/HeDan-11/LFAS-CFMMF.
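The cross-attention fusion described in the abstract can be sketched roughly as follows: depth-modality features act as queries that attend over RGB and infrared features, and the attended features are combined into a fused representation. This is a minimal NumPy illustration of the general cross-attention idea, not the authors' implementation; the token shapes, scaling, concatenation step, and function names are all assumptions for the sake of the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feat, kv_feat):
    """Scaled dot-product cross-attention between two modalities.

    query_feat, kv_feat: (N, d) token matrices, e.g. H*W flattened
    spatial positions with d channels each (shapes are illustrative).
    """
    d = query_feat.shape[-1]
    scores = query_feat @ kv_feat.T / np.sqrt(d)  # (N, N) cross-modal affinity
    attn = softmax(scores, axis=-1)               # attention map over kv tokens
    return attn @ kv_feat                         # re-weighted kv features

def fuse_modalities(rgb, ir, depth):
    """Depth features query RGB and IR; attended outputs are concatenated.
    (The concatenation is one plausible fusion choice, assumed here.)"""
    rgb_attended = cross_attention(depth, rgb)
    ir_attended = cross_attention(depth, ir)
    return np.concatenate([rgb_attended, ir_attended], axis=-1)

# toy example: 16 spatial tokens, 8 channels per modality
rng = np.random.default_rng(0)
rgb, ir, depth = (rng.standard_normal((16, 8)) for _ in range(3))
fused = fuse_modalities(rgb, ir, depth)
print(fused.shape)  # (16, 16)
```

In this sketch each row of the attention map sums to 1, so the attended RGB/IR features are convex combinations of the original tokens, weighted by their similarity to the depth features.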

© 2024 SPIE and IS&T
Xiping He, Yi Li, Dan He, Rui Yuan, and Ling Huang "Lightweight face anti-spoofing method based on cross-fused multi-modal features," Journal of Electronic Imaging 33(2), 023033 (22 March 2024). https://doi.org/10.1117/1.JEI.33.2.023033
Received: 19 October 2023; Accepted: 4 March 2024; Published: 22 March 2024
KEYWORDS
RGB color model; Feature fusion; Education and training; Data modeling; Facial recognition systems; Infrared imaging; Image fusion