In this paper, a temporal attention quality-aware network (TA-QAN) is proposed. The network uses temporal convolution to extract temporal information between frames, so that the complementary information across the frame sequence is effectively aggregated and the influence of low-quality image regions is significantly reduced. Comparison experiments with other feature extraction methods show that the proposed method achieves the best performance on the PRID 2011 and iLIDS-VID datasets.
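As a minimal sketch of how such quality-aware temporal aggregation can be implemented, the PyTorch module below applies a 1-D temporal convolution across per-frame features and then learns a per-frame attention weight that down-weights low-quality frames before averaging. The module name, feature dimension, and kernel size are illustrative assumptions, not the paper's released implementation.

```python
# Illustrative sketch of temporal-convolution + attention aggregation;
# all names and sizes are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TemporalAttentionAggregator(nn.Module):
    """Aggregates a sequence of frame features into one clip feature."""

    def __init__(self, feat_dim=2048, kernel_size=3):
        super().__init__()
        # Temporal convolution over the frame axis captures
        # inter-frame (temporal) information.
        self.temporal_conv = nn.Conv1d(
            feat_dim, feat_dim, kernel_size, padding=kernel_size // 2
        )
        # One attention score per frame, acting as a quality proxy.
        self.attn = nn.Linear(feat_dim, 1)

    def forward(self, x):
        # x: (batch, time, feat_dim) per-frame features.
        h = x.transpose(1, 2)                 # (batch, feat_dim, time)
        h = F.relu(self.temporal_conv(h)).transpose(1, 2)
        w = torch.softmax(self.attn(h), dim=1)  # (batch, time, 1)
        # Attention-weighted average over time suppresses
        # low-quality frames in the final clip representation.
        return (w * h).sum(dim=1)             # (batch, feat_dim)

# Usage: 8 clips of 16 frames, each frame described by a 2048-d feature.
feats = torch.randn(8, 16, 2048)
clip_repr = TemporalAttentionAggregator()(feats)  # shape (8, 2048)
```

The softmax over the time axis makes the frame weights sum to one, so a frame scored as low quality contributes proportionally less to the aggregated clip feature.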
Keywords: Video, Performance modeling, Data modeling, Feature extraction, Image quality, Convolution, Facial recognition systems