KEYWORDS: Video, Feature extraction, Feature fusion, Detection and tracking algorithms, Education and training, Cross validation, Data processing, Data modeling
Feature extraction for video facial micro-expression recognition is difficult because the expressions are short in duration and small in motion amplitude. To better combine the temporal and spatial information in video, the model is divided into a local attention module, a global attention module, and a temporal module. First, the local attention module crops the key facial regions and, after preprocessing, feeds them into a network with channel attention. Then, the global attention module randomly erases regions outside the key areas and feeds the result into a network with spatial attention. Finally, the temporal module processes the frames in which the micro-expression occurs and feeds them into a network with a temporal shift module and spatial attention. The fused features are then passed through three fully connected layers to obtain the classification result. Experiments on the CASME II dataset with five-fold cross-validation yield an average accuracy of 76.15% and an unweighted F1 score of 0.691, an improvement over mainstream algorithms.
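A rough sketch of how such a three-branch design could be wired together is shown below (PyTorch-style). The backbone layers, attention variants, feature sizes, number of classes, and the omission of the temporal shift operation are all illustrative assumptions rather than the authors' exact architecture.

```python
# Minimal sketch of a three-branch micro-expression model: a local branch with
# channel attention, a global branch with spatial attention, and a temporal
# branch (temporal shift omitted here for brevity), fused by three fully
# connected layers. All sizes and layer choices are assumptions.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (assumed variant)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w


class SpatialAttention(nn.Module):
    """Spatial attention over pooled channel statistics (assumed variant)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx, _ = x.max(dim=1, keepdim=True)
        w = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * w


class ThreeBranchMER(nn.Module):
    """Local / global / temporal branches fused by fully connected layers."""
    def __init__(self, num_classes=5, channels=32):
        super().__init__()
        def stem():  # tiny stand-in backbone shared in structure by all branches
            return nn.Sequential(
                nn.Conv2d(3, channels, 3, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(4))
        self.local_branch = nn.Sequential(stem(), ChannelAttention(channels))
        self.global_branch = nn.Sequential(stem(), SpatialAttention())
        self.temporal_branch = nn.Sequential(stem(), SpatialAttention())
        fused = 3 * channels * 4 * 4
        self.classifier = nn.Sequential(
            nn.Linear(fused, 256), nn.ReLU(inplace=True),
            nn.Linear(256, 64), nn.ReLU(inplace=True),
            nn.Linear(64, num_classes))

    def forward(self, local_crop, erased_frame, apex_frame):
        feats = [self.local_branch(local_crop),
                 self.global_branch(erased_frame),
                 self.temporal_branch(apex_frame)]
        fused = torch.cat([f.flatten(1) for f in feats], dim=1)
        return self.classifier(fused)


if __name__ == "__main__":
    model = ThreeBranchMER()
    x = torch.randn(2, 3, 112, 112)
    print(model(x, x, x).shape)  # torch.Size([2, 5])
```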
The Single Shot MultiBox Detector (SSD) uses multi-scale feature maps to detect and recognize objects, which greatly improves the performance of single-stage approaches, but it remains weak at detecting small objects. Researchers have therefore focused on enhancing the features in the feature pyramid; however, many networks simply merge several feature maps and fail to fully aggregate the semantics across different scales. On this basis, this paper proposes an efficient semantic aggregation module and a lightweight feature combination module that significantly improve detection accuracy over the SSD baseline. In the semantic aggregation module, feature maps of different sizes are adjusted and integrated through different channels to obtain enhanced features with rich semantic information, which improves the discriminative and expressive ability of the features. In the feature combination module, the detector fully exploits the multi-scale convolutional layers in the feature pyramid to produce more descriptive and representative enhanced features with rich semantics. With a 512×512 input, the proposed network achieves 82.6% and 81.3% mAP (mean Average Precision) on the VOC2007 and VOC2012 test sets, respectively. Experiments and ablation studies show that this method outperforms many advanced detectors in both accuracy and speed.
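As a rough illustration of the kind of semantic aggregation described above, the following PyTorch-style sketch projects each pyramid level to a common width, pools semantics at a reference resolution, and fuses the aggregated map back into every level. The channel widths, fusion rule, and layer choices are assumptions made for the example, not the paper's exact module.

```python
# Sketch of a semantic-aggregation step over an SSD-style feature pyramid:
# project every level to a common channel width, resize to a reference
# resolution, sum into one aggregated map, then fuse it back into each level.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticAggregation(nn.Module):
    def __init__(self, in_channels, mid_channels=256):
        super().__init__()
        # 1x1 projections that align every pyramid level to mid_channels.
        self.project = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels])
        # 3x3 convs that fuse the aggregated semantics back into each level.
        self.refine = nn.ModuleList(
            [nn.Conv2d(c + mid_channels, c, kernel_size=3, padding=1)
             for c in in_channels])

    def forward(self, features):
        ref_size = features[0].shape[-2:]  # finest (largest) level as reference
        # Aggregate semantics from all levels at the reference resolution.
        agg = sum(F.interpolate(p(f), size=ref_size, mode="bilinear",
                                align_corners=False)
                  for p, f in zip(self.project, features))
        out = []
        for refine, f in zip(self.refine, features):
            a = F.interpolate(agg, size=f.shape[-2:], mode="bilinear",
                              align_corners=False)
            out.append(refine(torch.cat([f, a], dim=1)))
        return out


if __name__ == "__main__":
    # SSD512-like channel/spatial layout is only an assumption for this demo.
    chans = [512, 1024, 512, 256]
    feats = [torch.randn(1, c, s, s) for c, s in zip(chans, [64, 32, 16, 8])]
    module = SemanticAggregation(chans)
    for f in module(feats):
        print(f.shape)  # same shapes as the inputs, semantically enriched
```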
Single-image super-resolution (SISR) has achieved substantial improvement with the development of convolutional neural networks. However, most methods suffer from high computational cost. To tackle this issue, we propose an involution-based lightweight method with contrastive learning for efficient SISR. Unlike the original involution, we set the number of involution groups equal to the number of input feature channels, which guarantees spatial- and channel-specific behavior. Moreover, our involution learns not only the weight but also the bias of the convolution. We also rethink the kernel generation function of involution and instead use Sigmoid with a reparameterized convolution, and we add a residual path to the involution operation. Furthermore, contrastive learning is adopted during training to learn universal features. Compared with state-of-the-art efficient SISR methods, our proposed method achieves the best performance with similar or fewer parameters.
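The sketch below illustrates one plausible reading of this involution variant: per-pixel kernels generated from the input with a Sigmoid, one kernel group per channel, a predicted additive bias, and a residual path. The kernel size, reduction ratio, and generation layers are assumptions, and the reparameterized-convolution training branches are omitted for brevity.

```python
# Sketch of a channel-wise involution: the kernel (and a bias) is generated
# per spatial position and per channel from the input itself, applied to a
# k x k neighborhood, and combined with a residual connection.
import torch
import torch.nn as nn


class ChannelWiseInvolution(nn.Module):
    def __init__(self, channels, kernel_size=3, reduction=4):
        super().__init__()
        self.channels = channels
        self.kernel_size = kernel_size
        hidden = max(channels // reduction, 4)
        # Generate k*k kernel values plus one bias value per channel and pixel.
        self.generate = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels * (kernel_size * kernel_size + 1),
                      kernel_size=1),
            nn.Sigmoid())
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        k = self.kernel_size
        gen = self.generate(x).view(b, c, k * k + 1, h, w)
        weight, bias = gen[:, :, :k * k], gen[:, :, k * k]
        # Unfold neighborhoods to (b, c, k*k, h, w) and apply per-pixel kernels.
        patches = self.unfold(x).view(b, c, k * k, h, w)
        out = (weight * patches).sum(dim=2) + bias
        return out + x  # residual path


if __name__ == "__main__":
    layer = ChannelWiseInvolution(32)
    x = torch.randn(1, 32, 48, 48)
    print(layer(x).shape)  # torch.Size([1, 32, 48, 48])
```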