The rapid development of the Internet has made people increasingly dependent on networks for information transmission. Digital images and videos have gradually become two of the most important forms of information exchange because they can be disseminated very quickly. However, images and videos often contain private data, such as corporate secrets and personal identity information, and the harm caused by leakage of such data cannot be underestimated. Therefore, the security of images and videos has attracted widespread attention from the public and related researchers. At present, the processing of images and videos typically includes the following steps: sampling, compression, encryption, transmission, decryption, decompression, reconstruction, and intelligent processing. During these steps, images and videos are encrypted by the sender and decrypted by the receiver to avoid potential leakage of private data through interception during online transmission. This processing mode mainly aims to prevent information leakage during transmission, but it ignores the risks posed by the intelligent application of images and videos after their reconstruction. To address the contradiction between existing data processing methods and actual social needs, a novel visual roughened sensing scheme is proposed for typical intelligent applications such as private human pose recognition.
Pedestrian occlusion, cross-view angle variations, and changes in pedestrian appearance significantly hinder person re-identification (ReID). A dual attention and part drop network (DAPD-Net) for person ReID is proposed. The dual attention module enables the deep neural network to focus on the pedestrian in the foreground of a given image and weakens background perturbance, which speeds up learning and improves network performance. In the proposed part drop branch, feature maps are divided into multiple parts; one part is randomly dropped, and the remaining parts are used for learning, yielding features that are robust against occlusion. Through part drop training, the anti-occlusion ability of the network is effectively improved. A middle-layer branch is also used, which helps the network learn mid-level semantic features and further improves performance. Together, these modules help the deep neural network extract discriminative feature representations. We conduct extensive experiments on multiple public person ReID datasets, and the results show that our method outperforms many state-of-the-art methods.
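The part drop idea described above can be illustrated with a minimal sketch: a convolutional feature map is split into horizontal stripes and one stripe is randomly suppressed during training. The abstract does not specify the number of parts or whether the dropped part is zeroed or removed, so `num_parts=4` and zero-masking are assumptions here, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def part_drop(feature_map, num_parts=4):
    """Split a (C, H, W) feature map into num_parts horizontal stripes
    and zero out one randomly chosen stripe (assumed drop policy)."""
    c, h, w = feature_map.shape
    part_h = h // num_parts
    dropped = int(rng.integers(num_parts))
    start = dropped * part_h
    # Let the last stripe absorb any remainder rows.
    end = h if dropped == num_parts - 1 else start + part_h
    out = feature_map.copy()
    out[:, start:end, :] = 0.0
    return out, dropped

# Example: a dummy 256-channel, 24x8 feature map of ones.
fmap = np.ones((256, 24, 8), dtype=np.float32)
aug, idx = part_drop(fmap)
```

Because the network never sees one stripe of the pedestrian in a given pass, it cannot rely on any single body region, which is the intuition behind the improved robustness to occlusion.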
We propose a robust local L2,1 tracker based on red-green-blue (RGB) color channel fusion. In this tracker, the object is first divided into several overlapping patches, and the local sparse optimization over the RGB color channels of each patch is then solved via L2,1 mixed-norm regularization, which fuses multicolor channel information. In the calculation of candidate object confidence, the confidences from the RGB color channels are fused to obtain the total confidence of each candidate object, which enables accurate selection of the best candidate. For the update module of the template set and dictionary database, we design an adaptive update mechanism: the template and dictionary to delete are determined by sorting the cosine similarities between the tracking result and the templates, while the dictionary database is updated by replacing the old dictionaries with the reconstruction results of all patches corresponding to the tracking result. This update method effectively adapts to appearance changes of the object and alleviates tracking drift. Both qualitative and quantitative evaluations on challenging video sequences demonstrate that the proposed tracker is reliable and effective, performing favorably against several state-of-the-art methods.
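The cosine-similarity-based template update can be sketched as follows. This is an illustrative assumption of one plausible policy consistent with the abstract: the template least similar to the current tracking result is deleted and the result is appended as a new template; the abstract does not state which end of the sorted similarity list is removed, and the dictionary-database half of the update is omitted here.

```python
import numpy as np

def update_templates(templates, result):
    """Delete the template with the lowest cosine similarity to the
    current tracking result (assumed rule), then append the result
    as a new template. Templates and result are 1-D feature vectors."""
    def cos_sim(a, b):
        # Small epsilon guards against division by zero.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    sims = [cos_sim(t, result) for t in templates]
    drop = int(np.argmin(sims))          # least similar template
    kept = [t for i, t in enumerate(templates) if i != drop]
    kept.append(result)                  # tracking result joins the set
    return kept, drop

# Example: the second template is orthogonal to the result and is dropped.
templates = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.9, 0.1])]
new_set, dropped = update_templates(templates, np.array([1.0, 0.0]))
```

Replacing only the least-representative template keeps the set size fixed while letting the template set track gradual appearance changes, which is the stated motivation for the adaptive update.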