KEYWORDS: Performance modeling, Feature extraction, Data modeling, Mining, Visualization, Cameras, Visual process modeling, Light sources and illumination, High power microwaves, Convolutional neural networks
The purpose of person re-identification (ReID) is to retrieve a person of interest from a set of images taken by multiple cameras. In person ReID, constructing robust pedestrian features with global-local features and attention mechanisms has proven effective. However, these methods focus only on extracting highly discriminative pedestrian features while ignoring potential features, which are equally valuable and can play an important role in person ReID. To extract these potential features, which are hidden by salient features, we propose a person ReID network based on weight-driven hierarchical utilization of saliency. Three improvements are exploited to extract more comprehensive and diverse pedestrian information. First, we use a non-local module to enhance the feature extraction ability of the model, applying a saliency enhancement and suppression operation inside it to mine potential features. Second, we employ a new multi-stage global feature fusion module to increase the diversity of features. Third, we use a multi-branch attention module to extract finer-grained part features and further improve performance. Extensive experiments show that our model achieves excellent performance on the Market-1501, DukeMTMC-reID, and MSMT17 person ReID datasets.
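The non-local operation and the saliency suppression described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the 1x1 projection convolutions of a real non-local block are replaced by identity mappings, and `suppress_salient` with its `ratio` hyper-parameter is an assumed form of the paper's saliency suppression.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local_block(x):
    """Simplified non-local (self-attention) operation on a feature map.

    x: (C, H, W) feature map. The theta/phi/g projections are identity
    here for illustration; in practice they are learned 1x1 convolutions.
    """
    C, H, W = x.shape
    flat = x.reshape(C, H * W).T           # (N, C), N = H*W spatial positions
    attn = softmax(flat @ flat.T)          # (N, N) pairwise affinities
    out = attn @ flat                      # aggregate features over all positions
    return x + out.T.reshape(C, H, W)      # residual connection

def suppress_salient(feat, ratio=0.1):
    """Illustrative saliency suppression: zero the strongest activations so
    that less prominent ('potential') features can contribute. The top-k
    masking rule and ratio=0.1 are assumptions, not the paper's exact design."""
    flat = np.abs(feat).ravel()
    k = max(1, int(ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]    # k-th largest absolute activation
    return np.where(np.abs(feat) >= thresh, 0.0, feat)
```

In a full model, the suppressed branch would be processed alongside the original branch so the network is forced to learn from non-salient regions as well.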
The purpose of person re-identification (Re-ID) is to retrieve a person of interest from a set of images taken by multiple cameras. In some current work, simple global and local features do not allow the model to achieve excellent performance. In this paper, we propose an end-to-end person re-identification network that integrates multi-granularity pedestrian features. Our model contains multiple branching feature extraction modules: specifically, two global feature extraction modules, two auxiliary modules, and two attention modules. To enhance the feature extraction capability of the model, we embed an improved parameter-free attention module in the backbone network, which significantly improves performance. Comprehensive experiments on the mainstream evaluation datasets Market-1501 and DukeMTMC-reID show that our method outperforms most existing methods. For example, on the Market-1501 dataset, with the help of a re-ranking (RK) strategy, we obtain rank-1/mAP = 95.8%/94.0%, which exceeds most current methods.
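A well-known parameter-free attention design of the kind the abstract mentions is SimAM-style energy-based gating, sketched below in NumPy. This is a hedged illustration under the assumption that the "improved parameter-free attention module" follows this family; the regularizer `lam=1e-4` is an assumed value, and the paper's specific improvement is not reproduced here.

```python
import numpy as np

def parameter_free_attention(x, lam=1e-4):
    """SimAM-style parameter-free attention on a (C, H, W) feature map.

    Each position is re-weighted by an energy term computed purely from
    per-channel statistics -- no learnable parameters are introduced.
    lam is a small variance regularizer (assumed value).
    """
    C, H, W = x.shape
    n = H * W - 1
    mu = x.mean(axis=(1, 2), keepdims=True)        # per-channel spatial mean
    d = (x - mu) ** 2                              # squared deviation per position
    var = d.sum(axis=(1, 2), keepdims=True) / n    # per-channel variance estimate
    energy = d / (4.0 * (var + lam)) + 0.5         # inverse-energy importance score
    return x * (1.0 / (1.0 + np.exp(-energy)))     # sigmoid gating of the input
```

Because the weights come only from channel statistics, the module can be dropped into a backbone (e.g. between residual stages) without adding parameters or changing output shapes.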