Most crowd counting methods rely on integrating density maps for prediction, but their performance degrades under density variations. Existing methods primarily employ multi-scale architectures to mitigate this issue; however, few approaches consider scale and temporal information jointly. We propose a scale-divided architecture for video crowd counting. First, density maps generated with Gaussian kernels of different scales are used to retain information at multiple scales, accommodating scale changes across images. Second, observing that the spatiotemporal network places greater emphasis on individual locations, we aggregate temporal information at a specific scale. This design enables the temporal model to acquire more spatial information and to alleviate occlusion. Experimental results on several public datasets demonstrate the superior performance of the proposed method.
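As a rough illustration of the multi-scale density-map supervision the abstract describes, the sketch below generates ground-truth density maps at several Gaussian scales from annotated head points. This is not the authors' code; the kernel widths (`sigmas`), frame shape, and function name are illustrative assumptions.

```python
# Minimal sketch: build one density map per Gaussian scale from head annotations.
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_density_maps(points, shape, sigmas=(4, 8, 16)):
    """points: iterable of (row, col) head coordinates.
    shape: (H, W) of the frame.
    sigmas: Gaussian standard deviations, one per scale branch (assumed values).
    """
    base = np.zeros(shape, dtype=np.float32)
    for r, c in points:
        r, c = int(round(r)), int(round(c))
        if 0 <= r < shape[0] and 0 <= c < shape[1]:
            base[r, c] += 1.0  # one unit of count per annotated person
    # Each blurred map still integrates to (approximately) the crowd count,
    # but spreads mass over a different spatial extent, so the set of maps
    # preserves information at several scales.
    return [gaussian_filter(base, sigma=s) for s in sigmas]

# Usage example: three people in a 240x320 frame
maps = multi_scale_density_maps([(50, 60), (120, 200), (200, 30)], (240, 320))
print([float(m.sum()) for m in maps])  # each sum is approximately 3.0
```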
Keywords: Video, Education and training, Convolution, Feature extraction, Ablation, Image processing, Visualization