As an uncommon biometric modality, human gait recognition has the distinct advantage of identifying people at a distance without requiring high-resolution images. It has attracted much attention in recent years, especially in the fields of computer vision and remote sensing. In this paper, we propose a human gait recognition framework that consists of a reliable background subtraction method, pyramid of Histograms of Oriented Gradients (pHOG) feature extraction on the silhouette image, and a Hidden Markov Model (HMM) based classifier. Through background subtraction, the human silhouette in each frame is extracted from the raw video sequence and normalized. After removing shadow and noise in each region of interest (ROI), the pHOG feature is computed on the silhouette images. The pHOG features of each gait class are then used to train a corresponding HMM. In the test stage, pHOG features are extracted from each test sequence and used to compute the posterior probability under each trained HMM. Experimental results on the CASIA Gait Dataset B1 demonstrate that our proposed method achieves a very competitive recognition rate.
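To make the pipeline concrete, the sketch below shows one way to implement the pHOG descriptor and the per-class HMM classification; it is a minimal illustration rather than the authors' implementation. The 8 orientation bins, the 3-level spatial pyramid, and the use of hmmlearn's GaussianHMM are assumptions, and the paper's actual settings may differ.

```python
import numpy as np
from hmmlearn import hmm

def phog(silhouette, n_bins=8, levels=3):
    """Pyramid Histogram of Oriented Gradients on a single silhouette image."""
    gy, gx = np.gradient(silhouette.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)               # unsigned orientations in [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    h, w = silhouette.shape
    feats = []
    for level in range(levels):
        cells = 2 ** level                                 # 1x1, 2x2, 4x4, ... spatial grid
        for i in range(cells):
            for j in range(cells):
                ys = slice(i * h // cells, (i + 1) * h // cells)
                xs = slice(j * w // cells, (j + 1) * w // cells)
                hist = np.bincount(bins[ys, xs].ravel(),
                                   weights=mag[ys, xs].ravel(),
                                   minlength=n_bins)
                feats.append(hist / (hist.sum() + 1e-8))   # normalize each cell histogram
    return np.concatenate(feats)

def train_class_hmm(sequences, n_states=5):
    """Fit one HMM on all pHOG frame sequences belonging to a single gait class."""
    X = np.vstack(sequences)                               # (total_frames, feature_dim)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
    model.fit(X, lengths)
    return model

def classify(test_sequence, class_models):
    """Label a test sequence by the class whose HMM gives the highest log-likelihood."""
    return max(class_models, key=lambda c: class_models[c].score(test_sequence))
```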
Multi-modality sensor fusion has been widely employed in various surveillance and military applications. A variety of image fusion techniques, including PCA, wavelet, curvelet, and HSV-based methods, have been proposed in recent years
to improve human visual perception for object detection. One of the main challenges for visible and infrared
image fusion is to automatically determine an optimal fusion strategy for different input scenes along with an
acceptable computational cost.
In this paper, we propose a fast and adaptive feature-selection-based image fusion method to obtain a high-contrast image from visible and infrared sensors for target detection. First, fuzzy c-means clustering is applied to the infrared image to highlight possible hotspot regions, which are treated as potential target locations. The region surrounding each target area is then segmented as the background region. Image fusion is then applied locally on the selected target and background regions by computing different linear combinations of color components from the registered visible and infrared images. After obtaining the different fused images, histogram distributions are computed on these locally fused images to form the fusion feature set. The variance ratio, a measure based on Linear Discriminant Analysis (LDA), is employed to rank the feature set, and the most discriminative feature is selected for the whole-image fusion. Because the feature selection is performed over time, the process dynamically determines the most suitable feature for image fusion in different scenes. Experiments are conducted on the OSU Color-Thermal database and the TNO Human Factors dataset. The fusion results indicate that our proposed method achieves competitive performance compared with other fusion algorithms at a relatively low computational cost.
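The sketch below illustrates two of these steps, the fuzzy c-means hotspot segmentation on the infrared image and an LDA-style variance ratio for ranking candidate fusion features, under stated assumptions: a plain NumPy two-cluster fuzzy c-means on pixel intensities, and a variance ratio computed on the log-likelihood of target versus background histograms, which is one common reading of the measure and may differ from the paper's exact formulation.

```python
import numpy as np

def fuzzy_cmeans(values, c=2, m=2.0, iters=50, eps=1e-5, seed=0):
    """Plain fuzzy c-means on a 1-D array of pixel intensities."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, values.size))
    u /= u.sum(axis=0)                                    # memberships sum to 1 per pixel
    for _ in range(iters):
        um = u ** m
        centers = (um @ values) / um.sum(axis=1)          # fuzzy-weighted cluster centers
        d = np.abs(values[None, :] - centers[:, None]) + 1e-9
        new_u = 1.0 / d ** (2.0 / (m - 1.0))
        new_u /= new_u.sum(axis=0)
        converged = np.abs(new_u - u).max() < eps
        u = new_u
        if converged:
            break
    return centers, u

def hotspot_mask(ir_image, threshold=0.5):
    """Binary mask of the hottest fuzzy cluster, i.e. candidate target regions."""
    vals = ir_image.astype(float).ravel()
    centers, u = fuzzy_cmeans(vals)
    hot = int(np.argmax(centers))                         # cluster with highest IR intensity
    return u[hot].reshape(ir_image.shape) > threshold

def variance_ratio(p, q, eps=1e-6):
    """LDA-style variance ratio of the log-likelihood between a target histogram p
    and a background histogram q; larger values indicate better separability."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    L = np.log((p + eps) / (q + eps))
    def var(a):                                           # variance of L under weights a
        return np.sum(a * L ** 2) - np.sum(a * L) ** 2
    return var(0.5 * (p + q)) / (var(p) + var(q) + eps)
```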
Detection and tracking of a varying number of people is essential in surveillance sensor systems. In real applications, variations in human appearance, the presence of confusors, and changing environmental conditions make multi-target detection and tracking even more challenging. In this paper, we propose a new framework integrating a Multiple-Stage Histogram of Oriented Gradients (HOG) based human detector and the Particle Filter Gaussian Process Dynamical Model (PFGPDM) for multi-target detection and tracking. The Multiple-Stage HOG human detector takes advantage of both the HOG feature set and human motion cues. The detector enables the framework to detect new targets entering the scene and to provide potential hypotheses for particle sampling in the PFGPDM. After processing the detection results, the motion of each new target is computed and projected into the low-dimensional latent space of the GPDM to find the most similar trained motion trajectory. In addition, the particle propagation of existing targets integrates both the motion trajectory prediction in the latent space of the GPDM and the hypotheses produced by the HOG human detector. Experimental tests are conducted on the IDIAP dataset. The test results demonstrate that the proposed approach can robustly detect and track a varying number of targets with reasonable run-time overhead and good performance.
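As a rough illustration of the detection side of such a framework, the sketch below gates OpenCV's stock HOG person detector with a simple frame-differencing motion cue, so that only detections overlapping moving regions are kept as hypotheses for the particle filter. The two-stage structure, the thresholds, and the use of OpenCV's default people detector are illustrative assumptions, not the paper's Multiple-Stage HOG detector.

```python
import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def motion_mask(prev_gray, gray, thresh=25):
    """Binary mask of pixels that changed between two consecutive grayscale frames."""
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    return mask

def detect_hypotheses(prev_gray, gray, min_motion=0.05):
    """Stage 1: HOG person detections; stage 2: keep boxes that contain enough motion."""
    mask = motion_mask(prev_gray, gray)
    rects, weights = hog.detectMultiScale(gray, winStride=(8, 8))
    hypotheses = []
    for (x, y, w, h), score in zip(rects, np.ravel(weights)):
        moving_fraction = mask[y:y + h, x:x + w].mean() / 255.0
        if moving_fraction > min_motion:
            hypotheses.append(((int(x), int(y), int(w), int(h)), float(score)))
    return hypotheses   # passed on as new-target candidates / particle proposals
```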
In this paper, we present a new particle filter based multi-target tracking method that incorporates the Gaussian Process Dynamical Model (GPDM) to improve the robustness of multi-target tracking on complex motion patterns. In the Particle Filter Gaussian Process Dynamical Model (PFGPDM), a high-dimensional dataset of training target trajectories in the observation space is projected into a low-dimensional latent space through Probabilistic Principal Component Analysis (PPCA), which is then used to classify test object trajectories, predict the next motion state, and provide Gaussian process dynamical samples for the particle filter. In addition, the histogram Bhattacharyya distance and the GMM Kullback-Leibler divergence are employed and compared in the particle filter as complementary features to the coordinate data used in the GPDM. Experimental tests are conducted on the PETS2007
benchmark dataset. The test results demonstrate that the approach can track more than four targets with
reasonable run-time overhead and good performance.
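The two appearance measures mentioned above can be sketched as follows. This is an illustration under assumptions (normalized histograms, scikit-learn GaussianMixture models, and a Monte Carlo KL estimate, since the KL divergence between mixtures has no closed form) rather than the paper's exact formulation; in a particle filter such distances are typically mapped to likelihoods and multiplied into each particle's weight.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def bhattacharyya_distance(p, q, eps=1e-10):
    """Bhattacharyya distance between two histograms; 0 means identical distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))                    # Bhattacharyya coefficient
    return -np.log(bc + eps)

def gmm_kl(gmm_p, gmm_q, n_samples=2000):
    """Monte Carlo estimate of KL(p || q) between two fitted GaussianMixture models."""
    X, _ = gmm_p.sample(n_samples)
    return float(np.mean(gmm_p.score_samples(X) - gmm_q.score_samples(X)))

def appearance_weight(distance, sigma=0.2):
    """Map a distance to a particle likelihood, as in color-based particle filters."""
    return float(np.exp(-distance ** 2 / (2.0 * sigma ** 2)))
```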