Ediz Saykol, Muhammet Bastan, Uğur Güdükbay, Özgür Ulusoy
Optical Engineering, Vol. 49, Issue 11, 117203 (November 2010). https://doi.org/10.1117/1.3509270
TOPICS: Video surveillance, Video, Surveillance, Positron emission tomography, Detection and tracking algorithms, Optical engineering, Surveillance systems, Semantic video, Data modeling, Cameras
The huge amount of video data generated by surveillance systems necessitates the use of automatic tools for its efficient analysis, indexing, and retrieval. Automated access to the semantic content of surveillance videos, in particular the detection of anomalous events, is among the basic tasks; although a considerable amount of research on automated video surveillance has appeared in the literature, the task remains challenging due to the high variability of the audio-visual features and the large size of the video input. We propose a keyframe labeling technique, designed especially for indoor environments, that assigns labels to keyframes extracted by a keyframe detection algorithm and thereby transforms the input video into an event-sequence representation. This representation is used to detect unusual behaviors, such as crossover, deposit, and pickup, with the help of three separate mechanisms based on finite state automata. The keyframes themselves are detected from a grid-based motion representation of the moving regions, called the motion appearance mask. Performance experiments show that the keyframe labeling algorithm significantly reduces the storage requirements and yields reasonable event detection and classification performance.
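For illustration, the sketch below shows one way a grid-based motion representation of this kind could be computed from two consecutive grayscale frames: each grid cell is marked active when enough of its pixels change between frames. The function name, grid size, and thresholds are assumptions made for this example and are not taken from the paper.

```python
import numpy as np

def motion_appearance_mask(prev_frame, frame, grid=(8, 8),
                           pixel_threshold=25, cell_ratio=0.1):
    """Quantize frame differencing onto a coarse grid: a cell is marked
    active if a sufficient fraction of its pixels changed between two
    consecutive grayscale frames (uint8 arrays of equal shape).
    This is only a sketch of a grid-based motion representation; the
    paper's exact construction may differ."""
    diff = np.abs(frame.astype(np.int16) -
                  prev_frame.astype(np.int16)) > pixel_threshold
    h, w = diff.shape
    rows, cols = grid
    mask = np.zeros(grid, dtype=bool)
    for i in range(rows):
        for j in range(cols):
            cell = diff[i * h // rows:(i + 1) * h // rows,
                        j * w // cols:(j + 1) * w // cols]
            mask[i, j] = cell.mean() > cell_ratio  # fraction of changed pixels
    return mask
```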
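The event-sequence representation lends itself to automaton-based recognition. The following sketch illustrates, for the deposit event only, how one such finite-state mechanism could consume a sequence of keyframe labels; the label vocabulary (person_enters, object_appears, person_exits) and the transition structure are hypothetical and do not reproduce the paper's actual label set.

```python
from enum import Enum, auto

class DepositState(Enum):
    IDLE = auto()            # no person in the scene
    PERSON_PRESENT = auto()  # a person has entered
    OBJECT_LEFT = auto()     # a stationary object has appeared
    ACCEPT = auto()          # deposit pattern completed

def detect_deposit(label_sequence):
    """Run a small finite state automaton over a sequence of keyframe
    labels and report whether a deposit-like pattern is observed."""
    state = DepositState.IDLE
    for label in label_sequence:
        if state is DepositState.IDLE and label == "person_enters":
            state = DepositState.PERSON_PRESENT
        elif state is DepositState.PERSON_PRESENT and label == "object_appears":
            state = DepositState.OBJECT_LEFT
        elif state is DepositState.OBJECT_LEFT and label == "person_exits":
            state = DepositState.ACCEPT  # accepting state: deposit detected
            return True
    return False

# Example: a labeled keyframe sequence matching the assumed deposit pattern.
print(detect_deposit(["person_enters", "object_appears", "person_exits"]))  # True
```

Analogous automata, with their own label patterns and accepting states, could be written for the crossover and pickup events mentioned in the abstract.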