Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment
Hongjian Zhao, Shixiong Xia, Rui Yao, Qiang Niu, Yong Zhou
Abstract
Concatenating multicamera videos with differing centers of projection into a single panoramic video is a critical technology for many applications. We propose a real-time video fusion approach to create wide field-of-view video. To provide fast and accurate video registration, we propose multistage hashing to find matched feature-point pairs from coarse to fine. In the first stage, a short compact binary code is learned from all feature points, and the Hamming distance between each pair of points is computed to identify candidate matches. In the second stage, a longer binary code is obtained by remapping the candidate points for fine matching. To handle the distortion and scene-depth variation of multiview frames, we build a hybrid transformation with depth adjustment; the depth compensation between two adjacent frames is extended to successive video frames through an iterative model. We conduct experiments with different dynamic scenes and camera counts to verify the performance of the proposed real-time video fusion approach.
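The coarse-to-fine matching idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' method: the learned hash is stood in for by a random sign projection, and the code lengths (8 and 64 bits), coarse threshold, and helper names are all illustrative assumptions.

```python
import numpy as np

def binary_codes(descriptors, projection):
    # Hash real-valued descriptors to binary codes by the sign of a
    # linear projection (a stand-in for the learned hash function).
    return (descriptors @ projection > 0).astype(np.uint8)

def hamming(a, b):
    # Pairwise Hamming distances between rows of two binary code matrices.
    return (a[:, None, :] != b[None, :, :]).sum(axis=2)

def match_two_stage(desc_a, desc_b, short_bits=8, long_bits=64,
                    coarse_thresh=2, seed=0):
    rng = np.random.default_rng(seed)
    dim = desc_a.shape[1]

    # Stage 1: short codes give a cheap coarse filter on candidate pairs.
    p_short = rng.standard_normal((dim, short_bits))
    ca, cb = binary_codes(desc_a, p_short), binary_codes(desc_b, p_short)
    candidates = hamming(ca, cb) <= coarse_thresh

    # Stage 2: longer codes rescore only the surviving candidates.
    p_long = rng.standard_normal((dim, long_bits))
    la, lb = binary_codes(desc_a, p_long), binary_codes(desc_b, p_long)
    fine = hamming(la, lb).astype(float)
    fine[~candidates] = np.inf  # discard pairs rejected by the coarse stage

    matches = []
    for i, row in enumerate(fine):
        j = int(np.argmin(row))
        if np.isfinite(row[j]):
            matches.append((i, j))
    return matches

# Toy usage: the second descriptor set is a noisy copy of the first,
# so point i should tend to match point i.
rng = np.random.default_rng(1)
a = rng.standard_normal((20, 32))
b = a + 0.05 * rng.standard_normal((20, 32))
matches = match_two_stage(a, b)
```

The point of the two stages is cost: Hamming distances on short codes are cheap enough to compute for every pair, while the longer, more discriminative codes are only scored on the small candidate set that survives the coarse filter.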
© 2015 SPIE and IS&T 1017-9909/2015/$25.00
Hongjian Zhao, Shixiong Xia, Rui Yao, Qiang Niu, and Yong Zhou "Real-time video fusion based on multistage hashing and hybrid transformation with depth adjustment," Journal of Electronic Imaging 24(6), 063023 (30 December 2015). https://doi.org/10.1117/1.JEI.24.6.063023
Published: 30 December 2015
KEYWORDS: Video, Cameras, Image fusion, Binary data, Video surveillance, Panoramic photography, Distortion