Motion-aware deep video coding network
Paper, 21 April 2020
Abstract
Recent advances in deep learning have achieved great success in fundamental computer vision tasks such as classification, detection, and segmentation. Nevertheless, research on deep learning-based video coding is still in its infancy. State-of-the-art deep video coding networks exploit temporal correlations through frame-level motion estimation and motion compensation, which incur high computational complexity due to the large frame size, while existing block-level inter-frame prediction schemes use only the co-located blocks in preceding frames and therefore do not account for object motion. In this work, we propose a novel motion-aware deep video coding network, in which inter-frame correlations are effectively exploited via a block-level motion compensation network. Experimental results demonstrate that the proposed inter-frame deep video coding model significantly improves decoding quality at the same compression ratio.
© (2020) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Rida Khan and Ying Liu "Motion-aware deep video coding network", Proc. SPIE 11395, Big Data II: Learning, Analytics, and Applications, 113950B (21 April 2020); https://doi.org/10.1117/12.2560814
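The abstract does not disclose implementation details, but the block-level motion compensation idea can be illustrated with a minimal PyTorch sketch: a small convolutional module predicts the current block from a search region centered on the co-located block in the previously reconstructed frame, so motion is handled per block rather than over the full frame. The class name, block and search-region sizes, and layer configuration below are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of block-level motion compensation for deep video coding.
# All layer choices and sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class BlockMotionCompensation(nn.Module):
    """Predicts the current block from a search region around the co-located
    block in the previous reconstructed frame (illustrative architecture)."""

    def __init__(self, block_size=16, channels=64):
        super().__init__()
        self.block_size = block_size
        # Convolutional stack mapping the reference search region to a
        # motion-compensated prediction; spatial size is preserved.
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 3, kernel_size=3, padding=1),
        )

    def forward(self, ref_region):
        # ref_region: (N, 3, S, S) search region from the previous frame,
        # centered on the co-located block of the current frame.
        pred = self.net(ref_region)
        # Crop the central block as the prediction of the current block.
        s = (pred.shape[-1] - self.block_size) // 2
        return pred[..., s:s + self.block_size, s:s + self.block_size]


if __name__ == "__main__":
    # Example: predict a 16x16 block from a 48x48 search region.
    mc = BlockMotionCompensation(block_size=16)
    region = torch.randn(1, 3, 48, 48)
    print(mc(region).shape)  # torch.Size([1, 3, 16, 16])
```

In a full codec the predicted block would be subtracted from the current block and the residual compressed, but the residual coding stage is outside the scope of this sketch.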
KEYWORDS
Video compression
Video
Video coding
Computer programming
Image compression
Video processing
Convolution