Video frame interpolation generates intermediate frames by estimating pixel motion between the input frames. However, naturally captured video frames often suffer from blurring, object occlusion, and sudden brightness changes. We propose a context-based video frame interpolation method via depthwise over-parameterized convolution. First, the proposed network extracts context maps from the input frames. Subsequently, an adaptive collaboration of flows is adopted to warp the input frames and the context maps. Then, a frame synthesis network fuses the warped input frames and context maps to obtain a preliminary estimate of the interpolated frame. Finally, a post-processing module refines the result. Experimental results on several datasets demonstrate that the proposed method performs qualitatively and quantitatively better than state-of-the-art methods.
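The core building block named in the title is the depthwise over-parameterized convolution (DO-Conv). The following is a minimal sketch of that idea, not the authors' exact implementation: a conventional kernel W is augmented with an extra depthwise kernel D during training, and the two are composed into a single kernel so the layer runs like an ordinary convolution at inference. All class and parameter names here (DOConv2d, W, D) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class DOConv2d(nn.Module):
    """Minimal sketch of a depthwise over-parameterized convolution.

    A conventional kernel W and an additional depthwise kernel D are kept as
    separate trainable parameters; they are composed into one kernel before
    each forward pass, so after folding the layer costs the same as a plain
    convolution at inference time.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.stride, self.padding = stride, padding
        self.kernel_size = kernel_size
        k2 = kernel_size * kernel_size
        # Conventional kernel W: (out_channels, in_channels, k*k)
        self.W = nn.Parameter(torch.randn(out_channels, in_channels, k2) * 0.02)
        # Depthwise kernel D: (in_channels, k*k, k*k), initialized to the
        # identity so the layer starts out equivalent to a standard convolution.
        self.D = nn.Parameter(torch.eye(k2).repeat(in_channels, 1, 1))

    def forward(self, x):
        # Compose the two kernels per input channel:
        # W'[o, c, n] = sum_m W[o, c, m] * D[c, m, n]
        W_prime = torch.einsum('ocm,cmn->ocn', self.W, self.D)
        W_prime = W_prime.view(self.W.shape[0], self.W.shape[1],
                               self.kernel_size, self.kernel_size)
        return F.conv2d(x, W_prime, stride=self.stride, padding=self.padding)


# Example usage: a drop-in replacement for nn.Conv2d inside, e.g., a context
# extractor or frame synthesis network.
layer = DOConv2d(64, 64, kernel_size=3, padding=1)
y = layer(torch.randn(1, 64, 32, 32))  # -> shape (1, 64, 32, 32)

Because D is initialized to the identity, the extra parameters only add training-time capacity; once training is done, the composed kernel W' can be stored and the depthwise factor discarded.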
Keywords: Video, Convolution, Visualization, Optical flow, Motion estimation, Motion models, Network architectures