One fundamental component of video compression standards is Intra-Prediction. Intra-Prediction exploits the redundancy among neighboring pixel values within a video frame to predict blocks of pixels from their surrounding pixels, so that only the prediction errors, rather than the pixel values themselves, need to be transmitted. Because the prediction errors are typically smaller in magnitude than the original pixel values, this enables compression of the video stream. Prevalent standards exploit these intra-frame pixel value dependencies to perform prediction at the encoder and transfer only the residual errors to the decoder. The standards define multiple "Modes", which are different linear combinations of pixels used to predict their neighbors within image Macro-Blocks (MBs). In this research, we use Deep Neural Networks (DNNs) to perform the predictions. Using twelve Fully Connected Networks, we reduce the Mean Square Error (MSE) of the prediction by up to 3 times compared with the standard prediction modes. This substantial improvement comes at the expense of more extensive computation; however, the additional cost can be significantly mitigated by the use of dedicated Graphical Processing Units (GPUs).
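To illustrate the idea of replacing a prediction mode with a learned predictor, the following is a minimal PyTorch sketch of a fully connected network that maps previously decoded neighbor pixels to a predicted block, trained with an MSE objective so that only the residual would need to be coded. The block size, context size, layer widths, and training details here are illustrative assumptions, not the configuration used in the paper.

```python
# Hypothetical sketch: one fully connected intra-prediction network.
# Assumes 4x4 blocks predicted from 13 neighbor pixels (left column,
# top-left corner, top row, top-right extension); these sizes are
# illustrative, not taken from the paper.
import torch
import torch.nn as nn

CONTEXT_PIXELS = 13   # assumed number of reconstructed neighbor pixels
BLOCK_PIXELS = 16     # assumed 4x4 predicted block

class IntraPredictorFC(nn.Module):
    """Fully connected network mapping neighbor pixels to a predicted block."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(CONTEXT_PIXELS, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, BLOCK_PIXELS),
        )

    def forward(self, context: torch.Tensor) -> torch.Tensor:
        return self.net(context)

# Training step sketch: minimize MSE between predicted and actual block,
# so only the residual (actual - predicted) would be transmitted.
model = IntraPredictorFC()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch standing in for (context, block) pairs extracted from frames.
context = torch.rand(32, CONTEXT_PIXELS)
block = torch.rand(32, BLOCK_PIXELS)

pred = model(context)
loss = loss_fn(pred, block)
optimizer.zero_grad()
loss.backward()
optimizer.step()

residual = block - pred.detach()  # residual that would be coded and sent
```

In this kind of setup, one such network could be trained per prediction mode or block configuration; the twelve networks mentioned above would each play a role analogous to one of the standard linear modes.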