Neural Inter-Frame Compression for Video Coding


Abstract:

While there are many deep learning based approaches for single image compression, the field of end-to-end learned video coding has remained much less explored. Therefore, in this work we present an inter-frame compression approach for neural video coding that can seamlessly build upon different existing neural image codecs. Our end-to-end solution performs temporal prediction by optical flow based motion compensation in pixel space. The key insight is that we can increase both decoding efficiency and reconstruction quality by encoding the required information into a latent representation that directly decodes into motion and blending coefficients. To account for remaining prediction errors, residual information between the original image and the interpolated frame is needed. We propose to compute residuals directly in latent space instead of in pixel space, as this allows us to reuse the same image compression network for both key frames and intermediate frames. Our extended evaluation on different datasets and resolutions shows that the rate-distortion performance of our approach is competitive with existing state-of-the-art codecs.
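The pipeline the abstract describes can be sketched in a few lines: warp a reference frame with an optical flow field, blend it into a pixel-space prediction, and then compute the residual between encoder latents rather than between pixels. The sketch below is a toy illustration only, assuming a nearest-neighbor backward warp, a scalar blending coefficient, and a 2x2 average-pooling stand-in for the learned image encoder; none of these are the paper's actual components.

```python
import numpy as np

def warp(frame, flow):
    """Backward-warp a grayscale frame by a per-pixel flow field.

    Nearest-neighbor sampling with border clipping; flow[..., 0] is the
    horizontal offset, flow[..., 1] the vertical offset (toy assumption).
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.rint(ys + flow[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.rint(xs + flow[..., 0]).astype(int), 0, w - 1)
    return frame[src_y, src_x]

def motion_compensate(ref, flow, blend):
    """Pixel-space temporal prediction: blend warped and unwarped reference."""
    return blend * warp(ref, flow) + (1.0 - blend) * ref

def encode(frame):
    """Stand-in for a learned image encoder: 2x2 average pooling as a toy latent."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy example: the "current" frame is the reference shifted right by one pixel,
# so a uniform backward flow of -1 in x recovers it (except at the left border).
ref = np.arange(16.0).reshape(4, 4)
cur = np.roll(ref, 1, axis=1)
flow = np.zeros((4, 4, 2))
flow[..., 0] = -1.0

pred = motion_compensate(ref, flow, blend=1.0)
# Residual computed in latent space, not pixel space: only this (smaller)
# tensor would need to be coded for the intermediate frame.
latent_residual = encode(cur) - encode(pred)
```

The latent-space residual is what makes the codec reusable across frame types: the same image compression network encodes key frames directly and intermediate frames via their (typically near-zero) latent residuals.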
Date of Conference: 27 October 2019 - 02 November 2019
Date Added to IEEE Xplore: 27 February 2020
Conference Location: Seoul, Korea (South)

1. Introduction

In 2017, video content already represented 75% of total internet traffic, and it is projected to reach 82% by 2022 [7]. This growth is driven by an expected increase in streaming-service subscribers as well as by higher resolutions, frame rates, and dynamic ranges. As a result, video compression techniques are challenged to handle this data efficiently and with little loss of visual quality.
