
Proceedings Paper

Propagation of quantization error in performing intra-prediction with deep learning
Author(s): Raz Birman; Yoram Segal; Avishay David-Malka; Ofer Hadar; Ron Shmueli

Paper Abstract

Standard video compression algorithms use multiple “Modes”, which are various linear combinations of pixels for prediction of their neighbors within image Macro-Blocks (MBs). In this research, we use Deep Neural Networks (DNN) with supervised learning to predict block pixels. By using DNNs and employing intra-block pixel calculations that penetrate into the block, we obtain improved predictions that yield up to 200% reduction of residual block errors. However, using intra-block pixels for prediction brings about interesting tradeoffs between prediction errors and quantization errors. We explore and explain these tradeoffs for two different DNN types. We further discovered that it is possible to achieve a larger dynamic range of the quantization parameter (Qp) and thus reach lower bit-rates than standard modes, which already saturate at these Qp levels. We explore this phenomenon and explain the reasons behind it.
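To make the supervised intra-prediction setup concrete, the sketch below trains a small fully connected network to map the reference pixels bordering a 4×4 block to the 16 block pixels, minimizing the residual energy left after prediction. The block size, network width, loss, and the use of PyTorch are illustrative assumptions and are not the architectures evaluated in the paper.

```python
import torch
import torch.nn as nn

class IntraPredictor(nn.Module):
    """Toy DNN intra-predictor: 9 reference pixels (4 top + 4 left + corner)
    of a 4x4 block are mapped to a prediction of the 16 block pixels.
    Layer sizes are assumptions for illustration only."""
    def __init__(self, n_ref=9, block_pixels=16, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_ref, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, block_pixels),
        )

    def forward(self, refs):
        return self.net(refs)

# Supervised training on (reference pixels, original block) pairs,
# minimizing the mean squared residual block error.
model = IntraPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

refs = torch.rand(256, 9)     # placeholder data in [0, 1]; real training would
blocks = torch.rand(256, 16)  # use reference/block pairs extracted from frames
for _ in range(100):
    opt.zero_grad()
    pred = model(refs)
    loss = loss_fn(pred, blocks)  # residual error remaining after prediction
    loss.backward()
    opt.step()
```

In an encoder, the predicted block would be subtracted from the original, and the resulting residual transformed and quantized with the chosen Qp; the tradeoff discussed in the abstract arises because intra-block predictions depend on pixels that have themselves been quantized.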

Paper Details

Date Published: 6 September 2019
PDF: 7 pages
Proc. SPIE 11137, Applications of Digital Image Processing XLII, 111370Z (6 September 2019); doi: 10.1117/12.2530341
Author Affiliations:
Raz Birman, Ben-Gurion Univ. of the Negev (Israel)
Yoram Segal, Ben-Gurion Univ. of the Negev (Israel)
Avishay David-Malka, Ben-Gurion Univ. of the Negev (Israel)
Ofer Hadar, Ben-Gurion Univ. of the Negev (Israel)
Ron Shmueli, Afeka College of Engineering (Israel)


Published in SPIE Proceedings Vol. 11137:
Applications of Digital Image Processing XLII
Andrew G. Tescher; Touradj Ebrahimi, Editors

© SPIE.