
Proceedings Paper

Block adaptive CNN/HEVC interframe prediction for video coding
Author(s): Satoru Jimbo; Ji Wang; Yoshiyuki Yashima

Paper Abstract

This paper proposes a block adaptive CNN/HEVC prediction method for bidirectional motion-compensated prediction, one of the key technologies in video coding. We have previously proposed a deep convolutional neural network (CNN) model that predicts a target block from its spatially co-located blocks in the temporally previous and future frames. The CNN model estimates four geometric transformation matrices, and the predicted values are produced by transforming both the previous and future blocks with these matrices. We have demonstrated that this method is highly effective for frames with complicated motion. However, the conventional CNN model often does not work well for frames with large motion. In this paper, we introduce an adaptation method that selects, block by block, between CNN-based prediction and HEVC-based prediction. Experimental results show that the proposed method reduces the prediction error to 60% to 90% of that of HEVC-only prediction for many kinds of videos with complex motion.
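The abstract does not specify the criterion used to choose between the two predictors, but a common encoder-side approach is to compute both candidate predictions for each block, compare their error against the actual target block, and signal the winning mode to the decoder. The sketch below illustrates that idea with a SAD (sum of absolute differences) criterion; the function names and the use of SAD are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two same-sized blocks.
    Cast to int64 so uint8 pixel values do not wrap on subtraction."""
    return int(np.abs(a.astype(np.int64) - b.astype(np.int64)).sum())

def select_prediction(target, cnn_pred, hevc_pred):
    """Encoder-side block-adaptive selection (illustrative sketch).

    Compares each candidate prediction against the known target block
    and returns (mode, prediction); 'mode' is the flag that would be
    signalled to the decoder, which cannot see the target itself.
    """
    if sad(target, cnn_pred) <= sad(target, hevc_pred):
        return "cnn", cnn_pred
    return "hevc", hevc_pred

# Toy usage: an 8x8 block where the CNN prediction is closer to the target.
target = np.full((8, 8), 100, dtype=np.uint8)
cnn_pred = np.full((8, 8), 98, dtype=np.uint8)
hevc_pred = np.full((8, 8), 90, dtype=np.uint8)
mode, pred = select_prediction(target, cnn_pred, hevc_pred)
```

In a real codec the per-block mode flag adds signalling overhead, so the decision would typically be rate-distortion based rather than pure SAD; SAD is used here only to keep the sketch minimal.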

Paper Details

Date Published: 22 March 2019
PDF: 6 pages
Proc. SPIE 11049, International Workshop on Advanced Image Technology (IWAIT) 2019, 1104936 (22 March 2019); doi: 10.1117/12.2520214
Author Affiliations:
Satoru Jimbo, Chiba Institute of Technology (Japan)
Ji Wang, Chiba Institute of Technology (Japan)
Yoshiyuki Yashima, Chiba Institute of Technology (Japan)

Published in SPIE Proceedings Vol. 11049:
International Workshop on Advanced Image Technology (IWAIT) 2019
Qian Kemao; Kazuya Hayase; Phooi Yee Lau; Wen-Nung Lie; Yung-Lyul Lee; Sanun Srisuk; Lu Yu, Editor(s)

© SPIE.