
Proceedings Paper

Spatial and temporal models for texture-based video coding

Paper Abstract

In this paper, we investigate spatial and temporal models for texture analysis and synthesis. The goal is to use these models to increase coding efficiency for video sequences containing textures. The models are used to segment texture regions in a frame at the encoder and to synthesize the textures at the decoder. These methods can be incorporated into a conventional video coder (e.g., H.264), where the regions modeled as textures are not coded in the usual manner; instead, texture model parameters are sent to the decoder as side information. We show that this approach can reduce the data rate by as much as 15%.
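To make the general workflow concrete, the sketch below illustrates the encoder/decoder split described in the abstract: texture blocks are not pixel-coded but replaced by a few model parameters, and the decoder synthesizes them while non-texture blocks pass through the conventional coder. This is only a minimal conceptual sketch, not the authors' implementation; the block-variance test and the Gaussian synthesis model are hypothetical placeholders for the paper's spatial and temporal texture models, and the dictionary of "coded" blocks stands in for an H.264 bitstream.

import numpy as np

BLOCK = 16  # block size used for the hypothetical segmentation

def encode_frame(frame, var_threshold=25.0):
    """Flag noise-like blocks as texture and keep only model parameters
    (side information) for them; all other blocks are coded conventionally."""
    h, w = frame.shape
    coded_blocks = {}   # stands in for blocks sent through an H.264-style coder
    side_info = {}      # texture model parameters sent instead of pixel data
    for y in range(0, h, BLOCK):
        for x in range(0, w, BLOCK):
            block = frame[y:y+BLOCK, x:x+BLOCK].astype(np.float64)
            if block.var() < var_threshold:          # hypothetical texture test
                side_info[(y, x)] = (block.mean(), block.std())
            else:
                coded_blocks[(y, x)] = block
    return coded_blocks, side_info

def decode_frame(shape, coded_blocks, side_info, rng=None):
    """Copy conventionally coded blocks and synthesize texture blocks
    from their transmitted model parameters."""
    rng = rng or np.random.default_rng(0)
    frame = np.zeros(shape)
    for (y, x), block in coded_blocks.items():
        frame[y:y+BLOCK, x:x+BLOCK] = block
    for (y, x), (mean, std) in side_info.items():
        # Placeholder synthesis: draw pixels from the block's parametric model.
        frame[y:y+BLOCK, x:x+BLOCK] = rng.normal(mean, std, (BLOCK, BLOCK))
    return frame

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frame = rng.normal(128, 3, (64, 64))        # mostly flat, texture-like content
    frame[16:32, 16:32] += np.arange(16) * 10   # one structured, non-texture region
    coded, side = encode_frame(frame)
    recon = decode_frame(frame.shape, coded, side)
    print(f"{len(side)} texture blocks replaced by side information")

Rate savings in this scheme come from the texture blocks: each is represented by a handful of parameters rather than quantized transform coefficients, at the cost of a synthesized (not pixel-accurate) reconstruction in those regions.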

Paper Details

Date Published: 29 January 2007
PDF: 10 pages
Proc. SPIE 6508, Visual Communications and Image Processing 2007, 650806 (29 January 2007); doi: 10.1117/12.705068
Author Affiliations:
Fengqing Zhu, Purdue Univ. (United States)
Ka Ki Ng, Purdue Univ. (United States)
Golnaz Abdollahian, Purdue Univ. (United States)
Edward J. Delp, Purdue Univ. (United States)


Published in SPIE Proceedings Vol. 6508:
Visual Communications and Image Processing 2007
Chang Wen Chen; Dan Schonfeld; Jiebo Luo, Editor(s)

© SPIE.