
Proceedings Paper

Deriving video content type from HEVC bitstream semantics

Paper Abstract

As network service providers seek to improve customer satisfaction and retention levels, they are increasingly moving from traditional quality of service (QoS) driven delivery models to customer-centred quality of experience (QoE) delivery models. QoS models only consider metrics derived from the network; QoE models, however, also consider metrics derived from within the video sequence itself. Various spatial and temporal characteristics of a video sequence have been proposed, both individually and in combination, to derive methods of classifying video content either on a continuous scale or as a set of discrete classes. QoE models can be divided into three broad categories: full-reference, reduced-reference and no-reference models. Due to the need to have the original video available at the client for comparison, full-reference metrics are of limited practical value in adaptive real-time video applications. Reduced-reference metrics often require metadata to be transmitted with the bitstream, while no-reference metrics typically operate in the decompressed domain at the client side and require significant processing to extract spatial and temporal features. This paper proposes a heuristic, no-reference approach to video content classification which is specific to HEVC encoded bitstreams. The HEVC encoder already makes use of spatial characteristics to determine the partitioning of coding units and temporal characteristics to determine the splitting of prediction units. We derive a function which approximates the spatio-temporal characteristics of the video sequence by using the weighted averages of the depth at which the coding unit quadtree is split and the prediction mode decision made by the encoder to estimate spatial and temporal characteristics respectively.
Since the video content type of a sequence is determined by using high level information parsed from the video stream, spatio-temporal characteristics are identified without the need for full decoding and can be used in a timely manner to aid decision making in QoE oriented adaptive real time streaming.
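The abstract's central idea, estimating spatial complexity from coding-unit (CU) quadtree split depths and temporal complexity from prediction-unit (PU) mode decisions parsed from the bitstream, can be sketched as follows. This is a minimal illustrative sketch, not the authors' actual formulation: the function names, the histogram representation, and the choice of intra-PU share as a temporal proxy are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's exact method): estimate spatial
# complexity from the depths at which the CU quadtree is split, and
# temporal complexity from the encoder's PU prediction-mode choices,
# both obtained by parsing high-level bitstream syntax without full decoding.

def weighted_average(histogram):
    """Weighted average of the keys in a {value: count} histogram."""
    total = sum(histogram.values())
    if total == 0:
        return 0.0
    return sum(value * count for value, count in histogram.items()) / total

def spatio_temporal_scores(cu_depth_hist, intra_pu_count, inter_pu_count):
    """Return (spatial_score, temporal_score) from parsed bitstream statistics.

    cu_depth_hist:  {quadtree split depth (0..3): number of CUs at that depth}
    intra_pu_count: number of intra-predicted PUs
    inter_pu_count: number of inter-predicted PUs
    """
    # Deeper CU splits suggest finer spatial detail in the frame.
    spatial = weighted_average(cu_depth_hist)
    # A larger share of intra-coded PUs suggests motion that inter
    # prediction could not exploit (an illustrative heuristic).
    total_pu = intra_pu_count + inter_pu_count
    temporal = intra_pu_count / total_pu if total_pu else 0.0
    return spatial, temporal
```

For example, a frame with equal numbers of CUs at depths 0 and 3, and one intra PU for every three inter PUs, would score 1.5 spatially and 0.25 temporally under this sketch; the thresholds mapping such scores onto discrete content classes are part of the paper's method and are not reproduced here.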

Paper Details

Date Published: 15 May 2014
PDF: 13 pages
Proc. SPIE 9139, Real-Time Image and Video Processing 2014, 913902 (15 May 2014); doi: 10.1117/12.2051757
James Nightingale, Univ. of the West of Scotland (United Kingdom)
Qi Wang, Univ. of the West of Scotland (United Kingdom)
Christos Grecos, Univ. of the West of Scotland (United Kingdom)
Sergio R. Goma, Qualcomm Inc. (United States)


Published in SPIE Proceedings Vol. 9139:
Real-Time Image and Video Processing 2014
Nasser Kehtarnavaz; Matthias F. Carlsohn, Editor(s)

© SPIE.