
Proceedings Paper

No-reference video quality assessment based on perceptual features extracted from multi-directional video spatiotemporal slices images
Author(s): Peng Yan; Xuanqin Mou

Paper Abstract

As video applications become more popular, no-reference video quality assessment (NR-VQA) has become a focus of research. In many existing NR-VQA methods, perceptual feature extraction is often the key to success. In this paper, we therefore design methods to extract perceptual features that cover a wide range of spatiotemporal information from multi-directional video spatiotemporal slice (STS) images (images generated by cutting the video data parallel to the temporal dimension in multiple directions) and use a support vector machine (SVM) to perform NR video quality evaluation. In the proposed NR-VQA design, we first extract the multi-directional video STS images to capture as complete a representation of the overall video motion as possible. Second, perceptual features of the multi-directional STS images, such as the moments of feature maps, joint distribution features of the gradient magnitude and the Laplacian of Gaussian filtering response, and motion energy characteristics, are extracted to characterize the motion statistics of the video. Finally, the extracted perceptual features are fed into an SVM or a multilayer perceptron (MLP) for training and testing. Experimental results show that the proposed method achieves state-of-the-art quality prediction performance on the largest existing annotated video database.
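The slicing step described above (cutting the video volume parallel to the temporal dimension in multiple directions) can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation: the function name `extract_sts_images`, the choice of center row/column, and the use of frame diagonals for the oblique directions are all assumptions for illustration.

```python
import numpy as np

def extract_sts_images(video, row=None, col=None):
    """Cut 2-D spatiotemporal slice (STS) images out of a video volume.

    video : ndarray of shape (T, H, W), grayscale frames.
    Each returned image is a slice parallel to the temporal axis,
    taken in a different spatial direction.
    """
    t, h, w = video.shape
    row = h // 2 if row is None else row   # assumed: slice at center row
    col = w // 2 if col is None else col   # assumed: slice at center column
    return {
        # fix one row, stack it over time -> shape (T, W)
        "horizontal": video[:, row, :],
        # fix one column, stack it over time -> shape (T, H)
        "vertical": video[:, :, col],
        # main diagonal of each frame over time -> shape (T, min(H, W))
        "diagonal": np.stack([np.diagonal(f) for f in video]),
        # anti-diagonal of each frame over time
        "anti_diagonal": np.stack([np.diagonal(np.fliplr(f)) for f in video]),
    }

# toy example: a 10-frame, 32x48 synthetic video
video = np.random.rand(10, 32, 48)
sts = extract_sts_images(video)
print(sts["horizontal"].shape)  # (10, 48)
print(sts["vertical"].shape)    # (10, 32)
print(sts["diagonal"].shape)    # (10, 32)
```

Each STS image is then an ordinary 2-D image, so 2-D perceptual features (feature-map moments, gradient-magnitude and Laplacian-of-Gaussian statistics) can be computed on it directly before being fed to the regressor.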

Paper Details

Date Published: 8 November 2018
PDF: 10 pages
Proc. SPIE 10817, Optoelectronic Imaging and Multimedia Technology V, 108171D (8 November 2018); doi: 10.1117/12.2503149
Author Affiliations:
Peng Yan, Xi'an Jiaotong Univ. (China)
Xuanqin Mou, Xi'an Jiaotong Univ. (China)


Published in SPIE Proceedings Vol. 10817:
Optoelectronic Imaging and Multimedia Technology V
Qionghai Dai; Tsutomu Shimura, Editor(s)

© SPIE. Terms of Use