
Proceedings Paper

Deep RNNs for video denoising
Author(s): Xinyuan Chen; Li Song; Xiaokang Yang

Paper Abstract

Video denoising can be described as the problem of mapping a sequence of noisy frames to a clean one. We propose a deep architecture based on the Recurrent Neural Network (RNN) for video denoising. The model learns a patch-based end-to-end mapping between clean and noisy video sequences: it takes corrupted video sequences as input and outputs clean ones. Our deep network, which we refer to as the deep Recurrent Neural Network (deep RNN or DRNN), stacks RNN layers, where each layer receives the hidden state of the previous layer as input. Experiments show that (i) the recurrent architecture extracts motion information across the temporal domain, which benefits video denoising; (ii) the deep architecture has sufficient capacity to express the mapping from corrupted input videos to clean output videos; and (iii) the model generalizes, learning different mappings from videos corrupted by different types of noise (e.g., Poisson-Gaussian noise). By training on large video databases, our method is competitive with some existing video denoising methods.
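The stacked-RNN architecture described in the abstract, where each layer consumes the hidden-state sequence of the layer below and the network maps a patch tracked across a short window of noisy frames to a clean patch estimate, can be sketched as follows. This is a minimal illustrative sketch in PyTorch only; the patch size, hidden width, layer count, output head, and use of nn.RNN are assumptions, since this page does not specify the paper's exact configuration.

# Minimal sketch of a stacked-RNN video denoiser in PyTorch.
# Assumptions (not specified on this page): patch size, hidden width,
# number of layers, and the plain nn.RNN cell are illustrative choices;
# the paper's exact architecture and training setup may differ.
import torch
import torch.nn as nn

class DeepRNNDenoiser(nn.Module):
    def __init__(self, patch_size=8, hidden_size=256, num_layers=3):
        super().__init__()
        self.patch_dim = patch_size * patch_size
        # Stacked RNN: each layer feeds its hidden-state sequence
        # to the layer above, as described in the abstract.
        self.rnn = nn.RNN(input_size=self.patch_dim,
                          hidden_size=hidden_size,
                          num_layers=num_layers,
                          batch_first=True)
        # Map the final hidden state back to a clean patch estimate.
        self.out = nn.Linear(hidden_size, self.patch_dim)

    def forward(self, noisy_patches):
        # noisy_patches: (batch, num_frames, patch_dim), i.e. the same
        # spatial patch observed across a short window of noisy frames.
        h, _ = self.rnn(noisy_patches)
        # Use the last time step to predict the denoised patch.
        return self.out(h[:, -1, :])

# Usage: denoise a batch of 8x8 patch trajectories over 5 frames.
model = DeepRNNDenoiser()
noisy = torch.randn(16, 5, 64)   # toy input standing in for real patches
clean_estimate = model(noisy)    # (16, 64) denoised patch estimates
print(clean_estimate.shape)

Training such a model end-to-end would pair corrupted patch sequences with their clean counterparts, e.g. minimizing a mean-squared-error loss; the abstract's "large video databases" supply those pairs.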

Paper Details

Date Published: 28 September 2016
PDF: 10 pages
Proc. SPIE 9971, Applications of Digital Image Processing XXXIX, 99711T (28 September 2016); doi: 10.1117/12.2239260
Author Affiliations:
Xinyuan Chen, Shanghai Jiao Tong Univ. (China)
Li Song, Shanghai Jiao Tong Univ. (China)
Xiaokang Yang, Shanghai Jiao Tong Univ. (China)


Published in SPIE Proceedings Vol. 9971:
Applications of Digital Image Processing XXXIX
Andrew G. Tescher, Editor(s)
