CSVideoNet: A Real-time End-to-end Learning Framework for High-frame-rate Video Compressive Sensing

Publication Type: Conference Proceedings
Year of Publication: 2018
Authors: Xu, K, Ren, F
Conference Name: IEEE Winter Conference on Applications of Computer Vision (WACV)
Date Published: 03/2018
Keywords: psclab
Abstract

This paper addresses the real-time encoding-decoding problem for high-frame-rate video compressive sensing (CS). Unlike prior works that perform reconstruction using iterative optimization-based approaches, we propose a non-iterative model, named “CSVideoNet”. CSVideoNet directly learns the inverse mapping of CS and reconstructs the original input in a single forward propagation. To overcome the limitations of existing CS cameras, we propose a multi-rate CNN and a synthesizing RNN to improve the trade-off between compression ratio (CR) and the spatial-temporal resolution of the reconstructed videos. The experimental results demonstrate that CSVideoNet significantly outperforms the state-of-the-art approaches. With no pre/post-processing, we achieve 25 dB PSNR recovery quality at 100x CR, with a frame rate of 125 fps on a Titan X GPU. Due to its feedforward architecture and high data concurrency, CSVideoNet can take advantage of GPU acceleration to achieve a three-orders-of-magnitude speed-up over conventional iterative approaches. We share the source code at https://github.com/PSCLab-ASU/CSVideoNet.
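The single-forward-pass idea described in the abstract (a per-frame CNN that recovers spatial detail from compressed measurements, followed by an RNN that synthesizes the reconstruction across frames) can be illustrated with a minimal PyTorch sketch. This is a hedged toy under assumed parameters, not the paper's multi-rate CNN or its actual layer configuration: the class name `TinyCSVideoSketch`, the 16x16 block size, and all layer widths are hypothetical placeholders chosen only to show the data flow.

```python
import torch
import torch.nn as nn

class TinyCSVideoSketch(nn.Module):
    """Toy CS-video decoder: learned linear inverse + per-frame CNN + temporal RNN.
    All sizes are illustrative assumptions, not the parameters used in CSVideoNet."""
    def __init__(self, block=16, cr=100, feat=512):
        super().__init__()
        m = max(1, block * block // cr)           # measurements per block at the given CR
        self.fc = nn.Linear(m, block * block)     # linear proxy for the learned inverse mapping
        self.cnn = nn.Sequential(                 # refine each frame's initial reconstruction
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        self.rnn = nn.LSTM(block * block, feat, batch_first=True)  # fuse across frames
        self.out = nn.Linear(feat, block * block)
        self.block = block

    def forward(self, y):                         # y: (batch, frames, measurements per block)
        b, t, _ = y.shape
        x = self.fc(y).view(b * t, 1, self.block, self.block)
        x = self.cnn(x).view(b, t, -1)            # per-frame spatial features
        h, _ = self.rnn(x)                        # temporal synthesis
        return self.out(h).view(b, t, self.block, self.block)

net = TinyCSVideoSketch()
y = torch.randn(2, 8, 2)   # 2 clips, 8 frames, 2 measurements per 16x16 block
frames = net(y)            # reconstructed blocks, shape (2, 8, 16, 16)
```

Because reconstruction is a single feedforward pass with no iterative solver, the whole clip can be decoded in one batched GPU call, which is the property the abstract credits for the reported 125 fps throughput.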

DOI: 10.1109/WACV.2018.00187