Compressive sensing (CS) is a signal sensing technique that senses signals in a compressed manner to save sensing and transmission costs. The sensing step in CS is a simple linear mapping of the original signal, while the reconstruction step is a complicated inverse problem. Most existing CS reconstruction methods formulate reconstruction as an optimization problem and search for the solution iteratively; we refer to them as iterative reconstruction methods. Recently, as neural networks have proven to be powerful tools for approximation and generation tasks, many neural network models have been proposed to approximate the inverse mapping of CS directly. We refer to neural network models that directly map CS measurements to reconstruction results as end-to-end data-driven CS reconstruction (EDCSR) frameworks. Compared with conventional iterative reconstruction methods, EDCSR frameworks offer significant improvements in both reconstruction speed and accuracy, especially at high compression ratios (CRs), making real-time, high-accuracy image CS reconstruction possible.
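The linear sensing step described above can be illustrated with a minimal NumPy sketch. All dimensions and variable names here are assumptions for illustration, not values from this work; the sensing matrix is a random Gaussian matrix, a common choice in the CS literature.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256  # original signal dimension (assumed for illustration)
m = 64   # number of measurements, so CR = n / m = 4

x = rng.standard_normal(n)                    # original signal
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix

# Sensing is a simple linear mapping: m measurements from an n-dimensional signal.
y = A @ x
```

Reconstruction is the hard direction: recovering the n-dimensional `x` from the m-dimensional `y` is an underdetermined inverse problem, which is what iterative methods and EDCSR frameworks solve.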
Allowing for a variable CR that can adapt to the available battery level, storage space, or communication bandwidth at run time is critical to many resource-constrained CS applications. Unfortunately, a major limitation of existing EDCSR frameworks is that, once trained, they can only perform reconstruction at a fixed CR. For reconstruction at a different CR, an EDCSR framework must be retrained at that CR from scratch, which greatly limits its use in variable-CR scenarios.
In this research, we propose to apply the concept of estimate resensing to empower EDCSR frameworks with adaptability to a variable CR. Our approach is structured as a generic CR adapter (CRA) that can be independently applied to existing EDCSR frameworks with no modification to a given reconstruction model and no need for enormous rounds of training. Given user-defined lower and upper bounds of the CR, CRA exploits an initial reconstruction network, trained at the highest CR, to generate an initial estimate of the reconstruction result from the sensed measurements. Subsequently, CRA approximates full measurements for the main reconstruction network, which is trained at the lowest CR, by complementing the sensed measurements available at any intermediate CR with the resensed initial estimate. As such, CRA enables flexible reconstruction with an arbitrary number of measurements and extends the supported CR range to the user-defined lower and upper bounds at a fine granularity. The main advantage of CRA is that it is generic and provides an approximately linear trade-off between the number of measurements and the reconstruction accuracy for all EDCSR frameworks. Our experiments on two public datasets show that CRA provides an average of 13.02 dB and 5.38 dB PSNR improvement across CRs of 5-30 compared with a naive zero-padding approach and the prior work, respectively. The proposed CRA approach addresses a major limitation of existing EDCSR frameworks and makes them suitable for resource-constrained application scenarios.
The contributions of this research are two-fold. First, we propose a simple yet effective approach to empower EDCSR frameworks with adaptability to a variable CR, making them suitable for resource-constrained CS application scenarios. The proposed CRA significantly improves the reconstruction accuracy of existing EDCSR frameworks under a variable CR compared to a naive zero-padding approach and the prior work. Second, our approach is generic to all EDCSR frameworks and can empower them to handle a variable CR at run time with no modification to the given network model and no enormous training time.
This work is supported by an NSF grant (IIS/CPS-1652038) and an unrestricted gift (CG#1319167) from Cisco Research Center. The NVIDIA TITAN X GPUs used for this research were donated by the NVIDIA Corporation.