Build a Compact Binary Neural Network through Bit-level Sensitivity and Data Pruning

Title: Build a Compact Binary Neural Network through Bit-level Sensitivity and Data Pruning
Publication Type: Journal Article
Year of Publication: 2020
Authors: Li, Y, Zhang, S, Zhou, X, Ren, F
Journal: Neurocomputing
Volume: 398
Pagination: 45-54
Date Published: 07/2020
Keywords (or New Research Field): psclab
Abstract

Due to the high computational complexity and memory storage requirements, it is hard to directly deploy a full-precision convolutional neural network (CNN) on embedded devices. Hardware-friendly designs are needed for resource-limited and energy-constrained embedded devices. Emerging solutions have been adopted for neural network compression, e.g., binary/ternary weight networks, pruned networks, and quantized networks. Among them, the binary neural network (BNN) is believed to be the most hardware-friendly framework due to its small network size and low computational complexity. No existing work has further shrunk the size of a BNN. In this work, we explore the redundancy in BNNs and build a compact BNN (CBNN) based on bit-level sensitivity analysis and bit-level data pruning. The input data is converted to a high-dimensional bit-sliced format. In the post-training stage, we analyze the impact of different bit slices on the accuracy. By pruning the redundant input bit slices and shrinking the network size, we are able to build a more compact BNN. Our results show that we can further scale down the network size of the BNN by up to 3.9x with no more than a 1% accuracy drop. The actual runtime can be reduced by up to 2x and 9.9x compared with the baseline BNN and its full-precision counterpart, respectively.
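
The two steps named in the abstract, converting inputs to a bit-sliced format and pruning slices by their measured accuracy impact, can be illustrated with a minimal Python sketch. This is not the paper's implementation: the names to_bit_slices, sensitivity_prune, the evaluate() callback, and the max_drop threshold are assumptions introduced here for illustration only.

    # Illustrative sketch (not the authors' code): bit-slice 8-bit inputs and
    # prune slices whose removal costs little accuracy in the post-training stage.
    import numpy as np

    def to_bit_slices(x_uint8):
        """Convert a batch of 8-bit inputs of shape (N, H, W, C) into 8 binary
        bit slices, returning an array of shape (8, N, H, W, C) with values in {0, 1}."""
        return np.stack([(x_uint8 >> b) & 1 for b in range(8)], axis=0)

    def sensitivity_prune(bit_slices, evaluate, max_drop=0.01):
        """Bit-level sensitivity analysis: zero out each bit slice in turn,
        measure the accuracy impact via the user-supplied evaluate() callback,
        and keep only the slices whose removal costs more than max_drop accuracy."""
        base_acc = evaluate(bit_slices)
        keep = []
        for b in range(bit_slices.shape[0]):
            pruned = bit_slices.copy()
            pruned[b] = 0                      # remove this bit slice
            drop = base_acc - evaluate(pruned)
            if drop > max_drop:
                keep.append(b)                 # slice is sensitive; keep it
        return keep

In this sketch, evaluate() stands in for any routine that runs the trained BNN on validation data and returns its accuracy; the returned keep list indicates which input bit slices the compact BNN would retain.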