Hardware-friendly Deep Learning for Edge Computing

Title: Hardware-friendly Deep Learning for Edge Computing
Publication Type: Thesis
Year of Publication: 2021
Authors: Li, YI
Academic Department: School of Computing and Augmented Intelligence
Degree: Doctor of Philosophy in Computer Engineering
Date Published: 05/2021
University: Arizona State University
City: Tempe
Keywords (or New Research Field): psclab
Abstract

The Internet-of-Things (IoT) generates a vast amount of streaming data. However, even accounting for the growth of cloud computing infrastructure, IoT devices will produce two orders of magnitude more data than centralized data center servers can process or store. This trend inevitably calls for offloading IoT data processing to a decentralized edge computing infrastructure. Meanwhile, deep-learning-based applications have made great progress by leveraging heavy centralized computing resources to train large models for increasingly complicated tasks. Although large-scale deep learning models perform well in terms of accuracy, their high computational complexity makes it infeasible to deploy them on edge devices for real-time inference and timely response.

To enable timely IoT services on edge devices, this dissertation addresses the challenge from two perspectives. On the hardware side, a new FPGA-based framework for binary neural networks and an ASIC accelerator for natural scene text interpretation are proposed, designed with awareness of the limited computing resources and power budget at the edge. On the algorithm side, this work presents methodologies both for building more compact models and for finding better computation-accuracy trade-offs for existing models.