|Title||Hardware-friendly Deep Learning for Edge Computing|
|Year of Publication||2021|
|Academic Department||School of Computing and Augmented Intelligence|
|Degree||Doctor of Philosophy in Computer Engineering|
|University||Arizona State University|
|Keywords (or New Research Field)||psclab|
The Internet-of-Things (IoT) is producing a vast amount of streaming data. However, even accounting for the growth of cloud computing infrastructure, IoT devices will generate two orders of magnitude more data than centralized data center servers can process or store. This trend inevitably calls for offloading IoT data processing to a decentralized edge computing infrastructure. On the other hand, deep-learning-based applications have made great progress by taking advantage of heavy centralized computing resources to train large models for increasingly complicated tasks. Even though large-scale deep learning models perform well in terms of accuracy, their high computational complexity makes it impossible to offload them onto edge devices for real-time inference and timely response.
To enable timely IoT services on edge devices, this dissertation addresses the challenge from two perspectives. On the hardware side, a new FPGA-based framework for binary neural networks and an ASIC accelerator for natural scene text interpretation are proposed, with awareness of the computing resources and power constraints at the edge. On the algorithm side, this work presents methodologies both for building more compact models and for finding better computation-accuracy trade-offs for existing models.