
Complexity-Aware Layer-Wise Mixed-Precision Schemes With SQNR-Based Fast Analysis

Authors
Kim, Hana; Eun, Hyun; Choi, Jung Hwan; Kim, Ji-Hoon
Ewha Authors
김지훈
SCOPUS Author ID
김지훈 (Scopus)
Issue Date
2023
Journal Title
IEEE ACCESS
ISSN
2169-3536
Citation
IEEE ACCESS vol. 11, pp. 117800 - 117809
Keywords
Hardware; Quantization (signal); Artificial neural networks; Sensitivity; Computational modeling; Training; Q-factor; Deep neural network (DNN); mixed-precision; signal to quantization noise ratio (SQNR); complexity-awareness
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Indexed
SCIE; SCOPUS; WOS
Document Type
Article
Abstract
Recently, deep neural network (DNN) acceleration has become critical for hardware systems ranging from mobile/edge devices to high-performance data centers. In particular, for on-device AI, there have been many studies on reducing hardware numerical precision to cope with the limited hardware resources of mobile/edge devices. Although layer-wise mixed precision reduces computational complexity, finding a well-balanced layer-wise precision scheme is not straightforward: determining the optimal precision for each layer requires time-consuming repetitive experiments, and model accuracy, the fundamental measure of deep learning quality, must be considered as well. In this paper, we propose a layer-wise mixed-precision scheme that significantly reduces the time required to determine the optimal hardware numerical precision through Signal-to-Quantization Noise Ratio (SQNR)-based analysis. In addition, the proposed scheme can take hardware complexity into consideration in terms of the number of operations (OPs) or the weight memory requirement of each layer. The proposed method can be applied directly at inference time, meaning that users can utilize well-trained neural network models without additional training or hardware units. With the proposed SQNR-based analysis, for the SSDlite and YOLOv2 networks, the analysis time required for layer-wise precision determination is reduced by more than 95% compared to conventional mean Average Precision (mAP)-based analysis. Also, with the proposed complexity-aware schemes, the number of OPs and the weight memory requirement can be reduced by up to 86.14% and 78.03%, respectively, for SSDlite, and by up to 51.93% and 50.62%, respectively, for YOLOv2, with negligible model accuracy degradation.
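The SQNR-based analysis described in the abstract estimates, per layer, how much noise a candidate bit-width introduces relative to the signal, which is far cheaper than re-measuring mAP for every precision assignment. As a rough illustration only (not the authors' implementation), the SQNR of a symmetric uniform quantizer applied to a layer's weight tensor can be computed as follows; the layer names and tensors here are hypothetical:

```python
import numpy as np

def quantize(x, bits):
    # Symmetric uniform quantizer: scale so the largest magnitude
    # maps to the largest representable signed level.
    scale = np.max(np.abs(x)) / (2 ** (bits - 1) - 1)
    return np.round(x / scale) * scale

def sqnr_db(x, bits):
    # SQNR = 10 * log10(signal power / quantization-noise power)
    noise = x - quantize(x, bits)
    return 10 * np.log10(np.sum(x ** 2) / np.sum(noise ** 2))

# Hypothetical layers: comparing SQNR across candidate bit-widths
# suggests which layers tolerate lower precision.
rng = np.random.default_rng(0)
layers = {"conv1": rng.normal(0.0, 0.1, 1000),
          "conv2": rng.normal(0.0, 1.0, 1000)}
for name, w in layers.items():
    print(name, {b: round(sqnr_db(w, b), 1) for b in (4, 6, 8)})
```

Each extra bit adds roughly 6 dB of SQNR for a uniform quantizer, so layers whose SQNR stays high even at low bit-widths are natural candidates for aggressive precision reduction.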
DOI
10.1109/ACCESS.2023.3325402
Appears in Collections:
College of Engineering > Department of Electronic and Electrical Engineering > Journal papers
Files in This Item:
There are no files associated with this item.