ShanghaiTech University Knowledge Management System
Drift: Leveraging Distribution-based Dynamic Precision Quantization for Efficient Deep Neural Network Acceleration
2024-11-07
Proceedings Title | PROCEEDINGS - DESIGN AUTOMATION CONFERENCE
ISSN | 0738-100X |
Publication Status | Published
DOI | 10.1145/3649329.3655986 |
Abstract | Quantization is one of the most hardware-efficient ways to reduce inference costs for deep neural network (DNN) models. Nevertheless, with the continuous growth of DNN model sizes (240× in two years) and the emergence of large language models, existing static quantization methods fail to sufficiently exploit the sparsity and redundancy of models. Motivated by the pervasive dynamism in data tensors across DNN models, we propose a dynamic precision quantization algorithm that further reduces computational costs beyond statically quantized DNN models. Furthermore, we find that existing precision-flexible accelerators cannot support DNN models with dynamic precision. To this end, we design a novel accelerator, Drift, with online scheduling to efficiently support dynamic-precision execution. We conduct experiments on various DNN models, including CNN-based and Transformer-based models. Evaluation results show that Drift achieves a 2.85× speedup and 3.12× energy saving compared to existing precision-flexible accelerators running statically quantized models. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
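For illustration, the following minimal Python sketch shows one way a distribution-based dynamic precision scheme could select a per-tile bit-width at runtime from the value distribution of an activation tensor, then quantize each tile at the chosen precision. The function names, thresholds, and bit-width choices are hypothetical assumptions for exposition only, not the algorithm or accelerator scheduling scheme published in the Drift paper.

```python
# Illustrative sketch of distribution-based dynamic precision quantization.
# Thresholds and bit-widths below are assumed for exposition, not Drift's.
import numpy as np

def choose_bitwidth(tile: np.ndarray) -> int:
    """Pick a bit-width from the tile's value distribution: tiles whose
    mass concentrates near zero tolerate fewer bits (hypothetical rule)."""
    rng_max = np.max(np.abs(tile)) + 1e-12
    # Fraction of values that are small relative to the tile's range.
    small_frac = np.mean(np.abs(tile) < 0.1 * rng_max)
    if small_frac > 0.9:      # almost-sparse tile: aggressive precision
        return 4
    elif small_frac > 0.5:    # moderately concentrated distribution
        return 6
    return 8                  # wide distribution: keep baseline precision

def quantize(tile: np.ndarray, bits: int) -> tuple[np.ndarray, float]:
    """Symmetric uniform quantization to signed `bits`-bit integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = (np.max(np.abs(tile)) + 1e-12) / qmax
    q = np.clip(np.round(tile / scale), -qmax - 1, qmax)
    return q.astype(np.int8), scale

# Per-tile dynamic precision over one activation tensor.
rng = np.random.default_rng(0)
act = rng.laplace(scale=0.05, size=(4, 64))   # activations cluster near zero
for i, tile in enumerate(act):
    bits = choose_bitwidth(tile)
    q, s = quantize(tile, bits)
    print(f"tile {i}: {bits}-bit, scale={s:.4f}")
```

Because the bit-width decision depends on each input's actual distribution, it must be made online, which is why a static precision-flexible accelerator alone does not suffice and the paper pairs the algorithm with hardware scheduling support.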
Proceedings Editors / Conference Sponsors | ACM Special Interest Group on Design Automation (SIGDA); ACM Special Interest Group on Embedded Systems (SIGBED); IEEE-CEDA
Keywords | Neural network models; Computational costs; Dynamic precision; Evaluation results; Language model; Model size; Neural network model; Neural-networks; Online scheduling; Quantisation; Quantization algorithms
Conference Name | 61st ACM/IEEE Design Automation Conference, DAC 2024
Conference Location | San Francisco, CA, United States
Conference Dates | June 23, 2024 - June 27, 2024
Indexed By | EI
Language | English
Publisher | Institute of Electrical and Electronics Engineers Inc.
EI Accession Number | 20245017501408
EI Controlled Terms | Deep neural networks
EI Classification Codes | 1101; 1101.2.1
Original Document Type | Conference article (CA)
Document Type | Conference paper
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/461537
Collection | School of Information Science and Technology; School of Information Science and Technology_Master's Students
Corresponding Author | Wang, Ying
Author Affiliations | 1. CICS, Institute of Computing Technology, Chinese Academy of Sciences, China; 2. SKLP, Institute of Computing Technology, Chinese Academy of Sciences, China; 3. University of Chinese Academy of Sciences, China; 4. Zhongguancun National Laboratory, China; 5. School of Information Science and Technology, ShanghaiTech University, China; 6. Shanghai Innovation Center for Processor Technologies, China
Recommended Citation (GB/T 7714) | Liu, Lian, Xu, Zhaohui, He, Yintao, et al. Drift: Leveraging Distribution-based Dynamic Precision Quantization for Efficient Deep Neural Network Acceleration[C]//ACM Special Interest Group on Design Automation (SIGDA), ACM Special Interest Group on Embedded Systems (SIGBED), IEEE-CEDA: Institute of Electrical and Electronics Engineers Inc., 2024.
Files in This Item
File Name/Size | Document Type | Version Type | Access Type | License