Drift: Leveraging Distribution-based Dynamic Precision Quantization for Efficient Deep Neural Network Acceleration
2024-11-07
Proceedings Title: PROCEEDINGS - DESIGN AUTOMATION CONFERENCE
ISSN: 0738-100X
Publication Status: Published
DOI: 10.1145/3649329.3655986
Abstract

Quantization is one of the most hardware-efficient ways to reduce the inference cost of deep neural network (DNN) models. Nevertheless, with the continuous growth of DNN model sizes (240× in two years) and the emergence of large language models, existing static quantization methods fail to sufficiently exploit the sparsity and redundancy of models. Motivated by the pervasive dynamism in data tensors across DNN models, we propose a dynamic precision quantization algorithm that further reduces computational cost beyond statically quantized DNN models. Furthermore, we find that existing precision-flexible accelerators cannot support DNN models with dynamic precision. To this end, we design a novel accelerator, Drift, and achieve online scheduling to efficiently support dynamic precision execution. We conduct experiments with various DNN models, including CNN-based and Transformer-based models. Evaluation results show that Drift achieves a 2.85× speedup and 3.12× energy saving compared to existing precision-flexible accelerators running statically quantized models. © 2024 Copyright is held by the owner/author(s). Publication rights licensed to ACM.
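The abstract's central idea, choosing a quantization precision from the value distribution of each tensor rather than fixing it ahead of time, can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the function names, candidate bit-widths, and relative-error budget below are illustrative assumptions only.

```python
import numpy as np

def quantize_symmetric(x, bits):
    """Uniform symmetric quantization of array `x` to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    amax = np.abs(x).max()
    scale = amax / qmax if amax > 0 else 1.0
    # Round to the integer grid, clip to the representable range, rescale.
    return np.round(x / scale).clip(-qmax, qmax) * scale

def choose_bitwidth(x, candidates=(4, 8, 16), max_rel_err=0.01):
    """Pick the smallest candidate bit-width whose mean quantization
    error stays within a relative-error budget (a simple heuristic,
    not the distribution analysis used in the paper)."""
    for bits in candidates:
        q = quantize_symmetric(x, bits)
        rel_err = np.abs(q - x).mean() / (np.abs(x).mean() + 1e-12)
        if rel_err <= max_rel_err:
            return bits
    return candidates[-1]
```

A tensor with a narrow, well-behaved distribution passes the error budget at a low bit-width, while a tensor with large outliers forces a higher one; run online per tensor, this yields the dynamic precision that a static scheme cannot express.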

Proceedings Editors / Conference Organizers: ACM Special Interest Group on Design Automation (SIGDA); ACM Special Interest Group on Embedded Systems (SIGBED); IEEE-CEDA
Keywords: Neural network models; Computational costs; Dynamic precision; Evaluation results; Language model; Model size; Neural network model; Neural-networks; Online scheduling; Quantisation; Quantization algorithms
Conference Name: 61st ACM/IEEE Design Automation Conference, DAC 2024
Conference Location: San Francisco, CA, United States
Conference Dates: June 23, 2024 - June 27, 2024
Indexed By: EI
Language: English
Publisher: Institute of Electrical and Electronics Engineers Inc.
EI Accession Number: 20245017501408
EI Controlled Terms: Deep neural networks
EI Classification Codes: 1101; 1101.2.1
Original Document Type: Conference article (CA)
Document Type: Conference paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/461537
Collections: School of Information Science and Technology; School of Information Science and Technology_Master's Students
Corresponding Author: Wang, Ying
Author Affiliations:
1. CICS, Institute of Computing Technology, Chinese Academy of Sciences, China;
2. SKLP, Institute of Computing Technology, Chinese Academy of Sciences, China;
3. University of Chinese Academy of Sciences, China;
4. Zhongguancun National Laboratory, China;
5. School of Information Science and Technology, ShanghaiTech University, China;
6. Shanghai Innovation Center for Processor Technologies, China
Recommended Citation
GB/T 7714
Liu, Lian, Xu, Zhaohui, He, Yintao, et al. Drift: Leveraging Distribution-based Dynamic Precision Quantization for Efficient Deep Neural Network Acceleration[C]//ACM Special Interest Group on Design Automation (SIGDA), ACM Special Interest Group on Embedded Systems (SIGBED), IEEE-CEDA: Institute of Electrical and Electronics Engineers Inc., 2024.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.