ShanghaiTech University Knowledge Management System
Research on LLM Acceleration Using the High-Performance RISC-V Processor "Xiangshan" (Nanhu Version) Based on the Open-Source Matrix Instruction Set Extension (Vector Dot Product)
2024-09-01
Status | Published |
Abstract | Considering the high-performance and low-power requirements of edge AI, this study designs a specialized instruction-set processor for edge AI based on the RISC-V instruction set architecture, addressing practical issues in digital signal processing for edge devices. The design improves the execution efficiency of edge AI and reduces its energy consumption with limited hardware overhead, meeting the demand for efficient large language model (LLM) inference in edge AI applications. The main contributions are as follows: targeting the characteristics of large language models, custom instructions were extended on the RISC-V instruction set to perform vector dot-product calculations, accelerating LLM computation on dedicated vector dot-product hardware. Based on the open-source high-performance RISC-V processor core XiangShan (Nanhu architecture), the vector dot-product specialized instruction-set processor Nanhu-vdot was implemented, which adds vector dot-product calculation units and pipeline processing logic on top of the XiangShan Nanhu core. Nanhu-vdot underwent FPGA hardware testing, achieving over four times the speed of scalar methods in vector dot-product computation. Using a hardware-software co-design approach for inference of the second-generation Generative Pre-trained Transformer (GPT-2) model, speed improved by approximately 30% over a pure software implementation, with almost no additional hardware resource or power consumption. |
Keywords | Instruction set extension; vector dot product; software and hardware collaboration; Large Language Model Inference |
arXiv ID | arXiv:2409.00661 |
Source | arXiv |
WOS Record ID | PPRN:91713034 |
WOS Category | Computer Science, Hardware & Architecture |
Document Type | Preprint |
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/421353 |
Collection | School of Information Science and Technology |
Corresponding Author | Tang, Dan |
Author Affiliations | 1. Beijing Inst Open Source Chip, Beijing 100080, Peoples R China; 2. ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 210210, Peoples R China; 3. Zhengzhou Univ, Henan Inst Adv Technol, Zhengzhou 450003, Henan, Peoples R China; 4. Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China; 5. Univ Chinese Acad Sci, Beijing 100049, Peoples R China |
Recommended Citation (GB/T 7714) | Chen, Xu-Hao, Hu, Si-Peng, Liu, Hong-Chao, et al. Research on LLM Acceleration Using the High-Performance RISC-V Processor "Xiangshan" (Nanhu Version) Based on the Open-Source Matrix Instruction Set Extension (Vector Dot Product). 2024. |
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.