Research on LLM Acceleration Using the High-Performance RISC-V Processor "Xiangshan" (Nanhu Version) Based on the Open-Source Matrix Instruction Set Extension (Vector Dot Product)
2024-09-01
Status: Published
Abstract

Considering the high-performance and low-power requirements of edge AI, this study designs a specialized instruction-set processor for edge AI based on the RISC-V instruction set architecture, addressing practical issues in digital signal processing for edge devices. The design improves the execution efficiency of edge AI and reduces its energy consumption with limited hardware overhead, meeting the demand for efficient large language model (LLM) inference in edge AI applications. The main contributions of this paper are as follows. Targeting the characteristics of large language models, custom instructions were added to the RISC-V instruction set to perform vector dot-product calculations, accelerating LLM computation on dedicated vector dot-product hardware. Based on the open-source high-performance RISC-V processor core XiangShan (Nanhu architecture), the vector dot-product specialized instruction-set processor Nanhu-vdot was implemented, which adds vector dot-product calculation units and pipeline-processing logic on top of XiangShan Nanhu. In FPGA hardware tests, Nanhu-vdot achieved more than four times the speed of the scalar method for vector dot-product computation. Using a hardware-software co-design approach for inference of the second-generation Generative Pre-trained Transformer (GPT-2) model, inference speed improved by approximately 30% over the pure-software implementation, with almost no additional hardware resources or power consumption.

Keywords: instruction set extension; vector dot product; software and hardware collaboration; large language model inference
DOI: arXiv:2409.00661
Source: arXiv
WOS Record ID: PPRN:91713034
WOS Category: Computer Science, Hardware & Architecture
Document Type: Preprint
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/421353
Collection: School of Information Science and Technology
Corresponding Author: Tang, Dan
Author Affiliations:
1.Beijing Inst Open Source Chip, Beijing 100080, Peoples R China
2.ShanghaiTech Univ, Sch Informat Sci & Technol, Shanghai 210210, Peoples R China
3.Zhengzhou Univ, Henan Inst Adv Technol, Zhengzhou 450003, Henan, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing 100190, Peoples R China
5.Univ Chinese Acad Sci, Beijing 100049, Peoples R China
Recommended Citation (GB/T 7714):
Chen, Xu-Hao,Hu, Si-Peng,Liu, Hong-Chao,et al. Research on LLM Acceleration Using the High-Performance RISC-V Processor "Xiangshan" (Nanhu Version) Based on the Open-Source Matrix Instruction Set Extension (Vector Dot Product). 2024.
 
