COMET: Towards Practical W4A4KV4 LLMs Serving
Year: 2025
Proceedings: INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS - ASPLOS
Pages: 131-146
Publication Status: Published
DOI: 10.1145/3676641.3716252
Abstract

Quantization is a widely-used compression technology to reduce the overhead of serving large language models (LLMs) on terminal devices and in cloud data centers. However, prevalent quantization methods, such as 8-bit weight-activation or 4-bit weight-only quantization, achieve limited performance improvements due to poor support for low-precision (e.g., 4-bit) activation. This work, for the first time, realizes practical W4A4KV4 serving for LLMs, fully utilizing the INT4 tensor cores on modern GPUs and reducing the memory bottleneck caused by the KV cache. Specifically, we propose a novel fine-grained mixed-precision quantization algorithm (FMPQ) that compresses most activations into 4-bit with negligible accuracy loss. To support mixed-precision matrix multiplication for W4A4 and W4A8, we develop a highly optimized W4Ax kernel. Our approach introduces a novel mixed-precision data layout to facilitate access and fast dequantization for activation and weight tensors, utilizing the GPU's software pipeline to hide the overhead of data loading and conversion. Additionally, we propose fine-grained streaming multiprocessor (SM) scheduling to achieve load balance across different SMs. We integrate the optimized W4Ax kernel into our inference framework, COMET, and provide efficient management to support popular LLMs such as LLaMA-3-70B. Extensive evaluations demonstrate that, when running LLaMA family models on a single A100-80G-SMX4, COMET achieves a kernel-level speedup of 2.88x over cuBLAS and a 2.02x throughput improvement compared to TensorRT-LLM from an end-to-end framework perspective. © 2025 ACM.
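The abstract's central algorithmic idea, FMPQ, keeps most activation groups in INT4 and falls back to higher precision only where needed. Below is a minimal NumPy sketch of that per-group selection; the group size, the max/mean outlier test, and the threshold are illustrative assumptions, since this record does not state FMPQ's actual selection rule.

```python
import numpy as np

def quantize_group(x, bits):
    """Symmetric quantization of one group to a signed `bits`-bit grid.
    Returns integer codes plus the scale needed for dequantization."""
    qmax = 2 ** (bits - 1) - 1
    absmax = float(np.abs(x).max())
    scale = absmax / qmax if absmax > 0 else 1.0
    codes = np.clip(np.rint(x / scale), -qmax - 1, qmax).astype(np.int8)
    return codes, scale

def mixed_precision_quantize(act, group_size=128, outlier_ratio=8.0):
    """Per-group INT4/INT8 selection for a flat activation tensor.
    A group whose max magnitude dwarfs its mean magnitude is treated
    as outlier-heavy and kept at 8 bits; everything else drops to 4.
    The grouping and ratio test are illustrative stand-ins, not the
    paper's actual FMPQ criterion."""
    groups = act.reshape(-1, group_size)
    plan = []
    for g in groups:
        ratio = np.abs(g).max() / (np.abs(g).mean() + 1e-8)
        bits = 8 if ratio > outlier_ratio else 4
        codes, scale = quantize_group(g, bits)
        plan.append((codes, scale, bits))
    return plan

# A planted outlier forces its group to INT8; the rest stay INT4.
rng = np.random.default_rng(0)
x = rng.normal(size=1024).astype(np.float32)
x[3] = 40.0
print([bits for _, _, bits in mixed_precision_quantize(x)])
```

A mixed-precision W4A4/W4A8 kernel such as the paper's W4Ax would then consume these per-group codes and scales, which is why the abstract emphasizes a data layout that makes dequantization cheap on INT4 tensor cores.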

Proceedings Editors / Conference Sponsors: ACM SIGARCH; ACM SIGOPS; ACM SIGPLAN
Keywords: Cache memory; Compaction; Computer graphics equipment; Graphics processing unit; Integrated circuit design; Modeling languages; Problem oriented languages; Algorithm system co-design; Bit weight; Co-designs; Language model; Large language model serving; Large language model quantization; Mixed precision; Modeling quantizations; Quantisation
Conference: 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2025
Place of Publication: 1601 Broadway, 10th Floor, New York, NY, United States
Conference Location: Rotterdam, Netherlands
Conference Dates: March 30 - April 3, 2025
Indexed By: EI; CPCI-S
Language: English
Funding: National Key R&D Program of China [2023YFB4404400]; National Natural Science Foundation of China [62222411, 62025404]
WOS Accession Number: WOS:001477004500009
Publisher: Association for Computing Machinery
EI Accession Number: 20251618244845
EI Subject Terms: Tensors
EI Classification Codes: 714.2 Semiconductor Devices and Integrated Circuits; 904 Design; 913.4 Manufacturing; 1102.3.1 Computer Circuits; 1103.1 Data Storage, Equipment and Techniques; 1103.2 Computer Peripheral Equipment; 1106.1.1 Computer Programming Languages; 1106.4 Database Systems; 1201.1 Algebra and Number Theory; 1201.4 Applied Mathematics; 1201.14 Geometry and Topology
Original Document Type: Conference article (CA)
Document Type: Conference Paper
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/523920
Collection: School of Information Science and Technology (Master's Students)
Corresponding Author: Wang, Ying
Author Affiliations:
1. Institute of Computing Technology, CAS, University of Chinese Academy of Sciences, Beijing, China;
2. North China Electric Power University, Beijing, China;
3. ShanghaiTech University, Shanghai, China;
4. Institute of Computing Technology, CAS, Beijing, China
Recommended Citation
GB/T 7714
Liu, Lian, Cheng, Long, Ren, Haimeng, et al. COMET: Towards Practical W4A4KV4 LLMs Serving[C]//ACM SIGARCH, ACM SIGOPS, ACM SIGPLAN. New York: Association for Computing Machinery, 2025: 131-146.