ShanghaiTech University Knowledge Management System
COMET: Towards Practical W4A4KV4 LLMs Serving
Year | 2025
Proceedings Title | INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS - ASPLOS
Pages | 131-146
Publication Status | Published
DOI | 10.1145/3676641.3716252 |
Abstract | Quantization is a widely used compression technology to reduce the overhead of serving large language models (LLMs) on terminal devices and in cloud data centers. However, prevalent quantization methods, such as 8-bit weight-activation or 4-bit weight-only quantization, achieve limited performance improvements due to poor support for low-precision (e.g., 4-bit) activation. This work, for the first time, realizes practical W4A4KV4 serving for LLMs, fully utilizing the INT4 tensor cores on modern GPUs and reducing the memory bottleneck caused by the KV cache. Specifically, we propose a novel fine-grained mixed-precision quantization algorithm (FMPQ) that compresses most activations into 4-bit with negligible accuracy loss. To support mixed-precision matrix multiplication for W4A4 and W4A8, we develop a highly optimized W4Ax kernel. Our approach introduces a novel mixed-precision data layout to facilitate access and fast dequantization for activation and weight tensors, utilizing the GPU's software pipeline to hide the overhead of data loading and conversion. Additionally, we propose fine-grained streaming multiprocessor (SM) scheduling to achieve load balance across different SMs. We integrate the optimized W4Ax kernel into our inference framework, COMET, and provide efficient management to support popular LLMs such as LLaMA-3-70B. Extensive evaluations demonstrate that, when running LLaMA family models on a single A100-80G-SMX4, COMET achieves a kernel-level speedup of 2.88x over cuBLAS and a 2.02x throughput improvement compared to TensorRT-LLM from an end-to-end framework perspective. © 2025 ACM.
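For readers unfamiliar with the quantization the abstract refers to, the following is a minimal, generic sketch of symmetric per-group INT4 quantization and dequantization. It is an illustration only, not the paper's FMPQ algorithm; the group size of 128 and the symmetric scaling rule are assumptions.

```python
import numpy as np

def quantize_int4(x, group_size=128):
    """Quantize a 1-D float array to INT4 codes in [-8, 7], one scale per group."""
    x = np.asarray(x, dtype=np.float32)
    pad = (-len(x)) % group_size              # pad so length is a multiple of group_size
    g = np.pad(x, (0, pad)).reshape(-1, group_size)
    # Symmetric scaling: map each group's max magnitude to the INT4 limit (7).
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid division by zero for all-zero groups
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale, len(x)

def dequantize_int4(q, scale, n):
    """Recover approximate float values from INT4 codes and per-group scales."""
    return (q.astype(np.float32) * scale).reshape(-1)[:n]

x = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, s, n = quantize_int4(x)
x_hat = dequantize_int4(q, s, n)
err = np.abs(x - x_hat).max()  # bounded by half a quantization step per group
```

With symmetric rounding, the reconstruction error is bounded by half a quantization step (0.5 × the group scale), which is why fine-grained (small-group) scaling, as in the paper's approach, loses less accuracy than a single per-tensor scale.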
Proceedings Editors / Conference Sponsors | ACM SIGARCH ; ACM SIGOPS ; ACM SIGPLAN
Keywords | Cache memory ; Compaction ; Computer graphics equipment ; Graphics processing unit ; Integrated circuit design ; Modeling languages ; Problem oriented languages ; Algorithm system co design ; Bit weight ; Co designs ; Language model ; Large language model serving ; Large language model quantization ; Mixed precision ; Modeling quantizations ; Quantisation
Conference Name | 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2025
Place of Publication | 1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES
Conference Location | Rotterdam, Netherlands
Conference Dates | March 30, 2025 - April 3, 2025
Indexed In | EI ; CPCI-S
Language | English
Funding Projects | National Key R&D Program of China [2023YFB4404400] ; National Natural Science Foundation of China [62222411, 62025404]
WOS Accession Number | WOS:001477004500009
Publisher | Association for Computing Machinery
EI Accession Number | 20251618244845
EI Main Heading | Tensors
EI Classification Codes | 714.2 Semiconductor Devices and Integrated Circuits ; 904 Design ; 913.4 Manufacturing ; 1102.3.1 Computer Circuits ; 1103.1 Data Storage, Equipment and Techniques ; 1103.2 Computer Peripheral Equipment ; 1106.1.1 Computer Programming Languages ; 1106.4 Database Systems ; 1201.1 Algebra and Number Theory ; 1201.4 Applied Mathematics ; 1201.14 Geometry and Topology
Original Document Type | Conference article (CA)
Document Type | Conference Paper
Item Identifier | https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/523920
Collection | School of Information Science and Technology - Master's Students
Corresponding Author | Wang, Ying
Author Affiliations | 1. Institute of Computing Technology, CAS, University of Chinese Academy of Sciences, Beijing, China; 2. North China Electric Power University, Beijing, China; 3. ShanghaiTech University, Shanghai, China; 4. Institute of Computing Technology, CAS, Beijing, China
Recommended Citation (GB/T 7714) | Liu, Lian, Cheng, Long, Ren, Haimeng, et al. COMET: Towards Practical W4A4KV4 LLMs Serving[C]//ACM SIGARCH, ACM SIGOPS, ACM SIGPLAN. 1601 Broadway, 10th Floor, NEW YORK, NY, UNITED STATES: Association for Computing Machinery, 2025: 131-146.
Files in This Item
File Name/Size | Document Type | Version | Access | License