COMET: Towards Practical W4A4KV4 LLMs Serving
2024-10-16
Status: Published
Abstract

Quantization is a widely used compression technology to reduce the overhead of serving large language models (LLMs) on terminal devices and in cloud data centers. However, prevalent quantization methods, such as 8-bit weight-activation or 4-bit weight-only quantization, achieve limited performance improvements due to poor support for low-precision (e.g., 4-bit) activation. This work, for the first time, realizes practical W4A4KV4 serving for LLMs, fully utilizing the INT4 tensor cores on modern GPUs and reducing the memory bottleneck caused by the KV cache. Specifically, we propose a novel fine-grained mixed-precision quantization algorithm (FMPQ) that compresses most activations into 4-bit with negligible accuracy loss. To support mixed-precision matrix multiplication for W4A4 and W4A8, we develop a highly optimized W4Ax kernel. Our approach introduces a novel mixed-precision data layout to facilitate access and fast dequantization for activation and weight tensors, utilizing the GPU's software pipeline to hide the overhead of data loading and conversion. Additionally, we propose fine-grained streaming multiprocessor (SM) scheduling to achieve load balance across different SMs. We integrate the optimized W4Ax kernel into our inference framework, COMET, and provide efficient management to support popular LLMs such as LLaMA-3-70B. Extensive evaluations demonstrate that, when running LLaMA family models on a single A100-80G-SMX4, COMET achieves a kernel-level speedup of 2.88× over cuBLAS and a 2.02× throughput improvement compared to TensorRT-LLM from an end-to-end framework perspective.
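The abstract's core idea is compressing activations to 4 bits at fine granularity. As a rough illustration of what fine-grained (per-group) INT4 quantization means, the sketch below quantizes a tensor group-by-group with one floating-point scale per group. This is a generic textbook scheme, not the paper's FMPQ algorithm; the function names and the group size of 128 are illustrative assumptions.

```python
import numpy as np

def quantize_int4_per_group(x, group_size=128):
    """Symmetric per-group INT4 quantization (illustrative, not FMPQ).

    Each group of `group_size` consecutive values shares one FP32 scale,
    chosen so the group's max magnitude maps to 7; codes lie in [-8, 7].
    """
    x = np.asarray(x, dtype=np.float32)
    groups = x.reshape(-1, group_size)
    scales = np.abs(groups).max(axis=1, keepdims=True) / 7.0
    scales = np.where(scales == 0, 1.0, scales)  # avoid divide-by-zero
    codes = np.clip(np.round(groups / scales), -8, 7).astype(np.int8)
    return codes, scales

def dequantize_int4_per_group(codes, scales, shape):
    """Recover an approximate FP32 tensor from codes and per-group scales."""
    return (codes.astype(np.float32) * scales).reshape(shape)

# Round-trip a toy activation tensor.
x = np.random.randn(4, 128).astype(np.float32)
codes, scales = quantize_int4_per_group(x, group_size=128)
x_hat = dequantize_int4_per_group(codes, scales, x.shape)
err = np.abs(x - x_hat).max()  # bounded by half a quantization step
```

Finer groups (smaller `group_size`) track activation outliers more closely at the cost of storing more scales, which is the general trade-off a fine-grained mixed-precision scheme navigates.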

DOI: arXiv:2410.12168
Source: arXiv
WOS Record Number: PPRN:114133659
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture
Document Type: Preprint
Item Identifier: https://kms.shanghaitech.edu.cn/handle/2MSLDSTB/446056
Collection: School of Information Science and Technology_Master's Students
Corresponding Author: Wang, Ying
Author Affiliations:
1.Univ Chinese Acad Sci, Inst Comp Technol, CAS, Beijing, Peoples R China
2.ShanghaiTech Univ, Shanghai, Peoples R China
3.North China Elect Power Univ, Beijing, Peoples R China
4.Chinese Acad Sci, Inst Comp Technol, State Key Lab Processors, Beijing, Peoples R China
Recommended Citation
GB/T 7714
Liu, Lian, Ren, Haimeng, Cheng, Long, et al. COMET: Towards Practical W4A4KV4 LLMs Serving. 2024.
 

Unless otherwise specified, all content in this system is protected by copyright, and all rights are reserved.