BiLLM: Pushing the Limit of Post-Training Quantization for LLMs • arXiv:2402.04291 • Published Feb 6, 2024
KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization • arXiv:2401.18079 • Published Jan 31, 2024
Towards Next-Level Post-Training Quantization of Hyper-Scale Transformers • arXiv:2402.08958 • Published Feb 14, 2024
OneBit: Towards Extremely Low-bit Large Language Models • arXiv:2402.11295 • Published Feb 17, 2024