Various Lengths, Constant Speed: Efficient Language Modeling with Lightning Attention Paper • 2405.17381 • Published May 27, 2024
MiniMax-01: Scaling Foundation Models with Lightning Attention Paper • 2501.08313 • Published Jan 14, 2025 • 258
CO2: Efficient Distributed Training with Full Communication-Computation Overlap Paper • 2401.16265 • Published Jan 29, 2024 • 1
Neural Architecture Search on Efficient Transformers and Beyond Paper • 2207.13955 • Published Jul 28, 2022 • 1
Lightning Attention-2: A Free Lunch for Handling Unlimited Sequence Lengths in Large Language Models Paper • 2401.04658 • Published Jan 9, 2024 • 26