view article Article Efficient LLM Pretraining: Packed Sequences and Masked Attention By sirluk • Oct 7, 2024 • 11
ModernBERT Collection Bringing BERT into modernity via both architecture changes and scaling • 3 items • Updated 18 days ago • 116