Article • Efficient LLM Pretraining: Packed Sequences and Masked Attention • By sirluk • Oct 7, 2024
Collection • ModernBERT: Bringing BERT into modernity via both architecture changes and scaling • 3 items
Article • Fine-tuning LLMs to 1.58bit: extreme quantization made easy • Sep 18, 2024
Article • Training and Finetuning Embedding Models with Sentence Transformers v3 • May 28, 2024
Paper • LLaMA Pro: Progressive LLaMA with Block Expansion • arXiv:2401.02415 • Published Jan 4, 2024
Paper • LLM360: Towards Fully Transparent Open-Source LLMs • arXiv:2312.06550 • Published Dec 11, 2023
Paper • Multimodal Foundation Models: From Specialists to General-Purpose Assistants • arXiv:2309.10020 • Published Sep 18, 2023
Paper • Scaling Laws for Sparsely-Connected Foundation Models • arXiv:2309.08520 • Published Sep 15, 2023
Paper • DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models • arXiv:2309.03883 • Published Sep 7, 2023
Paper • Platypus: Quick, Cheap, and Powerful Refinement of LLMs • arXiv:2308.07317 • Published Aug 14, 2023
Paper • Meta-Transformer: A Unified Framework for Multimodal Learning • arXiv:2307.10802 • Published Jul 20, 2023
Paper • On the Origin of LLMs: An Evolutionary Tree and Graph for 15,821 Large Language Models • arXiv:2307.09793 • Published Jul 19, 2023
Paper • PolyLM: An Open Source Polyglot Large Language Model • arXiv:2307.06018 • Published Jul 12, 2023
Paper • Extending Context Window of Large Language Models via Positional Interpolation • arXiv:2306.15595 • Published Jun 27, 2023