---
title: README
emoji: 👁
colorFrom: purple
colorTo: green
sdk: static
pinned: false
---

# HuggingFaceTB

This is the home of smol models (SmolLM) and high-quality pre-training datasets. We released:

- [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu): a version of the FineWeb dataset filtered for educational content; paper available [here](https://huggingface.co/papers/2406.17557).
- [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia): the largest open synthetic dataset, with 25B tokens and 30M samples. It contains synthetic textbooks, blog posts, and stories generated by Mixtral. Blog post available [here](https://huggingface.co/blog/cosmopedia).
- [Smollm-Corpus](https://huggingface.co/datasets/HuggingFaceTB/smollm-corpus): the pre-training corpus of SmolLM, consisting of **Cosmopedia v0.2**, **FineWeb-Edu dedup**, and **Python-Edu**. Blog post available [here](https://huggingface.co/blog/smollm).
- [SmolLM2 models](https://huggingface.co/collections/HuggingFaceTB/smollm2-checkpoints-6723884218bcda64b34d7db9): a series of strong small models in three sizes: 135M, 360M, and 1.7B.
- [SmolVLM](https://huggingface.co/HuggingFaceTB/SmolVLM-Instruct): a 2B-parameter Vision Language Model (VLM) built for on-device inference, using SmolLM2-1.7B as its language backbone. Blog post available [here](https://huggingface.co/blog/smolvlm).
- [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath): the best public math pre-training dataset, with 50B tokens of mathematical and problem-solving data.

**News 🗞️**

- **FineMath**: the best public math pre-training dataset, with 50B tokens of mathematical and problem-solving data: https://huggingface.co/datasets/HuggingFaceTB/finemath