
SmolLM2 CoT 360M GGUF on Custom Synthetic Data

SmolLM2 is a family of compact language models available in three sizes: 135M, 360M, and 1.7B parameters. They can solve a wide range of tasks while remaining lightweight enough to run on-device. Fine-tuning a language model like SmolLM2 involves several steps, from setting up the environment to training the model and saving the results. Below is a detailed, step-by-step guide based on the accompanying notebook.

How to use with Transformers

pip install transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
checkpoint = "prithivMLmods/SmolLM2-CoT-360M"

device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
# for multiple GPUs install accelerate and do `model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")`
model = AutoModelForCausalLM.from_pretrained(checkpoint).to(device)

messages = [{"role": "user", "content": "What is the capital of France?"}]
input_text = tokenizer.apply_chat_template(messages, tokenize=False)
inputs = tokenizer.encode(input_text, return_tensors="pt").to(device)
outputs = model.generate(inputs, max_new_tokens=50, temperature=0.2, top_p=0.9, do_sample=True)
print(tokenizer.decode(outputs[0]))
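
If the generated text continues the user's turn instead of answering, one optional variation (not part of the snippet above) is to let the chat template open the assistant turn:

# Optional: add_generation_prompt=True appends the assistant header so the
# model begins its answer immediately.
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)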

Step 1: Setting Up the Environment

Before diving into fine-tuning, you need to set up your environment with the necessary libraries and tools.

  1. Install Required Libraries:

    • Install the necessary Python libraries using pip. These include transformers, datasets, trl, torch, accelerate, bitsandbytes, and wandb.
    • These libraries are essential for working with Hugging Face models, datasets, and training loops.
    !pip install transformers datasets trl torch accelerate bitsandbytes wandb
    
  2. Import Necessary Modules:

    • Import the required modules from the installed libraries. These include AutoModelForCausalLM, AutoTokenizer, TrainingArguments, pipeline, load_dataset, SFTTrainer, and setup_chat_format.
    from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, pipeline
    from datasets import load_dataset
    from trl import SFTConfig, SFTTrainer, setup_chat_format
    import torch
    import os
    
  3. Detect Device (GPU, MPS, or CPU):

    • Detect the available hardware (GPU, MPS, or CPU) to ensure the model runs on the most efficient device.
    device = (
        "cuda"
        if torch.cuda.is_available()
        else "mps" if torch.backends.mps.is_available() else "cpu"
    )
    

Step 2: Load the Pre-trained Model and Tokenizer

Next, load the pre-trained SmolLM2-360M model and its corresponding tokenizer.

  1. Load the Model and Tokenizer:

    • Use AutoModelForCausalLM and AutoTokenizer to load the SmolLM model and tokenizer from Hugging Face.
    model_name = "HuggingFaceTB/SmolLM2-360M"
    model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
    tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)
    
  2. Set Up Chat Format:

    • Use the setup_chat_format function to prepare the model and tokenizer for chat-based tasks.
    model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)
    
  3. Test the Base Model:

    • Test the base model with a simple prompt to ensure it’s working correctly.
    prompt = "Explain AGI?"
    pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0 if device == "cuda" else -1)
    print(pipe(prompt, max_new_tokens=200))
    
  4. If You Encounter "Chat template is already added to the tokenizer":

    • This message indicates that the tokenizer already ships with a predefined chat template, which prevents setup_chat_format() from modifying it again. Clear the existing template before calling setup_chat_format(), as shown below.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "HuggingFaceTB/SmolLM2-1.7B-Instruct"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path=model_name)
tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=model_name)

tokenizer.chat_template = None  # clear the predefined template so setup_chat_format() can install its own

from trl.models.utils import setup_chat_format
model, tokenizer = setup_chat_format(model=model, tokenizer=tokenizer)

prompt = "Explain AGI?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, device=0)
print(pipe(prompt, max_new_tokens=200))

📍 If you do not see this message, skip the workaround in item 4 and continue with Step 3.


Step 3: Load and Prepare the Dataset

Fine-tuning requires a dataset. In this case, we’re using a custom dataset called Deepthink-Reasoning.


  1. Load the Dataset:

    • Use the load_dataset function to load the dataset from Hugging Face.
    ds = load_dataset("prithivMLmods/Deepthink-Reasoning")
    
  2. Tokenize the Dataset:

    • Define a tokenization function that processes the dataset in batches. This function applies the chat template to each prompt-response pair and tokenizes the text.
    def tokenize_function(examples):
        prompts = [p.strip() for p in examples["prompt"]]
        responses = [r.strip() for r in examples["response"]]
        texts = [
            tokenizer.apply_chat_template(
                [{"role": "user", "content": p}, {"role": "assistant", "content": r}],
                tokenize=False
            )
            for p, r in zip(prompts, responses)
        ]
        return tokenizer(texts, truncation=True, padding="max_length", max_length=512)
    
  3. Apply Tokenization:

    • Apply the tokenization function to the dataset.
    ds = ds.map(tokenize_function, batched=True)
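
    A quick optional check (assuming the dataset exposes a single train split, as Deepthink-Reasoning does) confirms that the mapping added token IDs:

    # Optional sanity check: the mapped dataset keeps the original columns and
    # gains "input_ids" / "attention_mask"; decoding a few tokens shows the
    # chat-template formatting that was applied.
    print(ds["train"].column_names)
    print(tokenizer.decode(ds["train"][0]["input_ids"][:64]))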
    

Step 4: Configure Training Arguments

Set up the training arguments to control the fine-tuning process.

  1. Define Training Arguments:

    • Use TrainingArguments to specify parameters like batch size, learning rate, number of steps, and optimization settings.
    use_bf16 = torch.cuda.is_bf16_supported()
    training_args = TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=60,
        learning_rate=2e-4,
        fp16=not use_bf16,
        bf16=use_bf16,
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
        report_to="wandb",
    )
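
    Since report_to="wandb" is set, log in to Weights & Biases once per session before training starts; the project name below is only illustrative:

    import wandb

    # Authenticate with W&B; the transformers integration reads WANDB_PROJECT
    # to decide which project the run is logged under.
    wandb.login()
    os.environ["WANDB_PROJECT"] = "smollm2-cot-finetune"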
    

Step 5: Initialize the Trainer

Initialize the SFTTrainer with the model, tokenizer, dataset, and training arguments.

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=ds["train"],
    args=training_args,
)

Step 6: Start Training

Begin the fine-tuning process by calling the train method on the trainer.

trainer.train()
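
If you want to inspect the final statistics, keep the return value instead of discarding it:

# trainer.train() returns a TrainOutput; its .metrics dict holds the final
# loss, runtime, and throughput numbers.
train_result = trainer.train()
print(train_result.metrics)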

Step 7: Save the Fine-Tuned Model

After training, save the fine-tuned model and tokenizer to a local directory.

  1. Save Model and Tokenizer:

    • Use the save_pretrained method to save the model and tokenizer.
    save_directory = "/content/my_model"
    model.save_pretrained(save_directory)
    tokenizer.save_pretrained(save_directory)
    
  2. Zip and Download the Model:

    • Zip the saved directory and download it for future use.
    import shutil
    shutil.make_archive(save_directory, 'zip', save_directory)
    
    from google.colab import files
    files.download(f"{save_directory}.zip")
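
Before relying on the downloaded archive, it is worth confirming that the checkpoint reloads cleanly. A minimal sketch, reusing the save_directory and device variables from above (the prompt is illustrative):

from transformers import pipeline

# Reload the fine-tuned checkpoint and run a short generation as a smoke test.
ft_model = AutoModelForCausalLM.from_pretrained(save_directory)
ft_tokenizer = AutoTokenizer.from_pretrained(save_directory)
ft_pipe = pipeline("text-generation", model=ft_model, tokenizer=ft_tokenizer,
                   device=0 if device == "cuda" else -1)
test_prompt = ft_tokenizer.apply_chat_template(
    [{"role": "user", "content": "Explain AGI?"}],
    tokenize=False, add_generation_prompt=True,
)
print(ft_pipe(test_prompt, max_new_tokens=200)[0]["generated_text"])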
    

Run with Ollama

Ollama makes running machine learning models simple and efficient. Follow these steps to set up and run your GGUF models quickly.

Quick Start: Step-by-Step Guide

  1. Install Ollama 🦙

    • Download Ollama from https://ollama.com/download and install it on your system.

  2. Create Your Model File

    • Create a file named after your model, e.g., metallama.
    • Add the following line to specify the base model:
    ```bash
    FROM Llama-3.2-1B.F16.gguf
    ```
    • Ensure the base model file is in the same directory.

  3. Create and Verify the Model

    • Run the following commands to create and verify your model:
    ```bash
    ollama create metallama -f ./metallama
    ollama list
    ```

  4. Run the Model

    • Use the following command to start your model:
    ```bash
    ollama run metallama
    ```

  5. Interact with the Model

    • Once the model is running, interact with it:
    ```plaintext
    >>> Tell me about Space X.
    Space X, the private aerospace company founded by Elon Musk, is revolutionizing space exploration...
    ```
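
The steps above use a generic Llama GGUF as the base model. Adapted to this repository, a minimal model file might look like the sketch below; the exact GGUF filename depends on which quantization you download from SmolLM2-CoT-360M-GGUF (Q8_0 is only an example):

```bash
# Contents of a file named "smollm2-cot" (the GGUF filename below is
# illustrative; match it to the quant you actually downloaded)
FROM SmolLM2-CoT-360M.Q8_0.gguf
```

```bash
ollama create smollm2-cot -f ./smollm2-cot
ollama run smollm2-cot
```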

Model & Quant

  • Model: SmolLM2-CoT-360M
  • Quantized Version: SmolLM2-CoT-360M-GGUF

Conclusion

Fine-tuning SmolLM2 involves setting up the environment, loading the model and dataset, configuring training parameters, and running the training loop. By following these steps, you can adapt SmolLM2 to your specific use case, whether it's for reasoning tasks, chat-based applications, or other NLP tasks.

This process is highly customizable, so feel free to experiment with different datasets, hyperparameters, and training strategies to achieve the best results for your project.

