Llama-3.2-3B Finetuned Model

1. Introduction

This model is a finetuned version of the Llama-3.2-3B-Instruct large language model. It has been trained specifically to answer university course-related queries, providing details on course content, fee structures, duration, and campus options, along with links to the corresponding course pages. The finetuning process used a tailored dataset to ensure domain-specific accuracy.


2. Dataset Used for Finetuning

The Llama-3.2-3B model was finetuned on a private dataset obtained through web scraping. Data was collected from the University of Westminster website and included:

  • Course titles
  • Campus details
  • Duration options (full-time, part-time, distance learning)
  • Fee structures (for UK and international students)
  • Course descriptions
  • Direct links to course pages

This dataset was carefully cleaned and formatted to enhance the model's ability to provide precise responses to user queries.
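
The dataset itself is private and its exact schema has not been published, but for illustration, a single cleaned record in a chat-style question-answer format might look like the sketch below; the field names and placeholder content are assumptions, not the actual data.

```python
# Hypothetical structure of one training example (not taken from the real dataset).
example_record = {
    "messages": [
        {
            "role": "user",
            "content": "Tell me about the AI, Data and Communication MA at the University of Westminster.",
        },
        {
            "role": "assistant",
            "content": (
                "<course description, campus, full-time/part-time duration, "
                "UK and international fees, and a link to the course page>"
            ),
        },
    ]
}
```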


3. How to Use This Model

To use the Llama-3.2-3B finetuned model, follow the steps below:

  1. Prepare the Query Function
    • Define the function that handles a user query and streams the generated response:

```python
from transformers import TextStreamer

def chatml(question, model):
    # Wrap the question in the chat format expected by the instruct model.
    messages = [{"role": "user", "content": question}]

    # `tokenizer` is assumed to be loaded alongside the model (see the loading sketch below).
    inputs = tokenizer.apply_chat_template(
        messages,
        tokenize=True,
        add_generation_prompt=True,
        return_tensors="pt",
    ).to("cuda")

    # Show the fully templated prompt for verification.
    print(tokenizer.decode(inputs[0]))

    # Stream the answer to stdout as it is generated.
    text_streamer = TextStreamer(tokenizer, skip_special_tokens=True,
                                 skip_prompt=True)
    return model.generate(input_ids=inputs,
                          streamer=text_streamer,
                          max_new_tokens=512)
```
  2. Query the Model
    • Use the following example to test the model:

```python
question = "Does the University of Westminster offer a course on AI, Data and Communication MA?"
x = chatml(question, model)
```
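
The answer is streamed to standard output by `TextStreamer`; the return value `x` is the full sequence of token ids (templated prompt plus generated answer). If you also want the response as a Python string, you can decode it, e.g.:

```python
# Decode the full sequence; it contains the templated prompt followed by the answer.
answer = tokenizer.decode(x[0], skip_special_tokens=True)
print(answer)
```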
      

With this setup you can query the Llama-3.2-3B finetuned model and receive detailed, relevant responses.
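
Note that the snippets above assume `model` and `tokenizer` are already in memory. As a minimal sketch (assuming the standard transformers API and a CUDA device; the exact loading code used by the author is not shown here), they could be loaded like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "roger33303/Best_Model-llama3.2-3b-Instruct-Finetune-website-QnA"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the published weights are stored in BF16
    device_map="cuda",           # requires `accelerate`; matches the .to("cuda") call above
)
```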


Uploaded model

  • Developed by: roger33303
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-3B-Instruct
  • Model size: 3.21B params (Safetensors, BF16)