# Grammar Llama Model

## Uploaded model

- **Developed by:** kmaurinjones
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-1B-Instruct-bnb-4bit

## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "kmaurinjones/grammar-llama-3.2-1B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto"
)

messages = [
    {
        "role": "system",
        "content": "",
    },
    {
        "role": "user",
        # The prompt is written flush-left so the formatted input matches the
        # example shown below (leading indentation would otherwise be sent to the model).
        "content": """# Instructions
- Revise the following transcript in the ways outlined below
- Return exactly the revised text and nothing else
- Use the custom vocabulary of terms to inform your revision

# Revisions
- Correct grammar
- Correct punctuation
- Correct spelling
- Correct word choice

# Vocabulary
{custom vocabulary terms, separated by newline}

# Transcript
{transcript}""",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>")
]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,  # increase for longer transcripts
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.1,
)
# Decode only the newly generated tokens, i.e. the revised transcript
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response))

```

## Example of Model Input

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 26 July 2024

<|eot_id|><|start_header_id|>user<|end_header_id|>

# Instructions
- Revise the following transcript in the ways outlined below
- Return exactly the revised text and nothing else
- Use the custom vocabulary of terms to inform your revision

# Revisions
- Correct grammar
- Correct punctuation
- Correct spelling
- Correct word choice

# Vocabulary
Seljukid
Turbessel
Gaziantep
Ahlat

# Transcript
in 1111, he was invited to participate in a seljookid campaign. with his troops he joined the main seljookid army.  but during the siege of turbessle  he died in august 1111. his coffin was sent to ahlat.<|eot_id|><|start_header_id|>assistant<|end_header_id|>
"""

## Example of Model Output

```
In 1111, he was invited to participate in a Seljukid campaign. With his troops he joined the main Seljukid army.  But during the siege of Turbessel  he died in August 1111. His coffin was sent to Ahlat.<|eot_id|>
"""

GGUF quantizations of this 1.24B-parameter llama-architecture model are available in 4-bit, 5-bit, 8-bit, and 16-bit.
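
Quantized GGUF builds can also be run outside of transformers, for example with llama-cpp-python. The sketch below is assumption-heavy: the quantization filename pattern is a guess, so check the repo's file listing for the exact name before running it.

```python
from llama_cpp import Llama

# Assumption: a 4-bit GGUF file is published in this repo; replace the filename
# pattern with the actual file listed on the model page.
llm = Llama.from_pretrained(
    repo_id="kmaurinjones/grammar-llama-3.2-1B",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
    verbose=False,
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": ""},
        # user_prompt built with the build_user_prompt helper sketched earlier
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.1,
    max_tokens=256,
)
print(result["choices"][0]["message"]["content"])
```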
