This is a fine-tune of Qwen/Qwen1.5-0.5B on the gpjt/openassistant-guanaco-llama2-format dataset, which in turn is a version of timdettmers/openassistant-guanaco adjusted to use my best guess at the Llama 2 prompt format (see the dataset card for more info).
I've written a series of blog posts describing my progress from essentially no knowledge of working with LLMs to being able to produce this model, and a similar fine-tune of meta-llama/Meta-Llama-3-8B:
- Fine-tuning a 0.5B model on my own machine.
- Doing the same, but in the cloud using Lambda Labs.
- Running some multi-GPU training, but using the GPUs to run larger batches for the 0.5B model -- which in turn means training faster -- rather than to train a larger model.
- Successfully fine-tuning the 8B model across multiple GPUs using ZeRO and DeepSpeed, but with the optimizer offloaded to CPU (a rough sketch of this kind of setup follows this list).
- Doing some initial experiments into memory usage for a 0.5B model locally to get some ideas as to why I had to offload the optimizer.
- Measuring memory usage more systematically for the 0.5B model, also locally, to find out how it behaves with different sequence lengths.
- Making similar measurements at different sequence lengths for the 8B model.
- Measuring the effect of batch sizes on memory usage, with a sidetrack into looking at Liger Kernel, a new and easy-to-use replacement for the default CUDA kernels used for training that promises (and delivers) better memory usage and performance.
- Investigating how gradient checkpointing works, in the hope that it might allow me to trade off GPU processing for memory usage and get a larger batch size (meaning that each training iteration was slower, but the overall training run took less time). Sadly, those hopes were dashed.
- Running the final fine-tune that produced this model.
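The posts cover the reasoning and the measurements rather than reproducing the full training script, but the rough shape of the multi-GPU setup looks like the sketch below. This is a minimal, hedged example using the Hugging Face Trainer with a DeepSpeed ZeRO config that offloads optimizer state to CPU, plus gradient checkpointing; the ZeRO stage, hyperparameters, sequence length, and the dataset's "text" column are illustrative assumptions, not the exact values from the posts.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_model = "Qwen/Qwen1.5-0.5B"  # the posts apply the same recipe to meta-llama/Meta-Llama-3-8B
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Assumes the dataset exposes a single "text" column, like the original
# timdettmers/openassistant-guanaco it was derived from.
dataset = load_dataset("gpjt/openassistant-guanaco-llama2-format", split="train")
train_dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=dataset.column_names,
)

# ZeRO with optimizer state offloaded to CPU -- the combination that let the
# 8B fine-tune fit across multiple GPUs. Stage 2 here is illustrative.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu"},
    },
    "bf16": {"enabled": True},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}

training_args = TrainingArguments(
    output_dir="qwen-guanaco-finetune",
    per_device_train_batch_size=2,   # illustrative; the posts measure how far this can be pushed
    gradient_checkpointing=True,     # trade extra compute for lower activation memory
    bf16=True,
    num_train_epochs=1,
    deepspeed=ds_config,             # the Trainer's DeepSpeed integration accepts a config dict
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

A multi-GPU run would be launched with the `deepspeed` or `torchrun` launcher rather than plain `python`; Liger Kernel (mentioned above) would be patched in before training and isn't shown here.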
Sample code to use it:
```python
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# The Llama 2-style prompt format the model was fine-tuned on.
prompt_template = """
<s>[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
<</SYS>>
{question} [/INST]
{response}
"""


def ask_question(model, tokenizer, question):
    pipe = pipeline(task="text-generation", model=model, tokenizer=tokenizer, max_new_tokens=2048)
    prompt = prompt_template.format(question=question, response="")
    tokens_in = len(tokenizer(prompt)["input_ids"])
    start = time.time()
    result = pipe(prompt)
    end = time.time()
    generated_text = result[0]["generated_text"]
    tokens_out = len(tokenizer(generated_text)["input_ids"])
    print(generated_text)
    # Report generation speed based on the number of newly generated tokens.
    tokens_generated = tokens_out - tokens_in
    time_taken = end - start
    tokens_per_second = tokens_generated / time_taken
    print(f"{tokens_generated} tokens in {time_taken:.2f}s: {tokens_per_second:.2f} tokens/s")


def test_model():
    model_name = "Qwen1.5-0.5B-openassistant-guanaco-llama2-format"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="cuda", torch_dtype=torch.bfloat16)
    question = input("You: ")
    ask_question(model, tokenizer, question)


if __name__ == "__main__":
    test_model()
```
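To load the model from the Hub rather than from a local directory, use the full repository id as `model_name`, i.e. `model_name = "gpjt/Qwen1.5-0.5B-openassistant-guanaco-llama2-format"`.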