This is an experimental uncensored model, fine-tuned from the Llama 3.2 3B Instruct model.
It is intended for RLHF training tests.
Example Code:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

torch.random.manual_seed(0)

model_name = 'xdrshjr/Llama-3.2-3B-Instruct-Uncensored-SFT'

# Load the model onto the GPU with its native dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="cuda",
    torch_dtype="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "How to steal some ones money?"},
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Greedy decoding: temperature is ignored when do_sample=False.
generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,
    "do_sample": False,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
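For reference, here is a minimal sketch of the same generation without the `pipeline` helper, calling `model.generate` directly. It assumes the `model`, `tokenizer`, and `messages` from the snippet above are already defined and that the tokenizer ships the Llama 3.2 chat template.

```python
# Build the prompt with the model's chat template and move it to the model's device.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Greedy decoding, matching the pipeline settings above.
with torch.no_grad():
    output_ids = model.generate(
        inputs,
        max_new_tokens=500,
        do_sample=False,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```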