---
datasets:
- NovaSky-AI/Sky-T1_data_17k
base_model:
- win10/Phi-4-llama-t1-lora
license: mit
---
Full merged 16-bit model of [win10/Phi-4-llama-t1-lora](https://huggingface.co/win10/Phi-4-llama-t1-lora). Please always thank the original author for all the hard work! All I did was the simple merging work on Colab.
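For reference, here is a minimal sketch of how such a LoRA merge can be done with `peft` (an assumed workflow using `merge_and_unload`, not the author's exact Colab notebook; the output directory name is illustrative):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the adapter repo together with its base model, fold the LoRA
# deltas into the base weights, and save the merged model in 16-bit.
model = AutoPeftModelForCausalLM.from_pretrained(
    "win10/Phi-4-llama-t1-lora",
    torch_dtype=torch.bfloat16,
)
merged = model.merge_and_unload()

tokenizer = AutoTokenizer.from_pretrained("win10/Phi-4-llama-t1-lora")
merged.save_pretrained("Phi-4-llama-t1-full")  # illustrative output path
tokenizer.save_pretrained("Phi-4-llama-t1-full")
```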
Run with PyTorch:
```python
import torch
import transformers

# The pipeline loads the matching tokenizer automatically;
# device_map="auto" spreads the weights across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model="benhaotang/Phi-4-llama-t1-full",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant. You always think step by step."},
    {"role": "user", "content": "Give me a short introduction to renormalization group (RG) flow in physics."},
]

outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"])
```
Or use the static GGUF quants at [benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF](https://huggingface.co/benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF), e.g. with Ollama:
```bash
ollama run hf.co/benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF
```
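The same GGUF should also run directly with llama.cpp, assuming a recent build that supports pulling from Hugging Face via `-hf`:

```bash
llama-cli -hf benhaotang/Phi-4-llama-t1-full-Q4_K_M-GGUF
```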