---
library_name: transformers
license: mit
datasets:
- kuotient/gsm8k-ko
- lilacai/glaive-function-calling-v2-sharegpt
- >-
Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
base_model:
- microsoft/phi-4
pipeline_tag: text-generation
---
# AXCXEPT/EZO-phi-4-sft7_12000
## Usage
### Input Formats
Given the nature of the training data, this `phi-4`-based model is best suited to prompts that use the following chat format:
```
<|im_start|>system<|im_sep|>
You are a medieval knight and must provide explanations to modern people.<|im_end|>
<|im_start|>user<|im_sep|>
How should I explain the Internet?<|im_end|>
<|im_start|>assistant<|im_sep|>
```
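Rather than hand-writing this template, you can let the tokenizer render it via `apply_chat_template`. The following is a minimal sketch, assuming the tokenizer for this repository ships the chat template shown above:
```python
from transformers import AutoTokenizer

# Load the tokenizer for this repository (assumed to include the chat template).
tokenizer = AutoTokenizer.from_pretrained("AXCXEPT/EZO-phi-4-sft7_12000")

messages = [
    {"role": "system", "content": "You are a medieval knight and must provide explanations to modern people."},
    {"role": "user", "content": "How should I explain the Internet?"},
]

# Render the messages into the <|im_start|> ... <|im_sep|> format and append
# the assistant header so the model continues from there.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```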
### With `transformers`
```python
import transformers

# Build a text-generation pipeline for this model.
pipeline = transformers.pipeline(
    "text-generation",
    model="AXCXEPT/EZO-phi-4-sft7_12000",
    model_kwargs={"torch_dtype": "auto"},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are an excellent AI. Please think carefully and answer in polite Japanese."},
    {"role": "user", "content": "Taro has 5 apples. He then buys 2 boxes of apples, and each box contains 3 apples. How many apples does Taro have?"},
]

outputs = pipeline(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])
```
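For finer control over generation (sampling settings, device placement, decoding of only the new tokens), you can load the model and tokenizer directly instead of using the pipeline. This is a minimal sketch, assuming the same repository ID and chat template as above:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AXCXEPT/EZO-phi-4-sft7_12000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [
    {"role": "user", "content": "Taro has 5 apples. He buys 2 boxes with 3 apples each. How many apples does he have?"},
]

# Tokenize via the chat template, generate, and decode only the new tokens.
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```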