---
license: mit
license_link: https://huggingface.co/microsoft/phi-4/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- phi
- nlp
- math
- code
- chat
- conversational
- phi3
inference:
  parameters:
    temperature: 0
widget:
- messages:
  - role: user
    content: How many R's in strawberry? Think step by step.
library_name: transformers
datasets:
- amphora/QwQ-LongCoT-130K
base_model:
- microsoft/phi-4
model-index:
- name: SuperThoughts-CoT-14B-16k-o1-QwQ
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: wis-k/instruction-following-eval
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 5.15
      name: averaged accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: SaylorTwift/bbh
      split: test
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 52.85
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: lighteval/MATH-Hard
      split: test
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 40.79
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      split: train
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 19.02
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 21.79
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 47.43
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ
      name: Open LLM Leaderboard
---
Please note: the low IFEval score is because this model always reasons before answering, which limits strict instruction following. This should not matter for most use cases.
GGUF/final version: https://huggingface.co/Pinkstack/PARM-V2-phi-4-16k-CoT-o1-gguf
This model can be merged with other phi-4-based LLMs!
Other GGUF version: [mradermacher/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF](https://huggingface.co/mradermacher/SuperThoughts-CoT-14B-16k-o1-QwQ-GGUF)
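If you want to try one of the GGUF builds locally, a minimal sketch with llama-cpp-python looks like the following; the `model_path` is a placeholder for whichever quantization you actually download, and the prompt string follows the format described further down:
```python
# Minimal sketch: run a downloaded GGUF quantization with llama-cpp-python.
# The model_path below is a placeholder, not an exact file name.
from llama_cpp import Llama

llm = Llama(
    model_path="SuperThoughts-CoT-14B-16k-o1-QwQ.Q4_K_M.gguf",  # placeholder
    n_ctx=16384,  # the model supports a 16k context window
)

prompt = (
    "<|system|>\nYou are a helpful ai assistant. "
    "Make sure to put your finalanswer at the end.<|im_end|>\n"
    "<|user|>\nHow many R's in strawberry? Think step by step.<|im_end|>\n"
    "<|assistant|>"
)
out = llm(prompt, max_tokens=1024, temperature=0.3, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```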
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/QDHJhI0EVT_L9AHY_g3Br.png)
[Phi-4 Technical Report](https://arxiv.org/pdf/2412.08905)
This is Phi-4, tuned to be more advanced at reasoning.
Unlike our other PARM models, we had to optimize the fine-tuning process to ensure accuracy while still being able to release this model. **Training loss: 0.443800**
Beats Qwen/QwQ at MATH, MuSR, and GPQA (MuSR being a reasoning benchmark).
Evaluation:
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/csbdGKzGcDVMPRqMCoH8D.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/HR9WtjBhE4h6wrq88FLAf.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/GLt4ct4yAVMvYEpoYO5o6.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/CP9UF9kdBT_SW8Q79PSui.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/doEIqDrM639hRPSg_J6AF.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/yl5Et2TkCoYuIrNpDhZu9.png)
The model uses this prompt format (a modified Phi-4 prompt):
```
{{ if .System }}<|system|>
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|im_end|>
{{ end }}<|assistant|>{{ .CoT }}<|CoT|>
{{ .Response }}<|FinalAnswer|><|im_end|>
```
It is recommended to use a system prompt like this one:
```
You are a helpful ai assistant. Make sure to put your finalanswer at the end.
```
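For reference, here is a minimal sketch of filling the template by hand and generating with `transformers`; the repo id is assumed from this card's name, and the sampling settings follow the recommendations in the notes below:
```python
# Minimal sketch: build the prompt format above by hand and generate
# with transformers. Requires enough memory for a 14B model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/SuperThoughts-CoT-14B-16k-o1-QwQ"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

system = "You are a helpful ai assistant. Make sure to put your finalanswer at the end."
user = "How many R's in strawberry? Think step by step."
# The model itself emits the reasoning, the <|CoT|> marker, the answer,
# and the <|FinalAnswer|> marker, so the prompt stops at <|assistant|>.
prompt = f"<|system|>\n{system}<|im_end|>\n<|user|>\n{user}<|im_end|>\n<|assistant|>"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs, max_new_tokens=1024, do_sample=True, temperature=0.3
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:]))
```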
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Pinkstack__SuperThoughts-CoT-14B-16k-o1-QwQ-details)!
Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=Pinkstack%2FSuperThoughts-CoT-14B-16k-o1-QwQ&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
| Metric |Value (%)|
|-------------------|--------:|
|**Average** | 31.17|
|IFEval (0-Shot) | 5.15|
|BBH (3-Shot) | 52.85|
|MATH Lvl 5 (4-Shot)| 40.79|
|GPQA (0-shot) | 19.02|
|MuSR (0-shot) | 21.79|
|MMLU-PRO (5-shot) | 47.43|
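The Average row is just the unweighted mean of the six benchmark scores, as a quick sanity check shows:
```python
scores = [5.15, 52.85, 40.79, 19.02, 21.79, 47.43]
print(round(sum(scores) / len(scores), 2))  # 31.17
```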
# 🧀 Examples:
(q4_k_m quantization on a 10 GB RTX 3080 with 64 GB system memory, running inside MSTY; all examples use "You are a friendly ai assistant." as the system prompt.)
**example 1:**
![example1](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/NoLJREYFU8LdMwynyLLMG.png)
**example 2:**
![2](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/uboFipmS1ulfxeDgMBsBH.png)
**example 3:**
![example2](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/c4h-nw0DPTrQgX-_tvBoT.png)
**example 4:**
![example1part1.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/Dcd6-wbpDQuXoulHaqATo.png)
![example1part2.png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/CoBYmYiRt9Z4IDFoOwHxc.png)
All generated locally, and pretty quickly too! 😲 Due to our very limited resources we weren't able to evaluate this model further (yet); if you evaluate it, please do let us know!
# 🧀 Information
- ⚠️ A low temperature must be used to ensure it won't fail at reasoning; we use 0.3-0.8!
- ⚠️ Due to the current prompt format, it may sometimes emit `<|FinalAnswer|>` without actually providing a final answer; you can ignore this or modify the prompt format (a small parsing sketch follows this list).
- This is our flagship model, with top-tier reasoning rivaling gemini-2.0-flash-thinking-exp and o1-mini; results are overall similar to both of them. We are not comparing against QwQ, as it produces much longer outputs, which wastes tokens.
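Since the markers are plain strings, a hypothetical helper like the one below (not part of the model) can split a raw completion into reasoning and answer, tolerating a trailing marker with nothing after it. Per the template, the chain of thought ends at `<|CoT|>` and the final answer ends at `<|FinalAnswer|>`:
```python
def split_reasoning(text: str) -> tuple[str, str]:
    # Reasoning is everything before <|CoT|>; the answer is everything
    # between <|CoT|> and <|FinalAnswer|> (or the end of the text).
    reasoning, sep, rest = text.partition("<|CoT|>")
    if not sep:  # no marker at all: treat the whole completion as the answer
        return "", text.strip()
    answer = rest.split("<|FinalAnswer|>", 1)[0]
    return reasoning.strip(), answer.strip()

demo = "There are three R's in strawberry.<|CoT|>\n3<|FinalAnswer|>"
cot, final = split_reasoning(demo)
print(cot)    # the chain of thought
print(final)  # "3"
```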
# Uploaded model
- **Developed by:** Pinkstack
- **License:** MIT
- **Finetuned from model:** microsoft/phi-4
This phi-4 model was trained with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.