16 |
datasets:
|
17 |
- IntelligentEstate/The_Key
|
18 |
---
|

# IntelligentEstate/ReasoningRabbit_QwenStar-7B-IQ4_XS-GGUF
This model is developed as a unique blend of inference and coding ability on par with the new VL models of its size (it is also the only *Thinking* model without alignment), and is intended primarily for use with GPT4All. It excels in other applications as well, offering reasoning capabilities (similar to QwQ/o1/o3) inside its interface together with a unique JavaScript tool-call function. It was converted to GGUF format from [`WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B`](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) using llama.cpp, with the "THE_KEY" dataset used for importance matrix quantization.
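For reference, an importance-matrix quantization like the one described above can be reproduced with llama.cpp's own tools. The commands below are a sketch, assuming llama.cpp has been built locally and the calibration text from THE_KEY has been exported to `the_key.txt`; the file names and output paths are illustrative, not the exact ones used for this release.

```shell
# Convert the original Hugging Face checkpoint to a full-precision GGUF
# (convert_hf_to_gguf.py ships with the llama.cpp repository)
python convert_hf_to_gguf.py WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B \
    --outfile WhiteRabbitNeo-7B-f16.gguf --outtype f16

# Compute an importance matrix over the calibration dataset
./llama-imatrix -m WhiteRabbitNeo-7B-f16.gguf -f the_key.txt -o imatrix.dat

# Quantize to IQ4_XS, weighting the quantization by the importance matrix
./llama-quantize --imatrix imatrix.dat \
    WhiteRabbitNeo-7B-f16.gguf ReasoningRabbit_QwenStar-7B-IQ4_XS.gguf IQ4_XS
```

The importance matrix steers the quantizer toward preserving the weights that matter most on the calibration data, which is what makes low-bit formats such as IQ4_XS viable for a 7B model.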

Refer to the [original model card](https://huggingface.co/WhiteRabbitNeo/WhiteRabbitNeo-2.5-Qwen-2.5-Coder-7B) for more details on the model.
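A minimal way to try the quantized file directly with llama.cpp (GPT4All users can instead add the GGUF file through the application's model list); the file name below assumes the IQ4_XS artifact named in the title:

```shell
# Run an interactive single-prompt completion against the quantized model
./llama-cli -m ReasoningRabbit_QwenStar-7B-IQ4_XS.gguf \
    -p "Write a JavaScript function that reverses a string." -n 256
```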