Still the best little guy for its size. THANKS for the Christmas present, fblgit/miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF!

This model was converted to GGUF format from fblgit/miniclaus-qw1.5B-UNAMGS using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)

brew install llama.cpp

Invoke the llama.cpp server or the CLI.

CLI:

llama-cli --hf-repo IntelligentEstate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf -p "The meaning to life and the universe is"
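Server: llama-server accepts the same --hf-repo/--hf-file flags as the CLI; the port value below is just an example.

```shell
llama-server --hf-repo IntelligentEstate/fblgit_miniclaus-qw1.5B-UNAMGS-Q8_0-GGUF --hf-file miniclaus-qw1.5b-unamgs-q8_0.gguf --port 8080
```

Once running, the server exposes an OpenAI-compatible chat endpoint at http://localhost:8080.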

GPT4All/Ollama: use the standard Qwen chat template and prompting, and open up the context window for longer outputs.
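Qwen instruct models use the ChatML prompt format. A minimal sketch of building such a prompt by hand, for runners that expect a raw prompt string (the helper name is illustrative):

```python
def qwen_chat_prompt(system_msg: str, user_msg: str) -> str:
    """Build a ChatML-style prompt as used by Qwen instruct models."""
    return (
        f"<|im_start|>system\n{system_msg}<|im_end|>\n"
        f"<|im_start|>user\n{user_msg}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example: the assembled prompt ends with an open assistant turn,
# which is where the model begins generating.
print(qwen_chat_prompt("You are a helpful assistant.", "What is 2+2?"))
```

GPT4All and Ollama apply this template automatically when the model's metadata carries it, so manual construction is only needed for raw-completion workflows.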

Model details

Format: GGUF (8-bit, Q8_0)
Model size: 1.78B params
Architecture: qwen2


Model tree

Base model: Qwen/Qwen2.5-1.5B (this repo is one of its quantized derivatives)
