QuantFlex Banner

GGUF Quants for: MicroThinker-1B-Preview

Model by: huihui-ai (thank you!)

Quants by: quantflex

Run with llama.cpp:

./llama-cli -m MicroThinker-1B-Preview-Q5_K_M.gguf -cnv -p "You are a helpful assistant. You should think step-by-step." --chat-template llama3
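
If you don't have the file locally yet, one way to fetch it is with huggingface-cli (a minimal sketch; the filename below matches the Q5_K_M file referenced in the command above and is assumed to be present in this repo):

# Fetch the Q5_K_M quant from this repo into the current directory
huggingface-cli download quantflex/MicroThinker-1B-Preview-GGUF MicroThinker-1B-Preview-Q5_K_M.gguf --local-dir .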

Format: GGUF
Model size: 1.24B params
Architecture: llama

Available quantizations: 5-bit, 6-bit, 8-bit, 32-bit
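
To serve the model over HTTP instead of the interactive CLI, the same file also works with llama.cpp's llama-server (a sketch; the port and context size below are arbitrary choices, not values taken from this repo):

# Start an OpenAI-compatible HTTP server on port 8080 with a 4096-token context
./llama-server -m MicroThinker-1B-Preview-Q5_K_M.gguf --port 8080 -c 4096 --chat-template llama3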

