# ubaitur5/SmallThinker-3B-Preview-Q4-mlx
The model `ubaitur5/SmallThinker-3B-Preview-Q4-mlx` was converted to MLX format from `PowerInfer/SmallThinker-3B-Preview` using mlx-lm version 0.20.5.
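For reference, quantized conversions like this one are typically produced with the `mlx_lm.convert` utility that ships with mlx-lm; the following is a minimal sketch (the output path is illustrative, and exact flag names can vary across mlx-lm versions):

```bash
# Convert the original Hugging Face weights to a 4-bit quantized MLX model.
# --mlx-path is a hypothetical output directory; -q enables quantization.
python -m mlx_lm.convert \
    --hf-path PowerInfer/SmallThinker-3B-Preview \
    --mlx-path SmallThinker-3B-Preview-Q4-mlx \
    -q --q-bits 4
```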
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate

# Download the quantized model and its tokenizer from the Hugging Face Hub
model, tokenizer = load("ubaitur5/SmallThinker-3B-Preview-Q4-mlx")

prompt = "hello"

# Wrap the prompt with the model's chat template, if one is available
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
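Alternatively, text can be generated straight from the command line via the `mlx_lm.generate` entry point provided by mlx-lm; a minimal sketch (flag spellings may differ slightly between mlx-lm versions):

```bash
# One-shot generation from the CLI using the quantized model
python -m mlx_lm.generate \
    --model ubaitur5/SmallThinker-3B-Preview-Q4-mlx \
    --prompt "hello" \
    --max-tokens 256
```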
## Model tree for ubaitur5/SmallThinker-3B-Preview-Q4-mlx

- Base model: Qwen/Qwen2.5-3B
- Finetuned: Qwen/Qwen2.5-3B-Instruct
- Finetuned: PowerInfer/SmallThinker-3B-Preview