Model Details
smcleish/clrs_llama_3_8b_100k_finetune_with_traces is meta-llama/Meta-Llama-3-8B fine-tuned on 100,000 CLRS-Text examples.
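A minimal loading and generation sketch using the transformers library; the prompt shown is a hypothetical CLRS-Text-style input, since the exact prompt format used in training is not specified on this card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "smcleish/clrs_llama_3_8b_100k_finetune_with_traces"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Hypothetical CLRS-Text-style prompt; the exact format is an assumption.
prompt = "insertion_sort:\nkey: [5, 2, 4, 6, 1, 3]\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```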
Training Details
- Learning rate: 1e-4 with 150 warmup steps, then cosine-decayed to 5e-6, using the AdamW optimiser (see the schedule sketch below).
- Batch size: 128
- Loss computed over the answer tokens only, not the question (see the masking sketch below).
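A sketch of the stated schedule, assuming linear warmup and a cosine decay to a 5e-6 floor; `total_steps` is an assumption, not a value from this card, and `model` is the checkpoint loaded above.

```python
import math
import torch

peak_lr, final_lr = 1e-4, 5e-6
warmup_steps, total_steps = 150, 10_000  # total_steps is assumed, not from the card

optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr)

def lr_lambda(step: int) -> float:
    # Linear warmup from 0 to peak_lr over the first 150 steps.
    if step < warmup_steps:
        return step / max(1, warmup_steps)
    # Cosine decay from peak_lr down to final_lr over the remaining steps.
    progress = min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return (final_lr + (peak_lr - final_lr) * cosine) / peak_lr

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```

LambdaLR scales the optimiser's base learning rate by the returned factor, so the factor is normalised by peak_lr to land exactly on the 5e-6 floor at the end of training.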
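A sketch of answer-only loss via label masking, the standard mechanism for this in transformers: question positions are set to the ignore index -100, so cross-entropy is computed on answer tokens alone. The question/answer strings and the boundary computation are illustrative, not from the card.

```python
question = "insertion_sort:\nkey: [5, 2, 4, 6, 1, 3]\n"  # hypothetical example
answer = "[1, 2, 3, 4, 5, 6]"

# Token count of the question alone approximates the boundary; tokenisers
# can merge tokens across the boundary, so real pipelines align offsets.
question_len = len(tokenizer(question).input_ids)

batch = tokenizer(question + answer, return_tensors="pt").to(model.device)
labels = batch.input_ids.clone()
labels[:, :question_len] = -100  # -100 positions are ignored by the loss

loss = model(input_ids=batch.input_ids, labels=labels).loss
```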