mav23/SOLAR-0-70b-16bit-GGUF
Tags: Text Generation · GGUF · English · upstage · llama-2 · instruct · instruction · Inference Endpoints
Files and versions (branch: main)

1 contributor · History: 15 commits
Latest commit: 0db7c18 (verified) by mav23 · "Upload folder using huggingface_hub" · about 2 months ago
Every file below was last updated about 2 months ago by the same commit, "Upload folder using huggingface_hub".

| File | Size | Notes |
|---|---|---|
| .gitattributes | 2.43 kB | Safe |
| README.md | 5.85 kB | Safe |
| solar-0-70b-16bit.Q2_K.gguf | 25.5 GB | LFS |
| solar-0-70b-16bit.Q3_K.gguf | 33.3 GB | LFS |
| solar-0-70b-16bit.Q3_K_L.gguf | 36.1 GB | LFS |
| solar-0-70b-16bit.Q3_K_M.gguf | 33.3 GB | LFS |
| solar-0-70b-16bit.Q3_K_S.gguf | 29.9 GB | LFS |
| solar-0-70b-16bit.Q4_0.gguf | 38.9 GB | LFS |
| solar-0-70b-16bit.Q4_1.gguf | 43.2 GB | LFS |
| solar-0-70b-16bit.Q4_K.gguf | 41.4 GB | LFS |
| solar-0-70b-16bit.Q4_K_M.gguf | 41.4 GB | LFS |
| solar-0-70b-16bit.Q4_K_S.gguf | 39.2 GB | LFS |
| solar-0-70b-16bit.Q5_0.gguf | 47.5 GB | LFS |
| solar-0-70b-16bit.Q5_K.gguf | 48.8 GB | LFS |
| solar-0-70b-16bit.Q5_K_M.gguf | 48.8 GB | LFS |
| solar-0-70b-16bit.Q5_K_S.gguf | 47.5 GB | LFS |
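
The commit history shows the files were uploaded with huggingface_hub, and the same library can fetch an individual quantization from this repository. Below is a minimal sketch, not taken from the model card: it assumes a Python environment with huggingface_hub installed, picks the Q4_K_M file arbitrarily from the table above, and downloads into the default local Hugging Face cache.

```python
# Minimal sketch: fetch one quantization from this repo with huggingface_hub.
# Repo id and filename come from the file listing above; expect a ~41 GB download.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mav23/SOLAR-0-70b-16bit-GGUF",
    filename="solar-0-70b-16bit.Q4_K_M.gguf",
)
print(gguf_path)  # local path to the cached GGUF file
```

The resulting .gguf file can then be loaded by any GGUF-compatible runtime such as llama.cpp.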