Tags: Text Generation · Transformers · Safetensors · mixtral · Mixture of Experts · frankenmoe · Merge · mergekit · lazymergekit · M4-ai/TinyMistral-248M-v2-cleaner · Locutusque/TinyMistral-248M-Instruct · jtatman/tinymistral-v2-pycoder-instuct-248m · Locutusque/TinyMistral-248M-v2-Instruct · Eval Results · text-generation-inference · Inference Endpoints
# TinyMistral-248Mx4-MOE

TinyMistral-248Mx4-MOE is a Mixture of Experts (MoE) model made with the following models using LazyMergekit:
- M4-ai/TinyMistral-248M-v2-cleaner
- Locutusque/TinyMistral-248M-Instruct
- jtatman/tinymistral-v2-pycoder-instuct-248m
- Locutusque/TinyMistral-248M-v2-Instruct
## 🧩 Configuration
```yaml
base_model: Locutusque/TinyMistral-248M-v2-Instruct
gate_mode: hidden
dtype: bfloat16
experts:
  - source_model: M4-ai/TinyMistral-248M-v2-cleaner
    positive_prompts:
      - "versatile"
      - "helpful"
      - "factual"
      - "integrated"
      - "adaptive"
      - "comprehensive"
      - "balanced"
    negative_prompts:
      - "specialized"
      - "narrow"
      - "focused"
      - "limited"
      - "specific"
  - source_model: Locutusque/TinyMistral-248M-Instruct
    positive_prompts:
      - "creative"
      - "chat"
      - "discuss"
      - "culture"
      - "world"
      - "expressive"
      - "detailed"
      - "imaginative"
      - "engaging"
    negative_prompts:
      - "sorry"
      - "cannot"
      - "factual"
      - "concise"
      - "straightforward"
      - "objective"
      - "dry"
  - source_model: jtatman/tinymistral-v2-pycoder-instuct-248m
    positive_prompts:
      - "analytical"
      - "accurate"
      - "logical"
      - "knowledgeable"
      - "precise"
      - "calculate"
      - "compute"
      - "solve"
      - "work"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
      - "tell me"
      - "assistant"
    negative_prompts:
      - "creative"
      - "abstract"
      - "imaginative"
      - "artistic"
      - "emotional"
      - "mistake"
      - "inaccurate"
  - source_model: Locutusque/TinyMistral-248M-v2-Instruct
    positive_prompts:
      - "instructive"
      - "clear"
      - "directive"
      - "helpful"
      - "informative"
    negative_prompts:
      - "exploratory"
      - "open-ended"
      - "narrative"
      - "speculative"
      - "artistic"
```
## 💻 Usage
```python
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "222gate/TinyMistral-248Mx4-MOE"
tokenizer = AutoTokenizer.from_pretrained(model)

# Load the model in 4-bit and build a text-generation pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the prompt with the model's chat template, then sample a completion
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
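On recent transformers releases, passing `load_in_4bit` directly through `model_kwargs` may be deprecated in favor of an explicit quantization config. A minimal sketch of the same 4-bit setup using `BitsAndBytesConfig` is below; the settings and variable names are assumptions for illustration, not part of the original card.

```python
# Sketch of an equivalent 4-bit setup via BitsAndBytesConfig (assumed
# alternative to the bare load_in_4bit kwarg used above).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "222gate/TinyMistral-248Mx4-MOE"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

# Same prompt and sampling settings as the pipeline example above
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```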
## Open LLM Leaderboard Evaluation Results

Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 30.08 |
| AI2 Reasoning Challenge (25-Shot) | 29.52 |
| HellaSwag (10-Shot) | 25.71 |
| MMLU (5-Shot) | 24.82 |
| TruthfulQA (0-shot) | 48.66 |
| Winogrande (5-shot) | 51.78 |
| GSM8k (5-shot) | 0.00 |