---
base_model: NousResearch/Llama-2-7b-hf
tags:
- llama-2
- instruct
- finetune
- alpaca
- gpt4
- synthetic data
- distillation
datasets:
- teknium/openhermes
model-index:
- name: openhermes-7b
results: []
license: mit
language:
- en
---
# OpenHermes-7B
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ovkrkIIUwJ9azhPtW6dAb.png)
## Model description
OpenHermes 7B is the first fine-tune in the Hermes series to be trained on a fully open-source dataset!
What is unique about this 7B model is that it used sample packing, which can speed up training by many multiples when the average example length is well below the maximum sequence length.
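For readers unfamiliar with the technique, here is a minimal sketch of greedy sample packing; the constants (`MAX_LEN`, `EOS_ID`) and the `pack` helper are illustrative assumptions, not the actual training code:

```python
# A minimal sketch of sample packing: short tokenized examples are
# concatenated into fixed-length sequences so little of the context
# window is wasted on padding. Constants are illustrative.
from typing import Iterable

MAX_LEN = 4096  # hypothetical max sequence length
EOS_ID = 2      # hypothetical end-of-sequence token id

def pack(examples: Iterable[list[int]], max_len: int = MAX_LEN) -> list[list[int]]:
    """Greedily pack tokenized examples into sequences of at most max_len tokens."""
    packed, current = [], []
    for tokens in examples:
        tokens = tokens[: max_len - 1] + [EOS_ID]  # truncate oversize examples, mark boundary
        if current and len(current) + len(tokens) > max_len:
            packed.append(current)  # flush the sequence before it overflows
            current = []
        current.extend(tokens)
    if current:
        packed.append(current)
    return packed
```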
OpenHermes was trained on 242,000 entries of primarily GPT-4 generated data, from open datasets across the AI landscape, including:
- GPTeacher - General Instruct, Roleplay v1, Roleplay v2, and Code Instruct Datasets, by Teknium
- WizardLM (v1, evol_instruct 70k), by WizardLM Team/nlpxucan
- Airoboros GPT-4 (v1.0), by JonDurbin
- Camel-AI's domain expert datasets, by the Camel-AI Team
- CodeAlpaca, by Sahil2801
- GPT4-LLM and Unnatural Instructions, by Microsoft
Filtering included removal of OpenAI refusals, disclaimers, and "As an AI"-style examples, among other steps.
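A filtering pass like this can be approximated with a simple substring scan; the marker list and `response` field below are illustrative assumptions, not the exact filter used to build the dataset:

```python
# A rough sketch of the refusal/disclaimer filter. The phrase list and
# the "response" field name are assumptions for illustration only.
REFUSAL_MARKERS = [
    "as an ai",
    "as a language model",
    "i'm sorry, but",
    "openai",
]

def keep(example: dict) -> bool:
    """Drop any example whose response contains a refusal-style marker."""
    text = example["response"].lower()
    return not any(marker in text for marker in REFUSAL_MARKERS)

# cleaned = [ex for ex in raw_dataset if keep(ex)]
```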
The base dataset mix is identical to that of Nous-Hermes, minus the Nous-Instruct and PDACTL datasets, which were private.
The W&B project is public and can be examined at this link: https://wandb.ai/teknium1/openhermes/runs/openhermes-v2-qlora-7b-packed
Huge thank you to [main_horse](https://twitter.com/main_horse) for compute access, to a16z for sponsoring my work, and to all the dataset creators and other people whose work has contributed to this project!
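For convenience, here is a minimal inference sketch using the `transformers` library. The Alpaca-style prompt template is an assumption for illustration and is not documented in this card:

```python
# A minimal inference sketch with Hugging Face transformers.
# The prompt template below is an assumption (Alpaca-style), not a
# documented format for this model; adjust as needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "teknium/OpenHermes-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "### Instruction:\nExplain sample packing in one paragraph.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```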
## Benchmark Results
GPT4All Benchmark Set:
```
| Task |Version| Metric |Value | |Stderr|
|-------------|------:|--------|-----:|---|-----:|
|arc_challenge| 0|acc |0.4727|± |0.0146|
| | |acc_norm|0.4957|± |0.0146|
|arc_easy | 0|acc |0.7862|± |0.0084|
| | |acc_norm|0.7643|± |0.0087|
|boolq | 1|acc |0.7801|± |0.0072|
|hellaswag | 0|acc |0.5789|± |0.0049|
| | |acc_norm|0.7654|± |0.0042|
|openbookqa | 0|acc |0.3480|± |0.0213|
| | |acc_norm|0.4500|± |0.0223|
|piqa | 0|acc |0.7867|± |0.0096|
| | |acc_norm|0.7938|± |0.0094|
|winogrande | 0|acc |0.7048|± |0.0128|
Average: 0.679
```
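The reported average appears to be the mean of one score per task (acc_norm where reported, otherwise acc); a quick check under that assumption reproduces the 0.679 figure:

```python
# Recomputing the GPT4All average, assuming one score per task:
# acc_norm where available, plain acc otherwise.
scores = {
    "arc_challenge": 0.4957,  # acc_norm
    "arc_easy": 0.7643,       # acc_norm
    "boolq": 0.7801,          # acc
    "hellaswag": 0.7654,      # acc_norm
    "openbookqa": 0.4500,     # acc_norm
    "piqa": 0.7938,           # acc_norm
    "winogrande": 0.7048,     # acc
}
print(round(sum(scores.values()) / len(scores), 3))  # -> 0.679
```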
BigBench:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------------------------|------:|---------------------|-----:|---|-----:|
|bigbench_causal_judgement | 0|multiple_choice_grade|0.5000|± |0.0364|
|bigbench_date_understanding | 0|multiple_choice_grade|0.5908|± |0.0256|
|bigbench_disambiguation_qa | 0|multiple_choice_grade|0.3023|± |0.0286|
|bigbench_geometric_shapes | 0|multiple_choice_grade|0.1003|± |0.0159|
| | |exact_str_match |0.0000|± |0.0000|
|bigbench_logical_deduction_five_objects | 0|multiple_choice_grade|0.2520|± |0.0194|
|bigbench_logical_deduction_seven_objects | 0|multiple_choice_grade|0.1871|± |0.0148|
|bigbench_logical_deduction_three_objects | 0|multiple_choice_grade|0.3833|± |0.0281|
|bigbench_movie_recommendation | 0|multiple_choice_grade|0.2500|± |0.0194|
|bigbench_navigate | 0|multiple_choice_grade|0.5000|± |0.0158|
|bigbench_reasoning_about_colored_objects | 0|multiple_choice_grade|0.4370|± |0.0111|
|bigbench_ruin_names | 0|multiple_choice_grade|0.2679|± |0.0209|
|bigbench_salient_translation_error_detection | 0|multiple_choice_grade|0.2495|± |0.0137|
|bigbench_snarks | 0|multiple_choice_grade|0.5249|± |0.0372|
|bigbench_sports_understanding | 0|multiple_choice_grade|0.5406|± |0.0159|
|bigbench_temporal_sequences | 0|multiple_choice_grade|0.2470|± |0.0136|
|bigbench_tracking_shuffled_objects_five_objects | 0|multiple_choice_grade|0.1944|± |0.0112|
|bigbench_tracking_shuffled_objects_seven_objects| 0|multiple_choice_grade|0.1509|± |0.0086|
|bigbench_tracking_shuffled_objects_three_objects| 0|multiple_choice_grade|0.3833|± |0.0281|
Average: 0.3367
```
AGI Eval:
```
| Task |Version| Metric |Value | |Stderr|
|------------------------------|------:|--------|-----:|---|-----:|
|agieval_aqua_rat | 0|acc |0.2441|± |0.0270|
| | |acc_norm|0.2402|± |0.0269|
|agieval_logiqa_en | 0|acc |0.2458|± |0.0169|
| | |acc_norm|0.2965|± |0.0179|
|agieval_lsat_ar | 0|acc |0.2522|± |0.0287|
| | |acc_norm|0.2130|± |0.0271|
|agieval_lsat_lr | 0|acc |0.2745|± |0.0198|
| | |acc_norm|0.2686|± |0.0196|
|agieval_lsat_rc | 0|acc |0.2900|± |0.0277|
| | |acc_norm|0.2379|± |0.0260|
|agieval_sat_en | 0|acc |0.4466|± |0.0347|
| | |acc_norm|0.3738|± |0.0338|
|agieval_sat_en_without_passage| 0|acc |0.3738|± |0.0338|
| | |acc_norm|0.3301|± |0.0328|
|agieval_sat_math | 0|acc |0.2318|± |0.0285|
| | |acc_norm|0.1864|± |0.0263|
Average: 0.2683
```
TruthfulQA:
```
hf-causal-experimental (pretrained=teknium/OpenHermes-7B,dtype=float16), limit: None, provide_description: False, num_fewshot: 0, batch_size: 8
| Task |Version|Metric|Value | |Stderr|
|-------------|------:|------|-----:|---|-----:|
|truthfulqa_mc| 1|mc2 |0.4542|± |0.0148|
```
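The header line inside the block above records the lm-evaluation-harness configuration. A run like this could be reproduced via the harness's Python API; the argument names below follow the legacy EleutherAI harness and should be treated as assumptions rather than the actual run script:

```python
# A sketch of reproducing the TruthfulQA run with the (legacy) EleutherAI
# lm-evaluation-harness Python API. Argument names match that version of
# the harness and are assumptions, not copied from the original run.
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal-experimental",
    model_args="pretrained=teknium/OpenHermes-7B,dtype=float16",
    tasks=["truthfulqa_mc"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"]["truthfulqa_mc"])
```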
## Training procedure
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/Vzy7Z4Qcwj4hGJcQ2BT20.png)