---
license: llama2
base_model: meta-llama/Llama-2-7b-hf
tags:
- trl
- dpo
- generated_from_trainer
library_name: peft
model-index:
- name: Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V6
results: []
---
# Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V6
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on a preference dataset that is not recorded in this card.
It achieves the following results on the evaluation set:
- Loss: 1.0184
- Rewards/chosen: -1.6627
- Rewards/rejected: -1.4611
- Rewards/accuracies: 0.5
- Rewards/margins: -0.2016
- Logps/rejected: -142.2372
- Logps/chosen: -159.6465
- Logits/rejected: -0.2970
- Logits/chosen: -0.3265
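For interpretation, these follow the standard DPO definitions used by TRL (not specific to this run): the per-response reward is the implicit DPO reward

$$r(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},$$

so `Rewards/margins` is the mean of r(x, y_chosen) − r(x, y_rejected) over the evaluation set, and `Rewards/accuracies` is the fraction of pairs where the chosen response receives the higher reward. A negative margin, as here, means the model assigns a higher implicit reward to the rejected responses on average.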
## Model description
This repository contains a PEFT adapter for [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), trained with Direct Preference Optimization (DPO) via the TRL library. Only the adapter weights are stored here; the base model must be loaded separately.
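A minimal inference sketch, assuming the adapter in this repository is applied on top of the base model with PEFT (the adapter id below is abbreviated and may need the repository owner's prefix):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
# Abbreviated; prepend the hub owner, e.g. "owner/Llama-2-7b-hf-DPO-...".
adapter_id = "Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V6"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
# Apply the DPO-trained adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(model, adapter_id)

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```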
## Intended uses & limitations
No specific intended uses have been documented. As a derivative of Llama 2, this adapter is subject to the base model's Llama 2 license.
## Training and evaluation data
The training and evaluation datasets are not recorded in this card.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto TRL's `DPOConfig` follows the list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
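For reference, a minimal sketch of how these values map onto TRL's `DPOConfig`/`DPOTrainer` (TRL 0.9-era API; the exact TRL version is not recorded). The dataset, LoRA settings, and DPO `beta` are assumptions, since the card does not record them:

```python
# Hypothetical reconstruction from the hyperparameters above. Dataset name,
# LoRA settings, and DPO beta are NOT recorded in this card; they are
# placeholders only.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base_id = "meta-llama/Llama-2-7b-hf"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama 2 ships without a pad token

# DPO expects a preference dataset with "prompt", "chosen", "rejected" columns.
train_dataset = load_dataset("some/preference-dataset", split="train")  # placeholder

peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=32)  # assumed adapter settings

args = DPOConfig(
    output_dir="Llama-2-7b-hf-DPO-LookAhead-5_TTree1.4_TT0.9_TP0.7_TE0.2_V6",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=2,  # total train batch size 2 * 2 = 4
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3,
    # beta (the DPO temperature) is not recorded in this card; TRL defaults to 0.1.
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with a PEFT adapter, TRL uses the adapter-disabled base as the reference
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```

The listed optimizer (Adam with betas=(0.9,0.999), epsilon=1e-08) matches the Trainer default, so it needs no explicit setting in the sketch.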
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6746 | 0.3012 | 75 | 0.6658 | 0.0862 | 0.0321 | 0.75 | 0.0541 | -127.3055 | -142.1577 | 0.1821 | 0.1663 |
| 0.5925 | 0.6024 | 150 | 0.6506 | 0.1218 | 0.0304 | 0.5833 | 0.0914 | -127.3224 | -141.8020 | 0.1565 | 0.1401 |
| 0.7335 | 0.9036 | 225 | 0.7279 | -0.0626 | -0.0395 | 0.5 | -0.0231 | -128.0216 | -143.6459 | 0.1275 | 0.1103 |
| 0.6498 | 1.2048 | 300 | 0.7880 | -0.2917 | -0.2254 | 0.4167 | -0.0663 | -129.8807 | -145.9371 | 0.0678 | 0.0485 |
| 0.386 | 1.5060 | 375 | 0.7303 | -0.2014 | -0.2339 | 0.5 | 0.0325 | -129.9658 | -145.0339 | 0.0325 | 0.0140 |
| 0.2307 | 1.8072 | 450 | 0.8159 | -0.5206 | -0.4793 | 0.5 | -0.0412 | -132.4201 | -148.2257 | -0.0582 | -0.0797 |
| 0.1034 | 2.1084 | 525 | 0.9133 | -1.0254 | -0.8918 | 0.4167 | -0.1335 | -136.5451 | -153.2736 | -0.2025 | -0.2290 |
| 0.284 | 2.4096 | 600 | 1.0153 | -1.5972 | -1.3870 | 0.4167 | -0.2102 | -141.4962 | -158.9917 | -0.2790 | -0.3083 |
| 0.0599 | 2.7108 | 675 | 1.0184 | -1.6627 | -1.4611 | 0.5 | -0.2016 | -142.2372 | -159.6465 | -0.2970 | -0.3265 |
### Framework versions
- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1