---
library_name: transformers
license: mit
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
base_model:
- meta-llama/Llama-3.1-8B-Instruct
---

This is the bandit-reward-based PPO model introduced in the preprint **Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models** (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
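
Since the card lists `library_name: transformers` and the base model is `meta-llama/Llama-3.1-8B-Instruct`, the checkpoint should load as a standard causal LM with the Llama 3.1 chat template. Below is a minimal usage sketch under that assumption; the repo ID is a placeholder (this card does not state it), and the generation settings are illustrative, not from the paper.

```python
# Minimal usage sketch, assuming the checkpoint is a standard
# transformers causal LM. The repo ID below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<this-model-repo-id>"  # placeholder: replace with the actual Hub ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# PPO-tuned from Llama-3.1-8B-Instruct, so its chat template should apply.
messages = [{"role": "user", "content": "Explain RLHF in one paragraph."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Illustrative sampling settings, not the paper's evaluation configuration.
outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```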