---
library_name: transformers
license: mit
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
base_model:
- RLHFlow/LLaMA3-SFT-v2
---
This is the PPO model trained with token-wise rewards, introduced in the preprint **Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models** (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
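
Below is a minimal usage sketch, assuming the model exposes the standard `transformers` causal-LM interface with a chat template (consistent with its LLaMA3-SFT base); the repository ID is a placeholder and should be replaced with this model's actual Hugging Face ID.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: substitute the actual Hugging Face repository ID of this model.
MODEL_ID = "path/to/this-model"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")

# Format a single-turn prompt with the model's chat template and generate a response.
messages = [{"role": "user", "content": "Explain RLHF in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```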