---
library_name: transformers
license: mit
datasets:
- hendrydong/preference_700K
---

This is the token-wise reward model introduced in the preprint **Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models** (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
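As a rough illustration of how a token-wise reward model might be queried through the `transformers` API, the sketch below loads the checkpoint and scores a prompt/response pair. The repo id, the `AutoModelForTokenClassification` head, and the prompt formatting are assumptions made for the example; please refer to the paper and the linked repository for the exact model class and usage.

```python
# Minimal sketch, assuming the checkpoint loads with a token-classification head
# that emits one scalar reward per token. The repo id below is hypothetical.
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model_id = "path/to/this-token-wise-reward-model"  # hypothetical; replace with the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

prompt = "What is the capital of France?"
response = "The capital of France is Paris."
inputs = tokenizer(prompt + "\n" + response, return_tensors="pt")

with torch.no_grad():
    # Assumes num_labels == 1, giving one score per token; how these per-token
    # scores are segmented and aggregated is defined in the paper/repository.
    token_rewards = model(**inputs).logits.squeeze(-1)

print(token_rewards.shape)  # (1, sequence_length)
```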