---
library_name: transformers
license: mit
datasets:
  - argilla/ultrafeedback-binarized-preferences-cleaned
base_model:
  - RLHFlow/LLaMA3-SFT-v2
---

This is the token-wise reward-based PPO model introduced in the preprint *Segmenting Text and Learning Their Rewards for Improved RLHF in Language Models* (https://arxiv.org/abs/2501.02790). For more details, please visit our repository at https://github.com/yinyueqin/DenseRewardRLHF-PPO.
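
Since the card declares `library_name: transformers` and the base model is a LLaMA3 causal LM, the policy should be loadable through the standard `transformers` generation API. The following is a minimal sketch, not an official usage example: `MODEL_ID` is a placeholder for this repository's actual id, and the chat template is assumed to be inherited from the SFT base model.

```python
# Minimal usage sketch (assumptions: this repo hosts a causal-LM policy and
# ships a chat template; replace MODEL_ID with the actual repository id).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "yyqoni/<this-model>"  # placeholder, not the real repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # assumes a bf16-capable GPU
    device_map="auto",           # requires the `accelerate` package
)

messages = [{"role": "user", "content": "Explain RLHF in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```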