---
library_name: transformers
license: mit
datasets:
  - hendrydong/preference_700K
base_model:
  - RLHFlow/LLaMA3-SFT-v2
pipeline_tag: text-classification
---

# rlhflow-llama-3-sft-segment Model Card

## Method

The segment reward model assigns rewards to semantically meaningful text segments, which are delimited dynamically using an entropy-based threshold. It is trained on binary human preference labels by optimizing a Bradley-Terry loss that aggregates segment rewards by averaging, as sketched below.
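
For intuition only, here is a minimal sketch (not the released training code) of how entropy-thresholded segmentation and segment-averaged rewards could feed a Bradley-Terry preference loss; the function names and the exact thresholding rule are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def segment_boundaries(token_entropies: torch.Tensor, threshold: float) -> list:
    """Illustrative segmentation: start a new segment whenever the per-token
    predictive entropy exceeds the threshold."""
    boundaries = [0]
    for t, h in enumerate(token_entropies.tolist()):
        if t > 0 and h > threshold:
            boundaries.append(t)
    boundaries.append(len(token_entropies))
    return boundaries


def sequence_reward(token_rewards: torch.Tensor, boundaries: list) -> torch.Tensor:
    """Mean token reward within each segment, then averaged over segments
    to give a single sequence-level reward."""
    seg_rewards = [token_rewards[s:e].mean() for s, e in zip(boundaries[:-1], boundaries[1:])]
    return torch.stack(seg_rewards).mean()


def bradley_terry_loss(reward_chosen: torch.Tensor, reward_rejected: torch.Tensor) -> torch.Tensor:
    """Standard Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)."""
    return -F.logsigmoid(reward_chosen - reward_rejected)
```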

## Architecture

*Figure: architecture of the segment reward model.*

## Training

The rlhflow-llama-3-sft-segment model is fine-tuned from RLHFlow/LLaMA3-SFT-v2 on the hendrydong/preference_700K dataset.
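
As a quick-start sketch (not part of the original card), the snippet below shows one plausible way to score a response with this model via transformers; the Hub repo id, the chat-template usage, and the single-logit reward head are assumptions that may need adjusting for the released checkpoint.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed Hub repo id; replace with the actual path of this model.
model_id = "yyqoni/rlhflow-llama-3-sft-segment"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, torch_dtype=torch.bfloat16)
model.eval()

# Score a single (prompt, response) pair using the tokenizer's chat template.
messages = [
    {"role": "user", "content": "Explain the Bradley-Terry model in one sentence."},
    {"role": "assistant", "content": "It models pairwise preferences as a logistic function of score differences."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

with torch.no_grad():
    # Assumes a single-logit (num_labels=1) reward head, as is typical for reward models.
    reward = model(input_ids).logits[0].item()
print(f"reward: {reward:.4f}")
```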

## Citation

If you find this model or our research useful, please consider citing our paper:

```bibtex
@misc{yin2025segmentingtextlearningrewards,
      title={Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model},
      author={Yueqin Yin and Shentao Yang and Yujia Xie and Ziyi Yang and Yuting Sun and Hany Awadalla and Weizhu Chen and Mingyuan Zhou},
      year={2025},
      eprint={2501.02790},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.02790},
}
```