---
library_name: transformers
license: mit
datasets:
- hendrydong/preference_700K
base_model:
- RLHFlow/LLaMA3-SFT-v2
pipeline_tag: text-classification
---

# rlhflow-llama-3-sft-segment Model Card

- **Paper:** [Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model](https://arxiv.org/abs/2501.02790)
- **Model:** [yyqoni/rlhflow-llama-3-sft-8b-v2-segment-rm-700k](https://huggingface.co/yyqoni/rlhflow-llama-3-sft-8b-v2-segment-rm-700k)

## Method

The segment reward model assigns rewards to semantically meaningful text segments, which are delimited dynamically using an entropy-based threshold. It is trained on binary human preference labels by optimizing a Bradley-Terry loss in which each response's reward is the average of its segment rewards; an illustrative sketch of this objective is given below.

## Architecture
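As a rough illustration of the training objective described in the Method section above, the following minimal PyTorch sketch (hypothetical function and variable names, not the released training code) averages per-segment rewards for a chosen and a rejected response and applies a Bradley-Terry preference loss:

```python
# Minimal sketch of the segment-level Bradley-Terry objective (assumed names,
# not the repository's actual implementation).
import torch
import torch.nn.functional as F

def bradley_terry_loss(chosen_segment_rewards: torch.Tensor,
                       rejected_segment_rewards: torch.Tensor) -> torch.Tensor:
    """Each input is a 1-D tensor of per-segment rewards for one response."""
    r_chosen = chosen_segment_rewards.mean()      # aggregate segments by averaging
    r_rejected = rejected_segment_rewards.mean()
    # Standard Bradley-Terry preference loss: -log sigmoid(r_chosen - r_rejected)
    return -F.logsigmoid(r_chosen - r_rejected)

# Example usage with dummy segment rewards
loss = bradley_terry_loss(torch.tensor([0.4, 1.2, 0.7]),
                          torch.tensor([0.1, -0.3]))
```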