---
library_name: transformers
license: mit
datasets:
- hendrydong/preference_700K
base_model:
- microsoft/Phi-3-mini-4k-instruct
pipeline_tag: text-classification
---

# phi-instruct-segment Model Card

- **Paper:** [Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model](https://arxiv.org/abs/2501.02790)
- **Model:** [yyqoni/Phi-3-mini-4k-instruct-segment-rm-700k](https://huggingface.co/yyqoni/Phi-3-mini-4k-instruct-segment-rm-700k)

## Method

The segment reward model assigns rewards to semantically meaningful text segments, which are delimited dynamically with an entropy-based threshold. It is trained on binary human preference labels, optimizing a Bradley-Terry loss in which each response's segment rewards are aggregated by averaging.

## Architecture
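As a rough illustration of how the pieces above fit together, the PyTorch sketch below scores one response per segment with a scalar reward head on the backbone's hidden states and trains it with the averaged Bradley-Terry objective. The hidden size, entropy threshold, boundary rule, and all function names are assumptions made for the example, not the released implementation.

```python
# Illustrative sketch only: shapes, threshold, and names are assumptions.
import torch
import torch.nn.functional as F

hidden_size = 16                               # stand-in for the backbone hidden size
reward_head = torch.nn.Linear(hidden_size, 1)  # scalar reward head over hidden states

def segment_rewards(hidden_states: torch.Tensor,
                    token_entropies: torch.Tensor,
                    threshold: float = 2.0) -> torch.Tensor:
    """Score one response: cut a segment wherever the token-level predictive
    entropy exceeds the threshold, and read a reward from the hidden state
    of each segment's final token."""
    boundary_idx = (token_entropies > threshold).nonzero(as_tuple=True)[0]
    if boundary_idx.numel() == 0 or boundary_idx[-1] != hidden_states.size(0) - 1:
        # always close the last segment at the final token of the response
        boundary_idx = torch.cat(
            [boundary_idx, torch.tensor([hidden_states.size(0) - 1])]
        )
    return reward_head(hidden_states[boundary_idx]).squeeze(-1)

def bradley_terry_loss(chosen_seg_rewards: torch.Tensor,
                       rejected_seg_rewards: torch.Tensor) -> torch.Tensor:
    """Average each response's segment rewards, then apply the pairwise
    Bradley-Terry (logistic) preference loss."""
    return -F.logsigmoid(chosen_seg_rewards.mean() - rejected_seg_rewards.mean())

# Toy forward/backward pass with random "hidden states" and entropies
chosen = segment_rewards(torch.randn(10, hidden_size), torch.rand(10) * 4)
rejected = segment_rewards(torch.randn(8, hidden_size), torch.rand(8) * 4)
loss = bradley_terry_loss(chosen, rejected)
loss.backward()  # gradients flow into the reward head
```

One motivation for averaging rather than summing is that it keeps the response-level reward comparable across responses with different numbers of segments.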