---
datasets:
- PKU-Alignment/PKU-SafeRLHF
language:
- en
tags:
- reinforcement-learning-from-human-feedback
- reinforcement-learning
- beaver
- safety
- llama
- ai-safety
- deepspeed
- rlhf
- alpaca
library_name: safe-rlhf
---

# 🦫 Beaver's Reward Model

## Model Details

The Beaver reward model is a preference model trained on the [PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It provides the helpfulness signal in the safe RLHF algorithm, helping the Beaver model become more helpful.

- **Developed by:** the [PKU-Alignment](https://github.com/PKU-Alignment) Team.
- **Model Type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license.
- **Fine-tuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971), [Alpaca](https://github.com/tatsu-lab/stanford_alpaca).
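
Like other RLHF reward models, it is trained on preference pairs with a pairwise ranking objective that pushes the score of the human-preferred response above the score of the rejected one. The snippet below is an illustrative sketch of that objective, not the project's actual training code; the function name and arguments are assumptions.

```python
import torch
import torch.nn.functional as F

def pairwise_ranking_loss(better_end_scores: torch.Tensor,
                          worse_end_scores: torch.Tensor) -> torch.Tensor:
    """Illustrative reward-model objective: -log sigmoid(r_better - r_worse).

    Both arguments are the scalar end-of-sequence scores of the preferred
    and rejected responses to the same prompt.
    """
    return -F.logsigmoid(better_end_scores - worse_end_scores).mean()
```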

## Model Sources

- **Repository:** <https://github.com/PKU-Alignment/safe-rlhf>
- **Beaver:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0>
- **Dataset:** <https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF>
- **Reward Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-reward>
- **Cost Model:** <https://huggingface.co/PKU-Alignment/beaver-7b-v2.0-cost>
- **Dataset Paper:** <https://arxiv.org/abs/2307.04657>
- **Paper:** <https://arxiv.org/abs/2310.12773>

## How to Use the Reward Model

```python
import torch
from transformers import AutoTokenizer
from safe_rlhf.models import AutoModelForScore

model = AutoModelForScore.from_pretrained('PKU-Alignment/beaver-7b-v2.0-reward', torch_dtype=torch.bfloat16, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained('PKU-Alignment/beaver-7b-v2.0-reward')

# The model scores text written in the safe-rlhf conversation format:
# 'BEGINNING OF CONVERSATION: USER: {prompt} ASSISTANT:{response}'
text = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:Hello! How can I help you today?'

inputs = tokenizer(text, return_tensors='pt')  # input_ids and attention_mask
output = model(**inputs)
print(output)

# ScoreModelOutput(
#     scores=tensor([[[-5.5000],
#                     [-0.1650],
#                     [-4.0625],
#                     [-0.0522],
#                     [-1.0859],
#                     [-0.4277],
#                     [-2.3750],
#                     [-2.5781],
#                     [-1.0859],
#                     [-1.1250],
#                     [-0.3809],
#                     [-1.0000],
#                     [-1.2344],
#                     [-0.7344],
#                     [-1.3438],
#                     [-1.2578],
#                     [-0.4883],
#                     [-1.1953],
#                     [-1.1953],
#                     [ 0.0908],
#                     [-0.8164],
#                     [ 0.1147],
#                     [-0.1650],
#                     [-0.4238],
#                     [ 0.3535],
#                     [ 1.2969],
#                     [ 0.7461],
#                     [ 1.8203]]], grad_fn=<ToCopyBackward0>),
#     end_scores=tensor([[1.8203]], grad_fn=<ToCopyBackward0>),
#     last_hidden_state=tensor([[[ 0.4766, -0.1787, -0.5312, ..., -0.0194,  0.2773,  0.7500],
#                                [ 0.5625,  2.0000,  0.8438, ...,  1.8281,  1.0391, -0.6914],
#                                [ 0.6484,  0.0388, -0.7227, ..., -0.4688,  0.2754, -1.4688],
#                                ...,
#                                [ 0.2598,  0.6758, -0.6289, ..., -1.0234,  0.5898,  1.4375],
#                                [ 1.7500, -0.0913, -1.1641, ..., -0.8438,  0.4199,  0.8945],
#                                [ 1.8516, -0.0684, -1.1094, ...,  0.1885,  0.4980,  1.1016]]],
#                              dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_last_hidden_state=tensor([[ 1.8516, -0.0684, -1.1094, ...,  0.1885,  0.4980,  1.1016]],
#                                  dtype=torch.bfloat16, grad_fn=<ToCopyBackward0>),
#     end_index=tensor([27])
# )
```
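
In the output, `scores` holds a per-token reward trace, while `end_scores` is the single scalar reward for the whole response, taken at `end_index` (the last token). That scalar is what you would use to rank candidate replies to the same prompt. Below is a minimal sketch of such a comparison, reusing the `model` and `tokenizer` loaded above; the candidate strings are made up for illustration.

```python
prompt = 'BEGINNING OF CONVERSATION: USER: hello ASSISTANT:'
candidates = [
    prompt + 'Hello! How can I help you today?',
    prompt + 'What do you want?',
]

with torch.no_grad():  # scoring only, no gradients needed
    rewards = []
    for text in candidates:
        inputs = tokenizer(text, return_tensors='pt')
        # end_scores has shape (batch_size, 1); extract the scalar reward
        rewards.append(model(**inputs).end_scores.item())

# The candidate with the higher reward is the one the model prefers.
best = candidates[rewards.index(max(rewards))]
print(rewards, best, sep='\n')
```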