Columns: user (string, 3-28 chars) · created_at (timestamp[us]) · body (string, 1-173k chars) · issue_number (int64, 1-2.54k)
lvwerra
2021-08-09T08:02:36
Reinforcement learning is designed for sequential decision problems and thus works well for causal language modeling (such as GPT-2). BERT, however, does not fall into that category, since it makes a one-shot prediction rather than a sequential prediction as in language modeling. So I don't think it is straightforward to adapt this approach.
21
lvwerra
2021-08-09T08:06:53
As you can see later in the code, the advantages are used for the loss calculation, not the returns: https://github.com/lvwerra/trl/blob/750f5fd5329bb81c79b00243c4c8923ac14981d5/trl/ppo.py#L240
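For context, here is a minimal sketch of how the advantages (rather than the returns) enter a clipped PPO policy loss; this is illustrative PyTorch with names of my choosing, not trl's exact implementation:

```python
import torch

def clipped_ppo_policy_loss(logprobs, old_logprobs, advantages, clip_range=0.2):
    """Clipped PPO objective: the advantages, not the raw returns, weight the update."""
    ratio = torch.exp(logprobs - old_logprobs)                  # pi_new / pi_old per token
    unclipped = -advantages * ratio
    clipped = -advantages * torch.clamp(ratio, 1.0 - clip_range, 1.0 + clip_range)
    return torch.max(unclipped, clipped).mean()                 # pessimistic (clipped) bound, averaged
```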
19
lvwerra
2021-03-18T18:07:12
Yes, that is true - well spotted! I'll add it as a TODO.
18
lvwerra
2021-08-09T08:04:34
Interesting - must be an issue with the newer versions of `pip`. I will likely drop the dependency on `simpletransformers` in the next release.
17
lvwerra
2022-01-01T16:29:25
Dropped `simpletransformers` requirement in #25.
17
vblagoje
2021-02-26T14:11:04
@lvwerra I tried this branch on both imdb ppo notebooks (the basic ppo sentiment training and the controlled sentiment ppo). They both work as expected, please try it as well. Let me know if any other checks should be done.
16
lvwerra
2021-02-26T14:55:54
Awesome! Did you also use Weights & Biases? In case you did, would you mind sharing the logs?
16
vblagoje
2021-02-26T16:04:18
Yes, I did but I deleted the first report for `04-gpt2-sentiment-ppo-training.ipynb`. Here is the report for [05-gpt2-sentiment-control.ipynb](https://wandb.ai/vblagoje/gpt2-ctrl/reports/05-gpt2-sentiment-control-ipyn--Vmlldzo0OTI4MjA?accessToken=0ogcb46btflg488lfuw1zu3j46sgsl3v83u45xdsloijmtfobav7dqmqq8s75trw)
16
lvwerra
2021-01-17T15:18:43
1. The model outputs predictions for the next token, whereas the `log_probs` are the log probabilities for the current token. This simply aligns the two.
2. The main motivation was to decouple the generation from the training as much as possible. Since generation takes a fraction of the time of the backward pass, the speedup would be minimal, and this way the PPOTrainer interface is cleaner.
3. That's possible. It could be that the `transformers` function `generate` handles this, but I had to implement my own, simple decoding function since the model would exploit several aspects of it. See the comments [here](https://github.com/lvwerra/trl/blob/master/nbs/01-gpt2-with-value-head.ipynb) about the custom response function. Feel free to make a PR if you can fix the weaknesses and improve the performance.

Cheers, Leandro
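To make point 1 concrete, the alignment is essentially a one-token shift; a rough sketch (the function name is mine, not trl's):

```python
import torch
import torch.nn.functional as F

def logprobs_of_tokens(logits, input_ids):
    """Shift next-token logits by one position so they line up with the tokens they predict."""
    logp = F.log_softmax(logits[:, :-1, :], dim=-1)             # prediction made at position t is for t+1
    return torch.gather(logp, 2, input_ids[:, 1:].unsqueeze(-1)).squeeze(-1)
```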
15
lvwerra
2020-12-17T08:14:28
Hi! You can actually control these parameters. Later in the paper they also talk about dynamically adjusting beta. You can control this through the keyword arguments `"adap_kl_ctrl"` and `"init_kl_coef"` when initialising the `PPOTrainer`. You can also adjust the target KL-divergence through `"target"` and the windowing through `"horizon"` as well as all the PPO parameters (see [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L59)).
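The keyword arguments mentioned above can be collected into a config dict; a hypothetical example (the numeric values are illustrative only, not recommendations):

```python
# Illustrative configuration for the KL controller; values are not recommendations.
ppo_config = {
    "adap_kl_ctrl": True,   # adapt the KL coefficient (beta) dynamically during training
    "init_kl_coef": 0.2,    # initial beta
    "target": 6.0,          # target KL-divergence for the adaptive controller
    "horizon": 10000,       # window over which beta is adjusted
}
# ppo_trainer = PPOTrainer(model, ref_model, **ppo_config)  # model and ref_model loaded elsewhere
```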
14
danyaljj
2020-12-04T23:27:52
Side note: it'd be good to update the `transformers` dependency to the latest (v4.0.0).
13
lvwerra
2020-12-17T08:18:50
You are right, when I have time I'll upgrade it to v4.0.0. I haven't tested it, but I suspect that if you take a model with a text generation head it should work. Note that you need to add a value head to your model architecture (see [here](https://github.com/lvwerra/trl/blob/master/trl/gpt2.py)).
13
danyaljj
2020-12-18T00:33:25
I can try it. Other than running with no errors, what other ways can I test that the code is working fine? Is there a benchmark or a quantitative way of verifying the code?
13
lvwerra
2021-01-17T15:12:22
Monitoring the rewards on the IMDb dataset would be a good start. For GPT-2 it takes only 1-2h to train.
13
lvwerra
2020-11-07T22:41:01
Hi @zitterbewegung, could you share the stack trace? Does the machine have access to the internet?
12
zitterbewegung
2020-11-07T23:33:34
The weird part is that this Colab notebook won't crash: https://colab.research.google.com/drive/1GE_riqtg4EiRt7BuAsrzKwTNDUIXCRMZ?usp=sharing
12
zitterbewegung
2020-11-07T23:41:06
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.51.05    Driver Version: 450.51.05    CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  TITAN RTX           On   | 00000000:0B:00.0  On |                  N/A |
| 41%   35C    P8    15W / 280W |    361MiB / 24219MiB |      1%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2878      G   /usr/lib/xorg/Xorg                199MiB |
|    0   N/A  N/A      3206      G   /usr/bin/gnome-shell              156MiB |
|    0   N/A  N/A    102465      G   /usr/lib/firefox/firefox            3MiB |
+-----------------------------------------------------------------------------+
```
12
zitterbewegung
2020-11-08T02:38:03
There was no stack trace and the machine had access to the internet.
12
lvwerra
2020-11-08T13:00:33
How exactly did the code break? You did not get any error at all? This part of the code is not `trl` specific so it might be a `transformers` library issue. Have you tried just loading the tokenizer?
12
zitterbewegung
2020-11-08T13:39:22
I set up my environment on Ubuntu 20.04 and also on AWS SageMaker using PyTorch with Python 3.6. I only get the error [segmentation fault]. Would it be safe to use a more recent version of transformers?
12
lvwerra
2020-11-08T15:57:36
I don't quite get what the `segmentation fault` is. Is that an AWS-specific error or a Python error? If it is a Python error, could you share the full error message? Loading pretrained models is possible in `transformers==2.6.0`, but you could try it to be sure. However, there are issues with the `trl` library in more recent versions (see #9).
12
lvwerra
2020-11-08T15:59:19
It could be related to this: https://github.com/huggingface/transformers/issues/4857
12
zitterbewegung
2020-11-08T18:09:22
Okay, I guess it was occurring since I was using Anaconda, or an environment that had a higher version of transformers.
12
zitterbewegung
2020-11-08T18:21:54
Needed to use requirements.txt in a virtualenv instead of conda
12
lvwerra
2020-10-30T14:25:19
Hi @jpanaro, so first of all, the reward is only given once the sequence generation is complete, which is why the score/reward is only added to the last token. You are right that it is then discounted and added to the previous tokens as well. You can find the equations in the original PPO [paper](https://arxiv.org/pdf/1707.06347.pdf), equations (11) and (12). To simplify calculations, this is done from back to front, starting with the last token. It only makes sense to add the reward to the last token since, for example, the BLEU score is only valid for the complete sequence. This is similar to Atari games, where you only get a reward after you complete a level, and the advantage equation then discounts the reward back to previous actions. How strong the discount is depends on the value of `lambda`. Does that answer your question?
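For reference, the back-to-front discounting described above looks roughly like this; an illustrative sketch of generalised advantage estimation with hypothetical names, not trl's exact code:

```python
import torch

def discounted_advantages(rewards, values, gamma=1.0, lam=0.95):
    """GAE (PPO paper, eqs. (11)-(12)): sweep from the last token backwards."""
    T = rewards.shape[-1]
    advantages = torch.zeros_like(rewards)
    lastgaelam = 0.0
    for t in reversed(range(T)):
        next_value = values[..., t + 1] if t < T - 1 else 0.0   # no bootstrap after the last token
        delta = rewards[..., t] + gamma * next_value - values[..., t]
        lastgaelam = delta + gamma * lam * lastgaelam           # lambda controls how far rewards propagate back
        advantages[..., t] = lastgaelam
    return advantages
```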
11
jpanaro
2020-10-30T14:58:10
That makes a lot more sense, thanks! I am just trying to resolve my negative KL-divergence problems, which are causing my model to slowly diverge and produce garbage. Currently:
1. I have tried performing top_p_top_k filtering on the logits, which has somewhat helped, but I am limited by the fact that my model's decoder produces all outputs and hidden states at once, so I cannot perform the filtering as it unrolls, only afterwards, which I feel limits its effectiveness.
2. I have also tried to zero out all of the logits following the first EOS token generated, but this just led to identical performance to the top_k_top_p filtering. I have also recently discovered that this might be pointless, as when the logprobs are calculated, the indices that used to be all zeros are now filled with nonzero values.
3. Lastly, regardless of the actual ground-truth length, my model will produce logprobs up to the max length (29), so my final consideration is to find the first EOS, add the reward to that index, and then cut all following indices or set them to zero.

Any thoughts on these methods or any solutions I may have missed? I would greatly appreciate your input!
11
lvwerra
2020-10-30T15:07:52
I had issues with negative KL-divergence twice! Both times it was related to the generation function: the model found ways to exploit some of its functionality, such as the padding tokens or the fact that if the `min_length` is not yet reached the logprob of the `EOS` token is set to zero. The model can achieve negative KL-divergence by assigning astronomically small logprobs to the tokens that the generation function sets "manually". I tried to summarize this at the end of [this notebook](https://github.com/lvwerra/trl/blob/master/nbs/01-gpt2-with-value-head.ipynb). My suggestion would be to use greedy decoding and then modify the reward in your code (e.g. adding an extra term to BLEU) if the EOS token appears too early. I hope this helps!
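As a sketch of the "modify the reward if EOS appears too early" suggestion (the names and thresholds here are hypothetical, chosen only for illustration):

```python
def shaped_reward(bleu_score, response_ids, eos_id, min_len=5, penalty=1.0):
    """Subtract a penalty from the sequence-level reward if EOS shows up before min_len tokens."""
    eos_positions = [i for i, tok in enumerate(response_ids) if tok == eos_id]
    too_early = len(eos_positions) > 0 and eos_positions[0] < min_len
    return (bleu_score - penalty) if too_early else bleu_score
```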
11
lvwerra
2020-10-30T15:10:56
As a remark: if you sample properly from your tuned model you should never achieve negative KL-divergence. This indicates that something is wrong in the way you generate the sequences.
11
jpanaro
2020-10-30T21:10:38
Yeah, I saw when I was reading through the notebooks how those issues cropped up. Since my model uses the JoeyNMT library and not the HuggingFace library, I am sure there are some differences in generation, so I guess I will have to find those differences myself. I will give that a go! I think it could help if I penalize the early generation of the EOS token, as well as the secondary problem of the model producing too many periods prior to the EOS token. Regarding the last remark: when you say generate sequences, do you mean how the model actually produces the decoder hidden states that compose the logits, or do you mean things like how you produce the logprobs or how you apply a probability distribution to those logprobs (i.e. greedy vs categorical vs multinomial)?
11
lvwerra
2020-11-01T09:20:02
So the output of the model is logits, and with a `softmax` function you can transform these into probabilities. If you just sample from these probabilities you should get positive KL-divergence. What the generation functions (at least in the transformers library) do is apply extra tricks, like overwriting some probabilities with 0. This leaves the model a backdoor to exploit: by assigning very low probabilities to these overwritten outputs, it achieves negative KL-divergence, which means positive rewards for the model. To avoid this I wrote a custom generation function [`respond_to_batch`](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/gpt2.py#L113) to have full control.
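As a minimal illustration of "just sampling from the probabilities" without any overwriting tricks; this is a sketch that assumes the model returns its LM logits as the first output, and it is not trl's `respond_to_batch` itself:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_next_token(model, input_ids):
    """Sample the next token straight from the softmax, never overwriting any probabilities."""
    logits = model(input_ids)[0][:, -1, :]        # assumes the first output holds the LM logits
    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```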
11
jpanaro
2020-11-03T08:19:06
I managed to fix the negative KL-divergence problem. It turns out it was just a sampling alignment issue stemming from the greedy decoding.

Unfortunately, my new problem is that my reward seems to decrease as the quality of the sentences is also decreasing. This leads to a direct hit to my BLEU score, which means the reward can only shrink as the fine-tuning goes on. I think one of the issues is that the model I am starting with already has a majority of the training BLEU-4 scores in the 90-100 range, so improvement on the training set is very difficult, despite the BLEU-4 scores for the validation set being ~19 at most.

I tried casting all my BLEU scores in the 80-100 range to the range -4.5 to 4.5, with scores less than 80 getting set to -4.0, to mimic your positive sentiment score range. I'm fairly certain just using the raw scores in the loss calculation would blow the rewards out of proportion, and it naturally has no negative rewards, so I didn't think it would penalize the model for lower BLEU scores enough. That ended up looking like this: ![Screenshot from 2020-11-03 02-55-34](https://user-images.githubusercontent.com/49861328/97960806-1cbb7a00-1d80-11eb-93de-c855a3804e9e.png)

I thought this would rectify the score-decreasing issue, but unfortunately the reward_mean still immediately sinks below 0 after the first epoch (if it did not start there) and then bounces between -0.05 and -0.15. I think part of the problem is my spiky KL value, which now ranges from 1.5 to 16, and my lowish average score value, which stabilizes at a little under 0.5 after about 10 epochs. Have you dealt with positive KL but negative rewards like this?
11
lvwerra
2020-11-05T13:45:11
Unfortunately, I have not encountered that specific problem. Within the PPO trainer the advantages are whitened, meaning the mean is set to zero and the standard deviation to one. The difference between training and validation score seems to indicate that you might be overfitting the training set. Have you tried reducing that? Also, you could try decreasing the KL factor and see if it improves the problem. Since you are directly measuring the quality of the text with BLEU, it might not be so important to constrain the language model.
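Whitening here means roughly the following (an illustrative helper, not trl's exact function):

```python
import torch

def whiten(values, eps=1e-8):
    """Whitening as described: zero mean, unit standard deviation across the batch."""
    return (values - values.mean()) / (values.std() + eps)
```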
11
jpanaro
2020-11-06T20:31:21
Ah, OK, that makes sense. So when they are whitened, it does not matter how large the reward is, it will be distributed across the entire advantage equally? I'm just worried the massive numbers (92, 88, 100) will wash everything else out if I don't "dampen" them. Overfitting seems to be a major problem, possibly the main contributor to the lack of performance gain. I think I will cut the model's learning process to fewer epochs and give PPO a chance to explore more solutions. This might be worth experimenting with, seeing as I want the model to explore a little more anyway. Thanks for the tips!
11
lvwerra
2020-11-07T14:26:07
It will not be distributed but scaled down, such that the distribution in a batch is normalised. I think this should get rid of the characteristic scale of your scores in the PPO training, but you might want to check whether this is really true. If you also use Weights & Biases you can monitor all scores from the dashboard. You might also be able to see it in the loss scale and distribution. In any case, good luck!
11
lvwerra
2020-10-24T13:23:15
Hi @jpanaro, glad you are still working on this project. See my answers to your questions below:
1. The reason we run PPO for 4 epochs is that we want to make the most of the data we gathered. If generating samples were cheap, we could train for just one epoch. Also, we don't want to overfit the current batch, so we only train for 4 epochs. Naturally, this may vary depending on your application, and since it is a parameter you can easily experiment with other values. The reason you don't make the batch much bigger is that after training for a couple of epochs the data becomes out of date, since the updated model would not actually generate these outputs anymore. Finally, the minibatch size is set to `1` since I could not train GPT-2 on a single GPU with larger numbers. The `forward_batch_size` is independent of the above considerations, since it is only there to conserve memory during the forward passes. If possible, the most efficient option would be to set it to the `batch_size`.
2. Actually, you calculate the advantages for each token in the response: `gen_len = response.shape[1]`. I have not tested what happens when the query is an empty tensor, but if it breaks, for example [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L128), it should be fairly easy to fix. The reason the query has to be masked is that it is given to the model and should be ignored during training.

I hope these comments help. Let me know if you have any more questions!
10
jpanaro
2020-10-24T19:50:24
1. That makes perfect sense. I was thinking it was a memory issue, but I wanted to be certain that I wasn't missing an important specific design choice.
2. Ah, I meant response, I blame my tired brain, sorry. I think I can just set gen_len equal to the caption length and it should be fine. I will experiment with that a little more to see if that works out.

Your comments were very helpful! A few more small questions that cropped up:
1. When you load the GPT-2 models, are both the active model and the reference model the same pretrained model, with the exception that the active model is the one we backpropagate the PPO loss through?
2. Have you experimented at all with a learning rate scheduler instead of just an optimizer? My base model utilizes a ReduceLrOnPlateau() scheduler, so I was curious to know if you have tried something similar.
10
lvwerra
2020-10-25T12:17:19
1. Yes, you are correct. The reference model helps determine how far the active model's output distribution deviates from the reference model's. The KL term in the reward makes sure the model stays close to the original distribution.
2. I have not experimented with that, but it might be worth checking out to gain the last few percent of performance! Since the advantages are whitened (see [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L220)), it could be that the losses don't change as much as they would in a supervised setup. Let me know how it goes!
10
yanghoonkim
2021-05-22T04:52:16
> Finally, the minibatch size is set to 1 since I could not train GPT-2 on a single GPU with larger numbers.

@lvwerra Was it OK for you to train GPT-2 with a single-sentence batch every time? I implemented a T5 version of trl (referring to your code) and found it does not work well (the reward fluctuates a lot and the generated results also get worse).
10
jayelm
2022-06-06T23:47:01
@lvwerra assuming memory is not an issue, do you expect the code to run fine if the minibatch size is set to something > 1?
10
lvwerra
2020-10-22T10:07:49
Hi @thak123! Thanks for the report. Last time I tested the library was with `transformers==2.6.0`. Can you try again with this version?
9
thak123
2020-10-22T14:39:42
Sure
9
lvwerra
2020-10-24T13:25:03
Any luck?
9
thak123
2020-10-28T09:50:03
Yes. Sorry I didn't reply earlier, but it worked fine. Thanks!
9
lvwerra
2020-10-22T10:04:38
No, this would certainly be a great addition to the notebook. I will have a look at it. In the meantime feel free to create a PR if you implemented it already.
8
lvwerra
2020-08-20T14:16:35
Hi @jpanaro, glad you are interested in the library. Let's see if I understand correctly: your input is a continuous stream of features from a stream of images. Now you want to turn that series of features into a series of text which corresponds to the signs in the video.

While the PPOTrainer is fairly general, it is mainly tested in combination with GPT-2. This is a model that predicts the next word based on the previous words, usually referred to as autoregressive language modelling. Therefore, GPT-2 models the following probability distribution: $P(x_t \mid x_{t-1}, \ldots, x_0)$ (the probability that word $x_t$ appears after the words $x_0$, $x_1$, etc.).

I think for your use-case this architecture needs some modifications. One way I could imagine this working is if you find a clever way to integrate your features into the context such that the model has one of the following objectives:

One feature for the whole series: $P(x_t \mid x_{t-1}, \ldots, x_0, f)$

One feature for each word: $P(x_t \mid x_{t-1}, \ldots, x_0, f_{t-1}, \ldots, f_0)$

For $t$ words there are $n$ features: $P(x_t \mid x_{t-1}, \ldots, x_0, f_{n-1}, \ldots, f_0)$

Which one of them applies really depends on how your input features and output text are aligned. In any case, one way to modify the GPT-2 architecture for the extra features would be to enrich the embeddings of the input tokens (words) with your features. This happens [here](https://github.com/huggingface/transformers/blob/b3e54698dd701667ba1d06501a7a9e431c020863/src/transformers/modeling_gpt2.py#L594) in the `transformers` library. This is where the input tokens are transformed into embeddings, which you can regard as their input features. A rough sketch of that idea follows below.

Alternatively, you could try to use something like [this](https://openaccess.thecvf.com/content_cvpr_2018/papers/Zhou_End-to-End_Dense_Video_CVPR_2018_paper.pdf) architecture and then modify it to work with the PPOTrainer. Probably you just need to add a value estimation head, like I did with GPT-2, which is needed for PPO (see [here](https://lvwerra.github.io/trl/01-gpt2-with-value-head/)). I have never done something like this, so these are just suggestions. Let me know how it goes!
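Purely as an untested sketch of the embedding-enrichment idea, using the stock GPT-2 model and random placeholder features (shapes and names are illustrative):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Project your per-token features to the embedding size, add them to the token
# embeddings, and feed the sum via inputs_embeds instead of token ids.
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

ids = tokenizer.encode("a short example", return_tensors="pt")
tok_emb = model.transformer.wte(ids)                      # (1, seq_len, n_embd) token embeddings
feats = torch.randn(1, ids.shape[1], tok_emb.shape[-1])   # placeholder features, already projected
outputs = model(inputs_embeds=tok_emb + feats)            # enriched embeddings instead of token ids
```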
7
jpanaro
2020-08-23T17:12:28
Thank you for the quick response. To answer a few of your questions and comments: yes, our input is the images or frames, and the ground truths we are given come in the form of sentences for that sequence.

I think the idea of integrating the features into the context for GPT-2 is really interesting, but unfortunately I am on somewhat of a deadline, and it looks as if I will have to explore that option later. Still a very unique approach!

I do really like the idea of adapting some aspects of the video captioning model for use with PPOTrainer. You mentioned the addition of a value estimation head, which appears to take a hidden state (or states) and return a scalar for each one. I think this is well within my ability, and once I get the base transformer model up and running I will make my best effort to integrate it. Thank you for the idea!

I do have a few small questions about the architecture of the PPOTrainer:
- Throughout the trainer, the model inputs (the queries and responses) are at most used to generate the associated logits, values, and logprobs needed by various parts of the PPOTrainer. The fact that they are both effectively strings of tokens is almost irrelevant, so any model input that can generate valid logits, logprobs, and values should work? _(For example, if my 'query' and 'response' were simply the feature stream needed for the model to generate those logits, logprobs and values?)_
- Also, the structure `train_stats` is present throughout many of the functions. I am somewhat unfamiliar with W&B, but is this structure there purely for logging purposes, or does it have a greater role in the actual functionality of the trainer?
7
lvwerra
2020-08-23T18:51:32
Thanks for clarifying what you are trying to achieve. Answering your first question takes a little explanation, as the devil is in the details. There are a few things to note:
1. The `PPOTrainer` is designed to *fine-tune* a model rather than train it from scratch. Therefore, it also requires a reference model, and the KL-divergence between the two models is used as an additional reward signal. Are you also using a pretrained model that you just want to fine-tune? You could of course set the KL-divergence factor to zero and thus ignore it, but I have never attempted that and am not sure how well it works.
2. Since GPT-2 is an autoregressive model, it already generates an output for each query token plus the actual response tokens. I suspect this would be similar in your transformer architecture. The `PPOTrainer` uses the query length to determine which output logits and logprobs to ignore in the optimisation step. In your case you can probably use all of the decoder outputs and just need the features in the encoder step. Just keep that in mind.
3. The `PPOTrainer` concatenates the query and response tensors (since both are just token ids) and uses them as the model input for the forward pass. This step is needed to have differentiable outputs. Since you have multimodal queries/tensors and an encoder/decoder architecture, you might need to adapt this slightly. The relevant code is [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L128) and in the following `batched_forward_pass`. I think it should not be too hard to adapt for your architecture (see the sketch after this list).
4. That said, your statement is right: you should be able to use the `PPOTrainer` as long as the model generates valid logits, logprobs and values from your query/response pairs. The `PPOTrainer` expects the HuggingFace `transformers` format for the model outputs.

Finally, as for the `train_stats` object, you are right that this is a strictly passive component that gathers various statistics in a dictionary, which can then be logged via W&B (which I strongly recommend for tracking training progress). If you want to log or plot some information about the training progress yourself, have a look at its entries. See a W&B report of a TRL training run [here](https://app.wandb.ai/lvwerra/trl-showcase/reports/Transformer-Reinforcement-Learning--VmlldzoxMDY4MDI). It is super easy to set up and helped me a lot in debugging the library while I developed it. I hope this helps. Let me know if you have any more questions.
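A minimal sketch of what point 3 amounts to; this is a hypothetical helper, not trl's `batched_forward_pass`:

```python
import torch

def concat_and_mask(query_ids, response_ids):
    """Run the forward pass on [query ; response]; only response positions count towards the PPO loss."""
    model_input = torch.cat([query_ids, response_ids], dim=1)            # (batch, q_len + r_len)
    loss_mask = torch.cat([torch.zeros_like(query_ids),
                           torch.ones_like(response_ids)], dim=1)        # 1 where the loss applies
    return model_input, loss_mask
```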
7
jpanaro
2020-08-23T19:35:34
1. Completely understand. In my first project I used REINFORCE to fine-tune a seq2seq model that had been pretrained on the same dataset with cross-entropy loss, so the plan is to do the same thing here but with a Transformer instead of a seq2seq model, and with PPOTrainer instead of the code I wrote for REINFORCE (it was heavily based on the work in [this paper](https://arxiv.org/pdf/1612.00563.pdf), if you are interested in taking a look). I am definitely going to integrate KL-divergence using the cross-entropy model as the reference model, as it seems pretty critical to the success of the fine-tuning.
2. When you say "determine which output logits and logprobs to ignore", are you referring to the modification of logprobs and vpred found [here](https://github.com/lvwerra/trl/blob/1662d78b5c5e688823b06c69495632abd68b7484/trl/ppo.py#L227)?
3. I agree, this should just be a matter of ensuring the dimensions all match up prior to making a pass on the model.
4. I think I should be able to manually format my model output to the HuggingFace format, seeing as I have all of the same information, just stored in a different way initially.

In the past I have "manually" stored and processed my data and statistics using various helper scripts, which quickly turns into a massive pain and bloats a lot of my files with excess "tracking" code. W&B seems like a cool alternative and I am running through the tutorial now, thanks for the suggestion! Your help is invaluable, thank you a ton for the assistance so far!
7
lvwerra
2020-08-24T19:05:52
> When you say "determine which output logits and logprobs to ignore", are you referring to the modification of logprobs and vpred found here?

Yes, exactly. In my case the task is text continuation from the query. When calculating the logprobs etc., the model makes predictions for each query and response token. The predictions on the query part, however, are not relevant. I think in your case this is not a problem, since all the generation is new. Indeed, W&B is great for exactly that. If you add the appropriate lines to your code, all the metrics are logged all the time, along with the relevant parameters and even code. Let me know if you have any further questions!
7
lvwerra
2020-09-06T14:57:03
I close this issue for now. If you have any more questions, just let me know. In any case if you publish your work I would very much like to read it!
7
lvwerra
2020-07-17T09:59:23
Thanks for pointing this out. I made a hard dependency on the transformers (2.6.0) library to avoid such issues (see commit 1662d78b5c5e688823b06c69495632abd68b7484). I will upgrade the library to transformers 3.0 at a later time. In the meantime, feel free to create a PR if you have time to test whether the library works under transformers 3.0. To upgrade the library use `pip install trl==0.0.2`.
6
ashokchhetri7
2024-03-05T06:49:42
So, in the previous code, `top_k_top_p_filtering` was imported from `transformers.generation_utils`. I changed the underscore to a dot, as shown below, and it solved my problem: `from transformers.generation.utils import top_k_top_p_filtering`
6
Aryan-Deshpande
2024-03-08T21:04:50
![image](https://github.com/huggingface/trl/assets/72693780/e9065970-0eaa-46a7-90b7-d7a397b0d20c) After directly installing transformers from its repository and using the latest version of trl, I'm still encountering this error.
6
polarbeargo
2024-03-08T23:16:26
Same here, I encountered the same error as @Aryan-Deshpande this week. Last week I could import SFTTrainer without this error.
6
Aryan-Deshpande
2024-03-09T07:41:54
I might have found the issue with this: we have to import TrainerArguments from transformers instead, i.e. `from transformers import TrainerArguments`. Please correct me if I am wrong.
6
polarbeargo
2024-03-09T12:51:42
In my case, my original work already imported SFTTrainer and TrainerArguments separately, but it still hits the same error ![git](https://github.com/huggingface/trl/assets/8589224/3331df79-6d07-4c13-aab0-06ec4b8ad039)
6
dangl00
2024-03-09T14:41:00
Make sure that you have transformers version 4.38.2, as `top_k_top_p_filtering` is removed in the next version. Then, as previously mentioned by @ashokchhetri7, importing it from `transformers.generation.utils` should work.
6
jstephencorey
2024-03-22T18:19:01
Changing the transformers version to 4.38.2 worked for me as well.
6
elyhahami18
2024-04-29T06:10:41
pip install transformers==4.38.2 worked for me as well
6
aseef2289
2024-05-16T03:29:21
> pip install transformers==4.38.2 worked for me as well

![image](https://github.com/huggingface/trl/assets/100977702/dfe7e5d8-20fb-46e8-a057-880f40637feb) ![image](https://github.com/huggingface/trl/assets/100977702/694705cf-0137-40d2-821f-9987fda56539)

I'm still getting the import error even with transformers==4.38.2. Any idea what could be wrong here?
6
lvwerra
2020-06-09T20:19:47
Hi @seekingpeace! Thanks for the PR. The README is autogenerated by nbdev from index.ipynb, so the formatting should be changed in the nbdev library. If you get a PR in there, I am happy to rerun the generation on my end.
5
lvwerra
2020-05-17T12:17:16
Hi @deepanwayx, thanks for your interest in the library. Let's see if I can answer your questions:

## 1. Calculation of the KL-divergence

I think both of your questions here can be answered by looking at the equation for the KL-divergence:

$KL(p,q) = \mathbb{E}_{p(x)} [\log \frac{p(x)}{q(x)}] = \mathbb{E}_{p(x)} [\log p(x) - \log q(x)]$

which for discrete values is given by the weighted sum:

$KL(p,q) = \sum_x p(x) \log \frac{p(x)}{q(x)} = \sum_x p(x) [\log p(x) - \log q(x)]$

Each term is weighted by the probability p(x). Since we sample the tokens from p(x), we already take this weighting into account implicitly: tokens that are unlikely are rarely selected, while tokens with high probability are selected more often. If we average over all elements in the sequence, we achieve the same weighting as weighting each possible token by its probability. In that case the step you propose would be redundant.

## 2. About the ratio

One important detail to mention here is that the PPO optimisation runs for several steps on every batch of data. For this reason the model changes after each optimisation step. Therefore, `old_logprobs` stays the same while `logprobs` changes after each optimisation step. Now, the `ratio` is an essential part of the PPO algorithm. The idea is that after the optimisation step you calculate the `ratio` to see whether the chosen action gets a higher or lower probability than during rollout. That value multiplied with the advantage yields the unclipped objective function (which is used in TRPO). The idea is that you want to increase the policy's probability of the actions with a high advantage and vice versa. PPO uses a clipped version of this objective for better stability. For more detail I highly recommend the excellent [original paper](https://arxiv.org/pdf/1707.06347.pdf)!

I haven't thought about the effects of dropout. I suspect the effect of the optimised model is larger than the fluctuations from dropout. But feel free to experiment with it and create a PR if it yields a training improvement.

## Remarks

Finally, I want to mention that my main contribution to this project was translating OpenAI's TensorFlow code to PyTorch and making it compatible with the Hugging Face library. The details above were already implemented in the original code and these are merely my explanations. See the original [code](https://github.com/openai/lm-human-preferences/) and [paper](https://arxiv.org/pdf/1909.08593.pdf) for more details. For the PPO implementation check out the [train_policy.py](https://github.com/openai/lm-human-preferences/blob/master/lm_human_preferences/train_policy.py) script.
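A tiny illustration of the implicit-weighting argument in section 1 above (a hypothetical helper, not part of trl):

```python
import torch

def kl_from_samples(logprobs_p, logprobs_q):
    """Monte-Carlo estimate of KL(p, q) from tokens sampled from p.

    Averaging log p(x) - log q(x) over sampled tokens weights each term by p(x)
    implicitly, because frequent tokens are sampled more often."""
    return (logprobs_p - logprobs_q).mean()
```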
4
deepanwayx
2020-05-17T13:39:27
Thanks for your detailed explanations. I think it makes a lot more sense now. I will check out the original PPO paper for more details.
4
yanghoonkim
2021-05-26T01:35:36
Hi @lvwerra, about the difference between `logprobs` and `old_logprobs`: you mentioned in #10 that

> So the reason we run PPO for 4 epochs is that we want to make the most of the data we gathered. If generating samples were cheap, we could train for just one epoch.

and in this issue you said that `logprobs` and `old_logprobs` will differ after one epoch, which seems to mean that I can't set `ppo_epoch` to 1. Quite confused about that.
4
lvwerra
2021-08-09T07:57:18
You can set `ppo_epoch` to 1 and then only the `logprobs` will change, which makes sense since the model changes after each `ppo_epoch`, thus the predictions are not the same. Why would that be a problem?
4
JoaoLages
2023-01-12T18:12:48
> You can set `ppo_epoch` to 1 and then only the `logprobs` will change, which makes sense since the model changes after each `ppo_epoch`, thus the predictions are not the same. Why would that be a problem?

In the first epoch `log_probs` is the same as `old_logprobs` (if we disregard the dropout effect), so I think that @yanghoonkim's comment makes sense, right? I.e., if `ratio` is essential, as you pointed out, `ppo_epoch` must be bigger than 1 for `ratio` to ever be different from 1.
4
lvwerra
2020-04-01T09:48:12
Hi @trisongz, glad you find the library useful. Let's see if I understand your objective correctly:
- You have a dataset with protein sequences and you would like GPT-2 to generate realistic sequences.
- You have trained BERT to classify whether two subsequences are compatible.

Now your question is how to set up the PPO training step. Before running PPO, I would fine-tune (or train from scratch) GPT-2 on your dataset with the language modeling objective. Check out the [training script](https://github.com/huggingface/transformers/blob/master/examples/run_language_modeling.py) from Hugging Face. Then I would probably start by using the first subsequence (18 characters) as the query and then let GPT-2 respond for 18 characters. Although GPT-2 uses BPE encodings and not character-level encodings, so the actual number might differ. Then I would pass the query/response pairs to BERT for the prediction and use its output as a reward (I used the unnormalised logits, but you can also try to use the class predictions 0/1). Regarding the other PPO parameters, I didn't change them much from the original implementation except the batch size, for memory reasons. I would start there and adjust them later if it does not work. You also want to keep an eye on the KL-divergence (logged as objective/kl) to make sure the output distribution stays close to your initial data distribution.
1
trisongz
2020-04-01T16:53:05
Hi @lvwerra, thanks for the advice! Yes, you're correct. I had actually started training GPT-2 from scratch with a custom tokenizer on the dataset prior to seeing this comment, so I'm glad I am on the right track. I also switched over to using RoBERTa as the classifier, which is currently at
```
'mcc': 0.9997714069569512,
'tp': 308736,
'tn': 164108,
'fp': 49,
'fn': 0,
'acc': 0.9998963824797575,
'eval_loss': 0.00044921892891853013
```
after 50k steps, although I am concerned that this is a potential result of not shuffling the csv data prior to training the model, as I wrote the csv file sequentially from the raw dataset. Is there a way you suggest to easily shuffle the csv file prior to the training step? I used your extremely helpful train_test_split function for the eval and train data.

For this specific task, since it is sequence based, do you think a masked LM would perform better at generation than GPT-2, since unlike human-written text there are likely sequence pairs that repeat?

So far what I currently have:

**BERT/RoBERTa Classifier:**

_Dataset structure_
```
GTGG ACCA TATG GCCA, ACCA TATG GCCA TAAT, 1
ATCA GGAA GGCA AGAG, AAGT ACAC ATCA GGAA, 0
```
```
------------------------------
The predictions below should result in 1
GTGG ACCA TATG GCCA -> ACCA TATG GCCA TAAT: [1]
GCCA TAAT CAAA AAGT -> TAAT CAAA AAGT ACAC: [1]
------------------------------
The predictions below should result in 0
ATCA GGAA GGCA AGAG -> AAGT ACAC ATCA GGAA: [0]
CAAA AAGT ACAC ATCA -> GCCA TAAT CAAA AAGT: [0]
```

**For GPT-2 LM:**

_Single line-by-line text file of only true (1) sequences_
```
GTGG ACCA TATG GCCA ACCA TATG GCCA TAAT
GCCA TAAT CAAA AAGT TAAT CAAA AAGT ACAC
```
Does this look correct so far? Thank you for the tips!
1
lvwerra
2020-04-01T17:04:57
> `'acc': 0.9998`

That seems suspiciously high: either your task is trivial or there is some leakage in your dataset. Maybe entries exist more than once and are therefore in both the train and test split. `train_test_split` should already shuffle the dataset. I would definitely investigate that further.

I don't have much experience with such sequences, so I don't know if MLM would work better. Also, whether training GPT-2 from scratch makes sense probably depends on the size of the dataset and the resources you have available. I guess you could try the simple, pretrained approach and, if that does not work out, consider moving to MLM or training GPT-2 from scratch.

For the GPT-2 LM that looks fine to me. You could also consider adding the EOS token at the end of each line (see [here](https://huggingface.co/lvwerra/gpt2-imdb) for a snippet showing how I processed the IMDB dataset). Good luck.
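As a small sketch of the EOS suggestion, using the stock GPT-2 tokenizer purely for illustration (the poster trains a custom tokenizer, so this is an assumption, not their setup):

```python
from transformers import GPT2Tokenizer

# Append the tokenizer's EOS token to each training line before writing the LM text file.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
lines = ["GTGG ACCA TATG GCCA ACCA TATG GCCA TAAT",
         "GCCA TAAT CAAA AAGT TAAT CAAA AAGT ACAC"]
lines = [line + tokenizer.eos_token for line in lines]
```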
1
trisongz
2020-04-02T05:42:11
I'm at part 4 now, where I'm running the RL environment, and looking through your comments. I also updated GPT-2 to train with an EOS token. I messed up a few things originally, but I think I'm on the right track. Since I created a custom tokenizer for GPT-2, each sequence of 4 letters is 1 token for I/O. Currently I have my txt_in_len as well as txt_out_len set to 4, to match what BERT expects to see for sequence pair classification.

However, I realized after the scores returned that I hadn't updated the reward mechanism to 0/1, so the scores are a mess. (This is prior to updating the txt lengths properly to 4x4.) ![image](https://user-images.githubusercontent.com/4735784/78214258-5dfe4b00-747a-11ea-8174-ac8dac282c22.png)

Could you point me to how I would be able to switch up the rewards based on the classifier output? I was looking around here:
```python
def compute_rewards(self, scores, logprobs, ref_logprobs):
    """Compute per token rewards from scores and KL-penalty."""
    kl = logprobs - ref_logprobs
    non_score_reward = -self.kl_ctl.value * kl
    rewards = non_score_reward.clone().detach()
    rewards[:, -1] += scores
    return rewards, non_score_reward, self.kl_ctl.value
```
But wasn't entirely sure.
1
lvwerra
2020-04-02T09:45:14
You should normalise the scores before running `PPOTrainer.step`. The outputs you get from the BERT model are logits, so you need to apply a softmax to the outputs and then find the index with the maximum probability:
```python
probs = F.softmax(bert_outputs, dim=-1)
max_id = torch.argmax(probs, dim=-1)
```
`max_id` corresponds to the output index with the largest probability. If position 0 in your outputs corresponds to "not entailed" and position 1 to "entailed", that should be what you are looking for.
1
trisongz
2020-04-02T17:14:55
I'm still relatively new to Torch, so I apologize for silly questions. Would it be here that you add that step before appending it to rewards?
```python
#### tokenize text for sentiment analysis
t = time.time()
texts = [q + r for q, r in zip(game_data['query'], game_data['response'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
timing['time/build_input_sentiment'] = time.time() - t

#### get sentiment score
t = time.time()
rewards = []
for i in range(int(config['batch_size'] / fbs)):
    res = sentiment_model.forward(sentiment_inputs[i*fbs:(i+1)*fbs],
                                  attention_masks[i*fbs:(i+1)*fbs])[0][:, 1].detach()
    probs = F.softmax(res, dim=-1)
    max_id = torch.argmax(probs, dim=-1)
    rewards.append(max_id)
    #rewards.append(res)
rewards = torch.cat(rewards)
timing['time/get_sentiment_preds'] = time.time() - t

#### Run PPO training
t = time.time()
stats = ppo_trainer.step(query_tensors, response_tensors, rewards)
timing['time/optimization'] = time.time() - t
```
1
lvwerra
2020-04-03T09:49:59
That looks about right. You need to remove the logit slicing in this line:
```python
res = sentiment_model.forward(sentiment_inputs[i*fbs:(i+1)*fbs],
                              attention_masks[i*fbs:(i+1)*fbs])[0].detach()
```
since `[:, 1]` slices out the logits for the positive sentiment in my example. Because you want to create discrete rewards, you need both the positive and negative logits for the softmax layer.
1
trisongz
2020-04-03T19:53:07
I think I'm getting closer. I had to do one additional step and transform `max_id` to `max_id.float()`. However, the outputs are all showing rewards as 0.0 so far - wanted to confirm.

Result of the `res` step:
```
[ 5.5575, -6.0447],
[ 5.5397, -6.0370],
[ 5.5577, -6.0430],
[ 5.5556, -6.0427],
[ 5.5585, -6.0432],
[ 5.5494, -6.0396],
[ 5.5576, -6.0438],
[ 5.5544, -6.0420],
[ 5.5584, -6.0439],
[ 5.5490, -6.0390],
[ 5.5601, -6.0438],
[ 5.5527, -6.0437],
[ 5.5541, -6.0416],
[ 5.5583, -6.0435],
[ 5.5514, -6.0416],
[ 5.5590, -6.0440],
[ 5.5556, -6.0430],
[ 5.5468, -6.0402],
[ 5.5564, -6.0439],
[ 5.5545, -6.0405],
[ 5.5537, -6.0446],
[ 5.5563, -6.0434],
[ 5.5566, -6.0431],
[ 5.5564, -6.0429],
[ 5.5527, -6.0419],
[ 5.5535, -6.0425],
[ 5.5531, -6.0433],
[ 5.5546, -6.0427],
[ 5.5518, -6.0417],
[ 5.5573, -6.0431],
[ 5.5567, -6.0428]], device='cuda:0')
```

Result of `probs`:
```
tensor([[9.9999e-01, 9.2180e-06],
[9.9999e-01, 9.1457e-06],
[9.9999e-01, 9.3818e-06],
[9.9999e-01, 9.1595e-06],
[9.9999e-01, 9.1816e-06],
[9.9999e-01, 9.1508e-06],
[9.9999e-01, 9.2678e-06],
[9.9999e-01, 9.1529e-06],
[9.9999e-01, 9.1989e-06],
[9.9999e-01, 9.1447e-06],
[9.9999e-01, 9.2768e-06],
[9.9999e-01, 9.1304e-06],
[9.9999e-01, 9.1988e-06],
[9.9999e-01, 9.2063e-06],
[9.9999e-01, 9.1496e-06],
[9.9999e-01, 9.2305e-06],
[9.9999e-01, 9.1391e-06],
[9.9999e-01, 9.1788e-06],
[9.9999e-01, 9.2861e-06],
[9.9999e-01, 9.1639e-06],
[9.9999e-01, 9.2114e-06],
[9.9999e-01, 9.1817e-06],
[9.9999e-01, 9.1681e-06],
[9.9999e-01, 9.1684e-06],
[9.9999e-01, 9.1717e-06],
[9.9999e-01, 9.2159e-06],
[9.9999e-01, 9.2032e-06],
[9.9999e-01, 9.1985e-06],
[9.9999e-01, 9.1905e-06],
[9.9999e-01, 9.2262e-06],
[9.9999e-01, 9.1625e-06],
[9.9999e-01, 9.1699e-06]], device='cuda:0')
```

Result of `max_id` (non-float):
```
tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], device='cuda:0')
```

Result of `max_id.float()`:
```
tensor([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], device='cuda:0')
```

As a sanity check, I ran the post-training step to see the results, modifying the rewards to match the above.
```python
#### sentiment analysis of query/response pairs before/after
texts = [q + r for q, r in zip(game_data['query'], game_data['response (before)'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
#rewards = sentiment_model.forward(sentiment_inputs, attention_masks)[0][:, 1].detach()
res = sentiment_model.forward(sentiment_inputs, attention_masks)[0].detach()
probs = F.softmax(res, dim=-1)
max_id = torch.argmax(probs, dim=-1)
max_id = max_id.float()
rewards = max_id
game_data['rewards (before)'] = rewards.cpu().numpy()

texts = [q + r for q, r in zip(game_data['query'], game_data['response (after)'])]
sentiment_inputs, attention_masks = build_bert_batch_from_txt(texts, sentiment_tokenizer, device)
#rewards = sentiment_model.forward(sentiment_inputs, attention_masks)[0][:, 1].detach()
res = sentiment_model.forward(sentiment_inputs, attention_masks)[0].detach()
probs = F.softmax(res, dim=-1)
max_id = torch.argmax(probs, dim=-1)
max_id = max_id.float()
rewards = max_id
game_data['rewards (after)'] = rewards.cpu().numpy()
```
![image](https://user-images.githubusercontent.com/4735784/78399408-72e1f800-75ba-11ea-801c-ca87f5dc5a0b.png)

Does this look right to you so far?

I'm also not sure whether the classifier is issuing 0 as a result of not seeing all 8 tokens, as it's trained on 4/4 sequence pairs. When I run
```python
text_a = 'AGAC CACT GTGG ACCA'
text_b = 'CACT GTGG ACCA TATG'
output = sentiment_model.forward(sentiment_tokenizer.encode([text_a, text_b], return_tensors="pt"))
output
output[0][0, 1]
```
I get `tensor(0.2771, grad_fn=<SelectBackward>)`, whereas with
```python
text = 'CACT GTGG ACCA TATG'
output = sentiment_model.forward(sentiment_tokenizer.encode(text, return_tensors="pt"))
output
output[0][0, 1]
```
it shows `tensor(-6.0448, grad_fn=<SelectBackward>)`.
1
lvwerra
2020-04-04T10:23:47
Indeed, it seems like the LM is not generating good sequences at the beginning. There are several things you could try:
- Further fine-tune GPT-2 on the language modeling task.
- Play with the language generation (e.g. try changing the sampling temperature).
- Use the logits as the reward function (like in my example), since they provide a continuous reward signal. In your case it only ever gets a reward when the probability for 1 is larger than that for 0. If you take the raw logits it gets a reward even if it's only getting closer.
- Try to simplify the task by reducing the number of generated characters. Maybe try 12 query characters vs. 4 response characters.

These are just some ideas off the top of my head. I am sure there could be other problems and solutions.
1
lvwerra
2020-04-27T14:55:29
I am closing this issue for now. If you have further questions just contact me.
1