user | created_at | body | issue_number
---|---|---|---|
asparius | 2024-12-14T00:28:52 | It utilizes `self.model`, which is defined in [this line](https://github.com/huggingface/trl/blob/6d4ed070f1f53a87fb3cff2eb82a56db093bccc6/trl/trainer/rloo_trainer.py#L162). This approach is also adopted in `PPOTrainer`. I believe this is a deliberate nomenclature choice, designed to remain consistent across various preference learning frameworks without introducing the complexity of aligning with the diverse terminologies used in academic papers. | 2,472 |
qgallouedec | 2024-12-13T16:33:05 | Yes, that's a good point!
All datasets in [hf.co/trl-lib](https://huggingface.co/trl-lib) are taken from an original dataset. We should at least indicate this dataset in the readme with something like:
```
This dataset is a processed version of [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) with this [script](https://github.com/huggingface/trl/blob/main/examples/datasets/ultrafeedback.py).
```
To do this, we should add to all scripts in https://github.com/huggingface/trl/blob/main/examples/datasets a model card that we push, like in https://github.com/huggingface/trl/blob/179ba5367181d9bd4bdaec70d50789b09754d04a/scripts/generate_tiny_models.py#L69-L97
We could also add the type/format of dataset with a link to the relevant section in this page of the documentation: https://huggingface.co/docs/trl/en/dataset_formats | 2,470 |
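A minimal sketch of what pushing such a card could look like, using `huggingface_hub`'s `DatasetCard`; the repo id below is a placeholder and the wording mirrors the snippet above, not the final script:

```python
from huggingface_hub import DatasetCard

card = DatasetCard("""
This dataset is a processed version of [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback)
with this [script](https://github.com/huggingface/trl/blob/main/examples/datasets/ultrafeedback.py).
""")
card.push_to_hub("trl-lib/ultrafeedback-processed", repo_type="dataset")  # placeholder repo id
```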
qgallouedec | 2024-12-13T16:44:51 | What you're describing sounds closer to _padding-free_ than packing. We have a (currently draft) PR for this: #2437.
Can you confirm that this is what you're describing?
---
At this point I'm not even sure that packing for DPO makes sense. How to ensure that you have as many chosen as rejected? How to ensure they match? How to handle partial sequences? | 2,469 |
zhc7 | 2024-12-13T17:16:15 | Hi, thank you for your response. I looked into the link you provided. I think we are talking about the same thing. I used the word "packing" from https://huggingface.co/blog/packing-with-FA2. The "packing" here actually means concatenating a fixed batch size of samples into one sequence and using `position_ids` to mark the boundaries, rather than packing to a fixed length. So there won't be the problems you mentioned. I've also briefly read https://huggingface.co/blog/mayank-mishra/padding-free-transformer, and I think the ideas are the same. But I'm not sure how the latter is implemented. Maybe they are the same thing just with different names :)
I briefly went through the PR; I see it is trying to add `position_ids` in the whole process, so I guess we are talking about the same thing. | 2,469 |
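For readers following along, a tiny sketch of the layout being discussed (the token ids are made up and this is not the PR's code): each sequence is concatenated into a single row and `position_ids` restart at every boundary.

```python
import torch

# three variable-length sequences, concatenated instead of padded to a rectangle
seqs = [torch.tensor([5, 6, 7]), torch.tensor([8, 9]), torch.tensor([10, 11, 12, 13])]
input_ids = torch.cat(seqs).unsqueeze(0)  # shape (1, 9), no padding tokens

# position_ids restart at 0 at each boundary; FlashAttention-style kernels use this
# to keep the packed sequences from attending to each other
position_ids = torch.cat([torch.arange(len(s)) for s in seqs]).unsqueeze(0)
# tensor([[0, 1, 2, 0, 1, 0, 1, 2, 3]])
```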
qgallouedec | 2024-12-13T16:51:33 | That's a good point! Feel free to open a PR to fix this. I don't think adding a unittest for this is relevant. If possible, add plots (eg, with wandb) before/after to ensure that we aren't introducing a regression | 2,468 |
zhc7 | 2024-12-13T17:17:59 | Of course!
![image](https://github.com/user-attachments/assets/2da93fdf-a29d-41a1-974a-2b640e3a6ee6)
Here's a graph for the same training with and without the modification. You can see the pink line is a lot smoother, especially in the accuracy graph. My `per_device_batch_size` is 2, so the accuracy per device can only be 1, 0.5, or 0. | 2,468 |
qgallouedec | 2024-12-13T17:34:35 | Perfect! | 2,468 |
HuggingFaceDocBuilderDev | 2024-12-12T14:04:44 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2467). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,467 |
qgallouedec | 2024-12-13T17:52:47 | That's very interesting! It would be a nice improvement.
If you want to tackle this problem, you should be aware that packing will be implemented differently (in a simpler way) in the near future, see #2405. You should branch from there. | 2,466 |
qgallouedec | 2024-12-12T10:44:50 | First, SAC is designed for continuous action spaces, whereas NLP tasks involve discrete token outputs. (A discrete variant of SAC exists though.)
Second, SAC lacks a mechanism to constrain the policy from deviating too far from the initial model. PPO, on the other hand, explicitly limits policy updates, which is crucial in RLHF to maintain alignment and preserve the pretrained model’s capabilities. Without such constraints, SAC could result in a policy that either drifts excessively or remains overly similar to the original model.
Finally, SAC's entropy maximization encourages broad exploration, which may be counterproductive in RLHF. Finetuning typically involves domain-specific data, and excessive exploration could lead to unaligned or undesirable behaviors. This mechanism might inadvertently encourage unlearning of pretrained knowledge.
That said, these points are speculative and based on intuition. I'd be eager to see papers or results that either confirm or challenge these hypotheses.
| 2,465 |
AMindToThink | 2024-12-13T21:11:39 | Thank you for the response.
The [RLOO trainer](https://arxiv.org/pdf/2402.14740) also lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy. [It turns out](https://huggingface.co/blog/putting_rl_back_in_rlhf_with_rloo) that for RLHF on pretrained language models, that clipping step is not necessary.
If you are referring to the reference policy, I don't see why a KL divergence term with a reference policy cannot be included into the SAC loss function.
Mode collapse and loss of variety is a common problem for aligned models, so if SAC makes a different tradeoff, encouraging exploration, then that could be useful. | 2,465 |
qgallouedec | 2024-12-13T22:13:21 | > lacks PPO's clipping mechanism that constrains the policy from deviating too far from the previous policy
There is a KL term though
> I don't see why a KL divergence term with a reference policy cannot be included into the SAC loss function.
I guess you can, it's just that in its classic formulation, the SAC objective doesn't contain such a term.
> encouraging exploration, then that could be useful
I'm not so sure about that. But if you manage to produce or find results that help to see more clearly on this matter, please share them.
| 2,465 |
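To make the discussion concrete, a hedged sketch of what adding a reference-policy KL term to the classic SAC actor loss could look like; all tensors below are placeholders and the coefficients are illustrative, not a recommended setup.

```python
import torch

q_value = torch.randn(8)   # Q(s, a) from the critic (placeholder values)
logp = torch.randn(8)      # log pi(a | s) under the current policy
ref_logp = torch.randn(8)  # log pi_ref(a | s) under a frozen reference policy
alpha, beta = 0.2, 0.1     # entropy and KL coefficients (assumed)

# classic SAC actor objective: minimize alpha * log pi - Q
sac_actor_loss = (alpha * logp - q_value).mean()

# sample-based KL(pi || pi_ref) penalty, the extra term discussed above
kl_penalty = (logp - ref_logp).mean()
loss = sac_actor_loss + beta * kl_penalty
```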
HuggingFaceDocBuilderDev | 2024-12-11T20:08:30 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2463). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,463 |
qgallouedec | 2024-12-12T11:25:31 | Thanks @kashif! | 2,463 |
qgallouedec | 2024-12-11T15:50:18 | Looks good, feel free to mark it ready for review when it's ready :) | 2,462 |
yaricom | 2024-12-12T17:24:11 | @qgallouedec Hi, Quentin! I can see that there are some trainer implementations that log tabular data as `wandb.Table` using the `Trainer.log()` method rather than the corresponding method of the WandB API.
For example:
```Python
class DPOTrainer(Trainer):
    ......
    def evaluation_loop(...)
        .....
        self.log(
            {
                "game_log": wandb.Table(
                    columns=["Prompt", "Policy", "Ref Model"],
                    rows=[
                        [prompt, pol[len(prompt) :], ref[len(prompt) :]]
                        for prompt, pol, ref in zip(
                            random_batch["prompt"], policy_output_decoded, ref_output_decoded
                        )
                    ],
                )
            }
        )
```
We are not sure of the best way to update this part of the code in order to support other integrations like Comet, for example.
What do you think if I change the mentioned code block to log table using WandB API instead of `Trainer.log()`?
Something like:
```Python
if "wandb" in self.args.report_to:
import wandb
if wandb.run is not None:
wandb.log({"game_log": wandb.Table(
columns=["Prompt", "Policy", "Ref Model"],
rows=[
[prompt, pol[len(prompt):], ref[len(prompt):]]
for prompt, pol, ref in zip(
random_batch["prompt"], policy_output_decoded, ref_output_decoded
)
],
)}
)
```
This will greatly simplify adding other integrations.
| 2,462 |
qgallouedec | 2024-12-12T20:50:16 | Hey thanks for working on this.
Actually, we need to remove all these logging parts in favor of [LogCompletionsCallback](https://huggingface.co/docs/trl/callbacks#trl.LogCompletionsCallback)
The best way is probably to make this callback compatible with Comet | 2,462 |
yaricom | 2024-12-13T14:37:42 | @qgallouedec Thank you for quick response. I noticed that `LogCompletionsCallback` is a subclass of `WandbCallback`, which requires the `wandb` module to be present; otherwise, an exception is raised.
It seems a bit out of place to leave this inheritance unchanged and simply add Comet integration to this callback. There could be situations where Comet is installed, but WandB is either not installed or not initialized (e.g., missing API key).
It is possible to change `LogCompletionsCallback` inheritance to use `TrainerCallback` as superclass and then implement both integrations: wandb and Comet.
What do you think? | 2,462 |
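A rough sketch of the direction being proposed: a `TrainerCallback` that guards each backend behind an availability check. The class name and the logged data are placeholders, not the future `LogCompletionsCallback` implementation.

```python
import importlib.util
from transformers import TrainerCallback

class CompletionsLoggingCallback(TrainerCallback):  # hypothetical name
    def on_evaluate(self, args, state, control, **kwargs):
        rows = [["example prompt", "example completion"]]  # placeholder data
        if importlib.util.find_spec("wandb") is not None:
            import wandb
            if wandb.run is not None:
                wandb.log({"completions": wandb.Table(columns=["prompt", "completion"], rows=rows)})
        # a Comet branch would go here, guarded by its own availability check
```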
qgallouedec | 2024-12-13T17:38:35 | > It is possible to change `LogCompletionsCallback` inheritance to use `TrainerCallback` as superclass and then implement both integrations: wandb and Comet.
>
> What do you think?
Yes, I think your suggestion makes sense. Would you like to make it as part of this PR? | 2,462 |
yaricom | 2024-12-13T17:46:47 | I think it would be better to have another PR for `LogCompletionsCallback` changes to keep things more granular. | 2,462 |
qgallouedec | 2024-12-13T18:10:36 | LGTM, waiting https://github.com/huggingface/trl/pull/2462#discussion_r1884306215 to be addressed then I'll approve & merge. Thanks! | 2,462 |
HuggingFaceDocBuilderDev | 2024-12-13T18:14:15 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2462). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,462 |
qgallouedec | 2024-12-11T15:04:05 | ☄️ | 2,461 |
qgallouedec | 2024-12-11T14:22:57 | The dataset is not loaded in RAM (only the current batch).
The training should be rather light in terms of RAM, as the weights and gradient are on the GPU. You'll still need enough RAM to load the model though.
When I run the experiment, I see that it requires less than 2GB of RAM.
| 2,460 |
Kallinteris-Andreas | 2024-12-11T14:30:09 | What CPU do you have (exact model)?
and does DRAM usage explode when you run `CUDA_VISIBLE_DEVICES="" python test.py`
My current best guess is that it is a BF16-related issue (as my R5 4600H does not natively support it; it's probably off though) | 2,460 |
qgallouedec | 2024-12-11T14:45:57 | ```
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping : 0
microcode : 0xffffffff
cpu MHz : 2199.998
cache size : 56320 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed bhi
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU @ 2.20GHz
stepping : 0
microcode : 0xffffffff
cpu MHz : 2199.998
cache size : 56320 KB
physical id : 0
siblings : 2
core id : 0
cpu cores : 1
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap xsaveopt arat md_clear arch_capabilities
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs taa mmio_stale_data retbleed bhi
bogomips : 4399.99
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
```
> and does DRAM usage explode when you run CUDA_VISIBLE_DEVICES="" python test.py
you mean, trying to train on CPU?
btw you may want to set `max_seq_length` in the `SFTConfig` to limit the GPU memory usage.
> by current best guess is that it is a BF16 related issue (as my r5 4600h does not natively support it, probably off though)
BF16 is off by default, yes | 2,460 |
Kallinteris-Andreas | 2024-12-11T14:47:30 | GPU does not work for me (works for my other RL projects)
```sh
$ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True py test.py
[2024-12-11 16:34:26,123] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
No ROCm runtime is found, using ROCM_HOME='/opt/rocm'
0%| | 0/3 [00:00<?, ?it/s]Traceback (most recent call last):
File "/home/master-andreas/job/trl-test/test.py", line 16, in <module>
trainer.train()
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 2164, in train
return inner_training_loop(
^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 2522, in _inner_training_loop
tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 3667, in training_step
loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/trainer.py", line 3721, in compute_loss
outputs = model(**inputs)
^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 1140, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 870, in forward
layer_outputs = decoder_layer(
^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 613, in forward
hidden_states = self.mlp(hidden_states)
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/transformers-kalli/src/transformers/models/qwen2/modeling_qwen2.py", line 223, in forward
return self.down_proj(self.act_fn(self.gate_proj(hidden_state)) * self.up_proj(hidden_state))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1736, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/modules/activation.py", line 432, in forward
return F.silu(input, inplace=self.inplace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/master-andreas/job/trl-test/test_env/lib/python3.12/site-packages/torch/nn/functional.py", line 2380, in silu
return torch._C._nn.silu(input)
^^^^^^^^^^^^^^^^^^^^^^^^
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 38.00 MiB. GPU 0 has a total capacity of 3.63 GiB of which 11.31 MiB is free. Including non-PyTorch memory, this process has 3.55 GiB memory in use. Of the allocated memory 3.47 GiB is allocated by PyTorch, and 8.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
0%| | 0/3 [00:00<?, ?it/s]
``` | 2,460 |
qgallouedec | 2024-12-11T14:59:11 | WDYM it doesn't work? It seems to work from the traceback.
I can see that your device only has 3.63 GiB, which is not enough to run the example. With `max_seq_length=128` you'll need around 12GB
![W B Chart 11_12_2024, 15_58_14](https://github.com/user-attachments/assets/63349049-b7f3-4c09-9229-b1c3f1914c90) | 2,460 |
Kallinteris-Andreas | 2024-12-11T15:27:51 | Here is the DRAM usage per value of `max_seq_length`
max_seq_length -> max RAM usage observed (rounded up)
4 -> 10GB DRAM
32 -> 9GB DRAM
128 -> 11GB DRAM
512 -> 18GB DRAM
1024 (default) -> 32GB+ DRAM
Using `max_seq_length=128` seems to require 28 hours on my CPU, which is an improvement over not running at all.
I am not sure what `max_seq_length` actually does; I am assuming it limits the context length used during fine-tuning. The docstring mentions something about `ConstantLengthDataset`, but I have not found out what it is.
| 2,460 |
qgallouedec | 2024-12-11T15:37:44 | > I am assuming it limits the context length used during fine-tuning
Yes, that's what it does.
> mentions something about ConstantLengthDataset but I have not found it what it is.
This is a special dataset setting where all data have the same length. Not relevant for this issue though | 2,460 |
Kallinteris-Andreas | 2024-12-12T01:29:36 | How much time does it take to run this simple example on your hardware?
```py
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset
dataset = load_dataset("trl-lib/Capybara", split="train")
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT", max_seq_length=128)
trainer = SFTTrainer(
    args=training_args,
    model="Qwen/Qwen2.5-0.5B",
    train_dataset=dataset,
)
trainer.train()
``` | 2,460 |
Kallinteris-Andreas | 2024-12-16T10:23:40 | Closing, as it appears to be the natural requirement of SFT | 2,460 |
qgallouedec | 2024-12-13T22:16:30 | Thanks for this suggestion.
Can you quantify the speedup?
Any idea how to properly set the gradient checkpointing configurations?
Can we reproduce the speedup with a very simple code example?
| 2,459 |
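For reference, one common way gradient checkpointing is configured through the training arguments; whether these settings match the speedup being discussed in this PR is an open question, and the output dir is a placeholder.

```python
from trl import SFTConfig

args = SFTConfig(
    output_dir="out",  # placeholder
    gradient_checkpointing=True,
    gradient_checkpointing_kwargs={"use_reentrant": False},
)
```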
qgallouedec | 2024-12-11T17:10:04 | Hey, thanks for contributing!
Is it really a loss type? It seems to me that it can be combined with any loss type, no?
What about having a new arg in `DPOConfig`? maybe `length_normalize`?
Also, I'd add a test for this | 2,458 |
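To illustrate what length normalization would mean here (shapes and names are assumptions, not the DPOTrainer internals): the summed per-token log-probs are divided by the number of completion tokens.

```python
import torch

per_token_logps = torch.randn(2, 6)  # (batch, seq_len), placeholder values
loss_mask = torch.tensor([[1, 1, 1, 1, 0, 0],
                          [1, 1, 1, 1, 1, 1]], dtype=torch.bool)

sum_logps = (per_token_logps * loss_mask).sum(-1)   # usual: sum over completion tokens
norm_logps = sum_logps / loss_mask.sum(-1)          # length-normalized: mean per token
```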
HuggingFaceDocBuilderDev | 2024-12-10T16:14:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2457). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,457 |
qgallouedec | 2024-12-10T17:54:12 | This is an interesting finding! ~I suspect it's related to https://github.com/huggingface/trl/issues/2175~. I'm investigating. | 2,456 |
qgallouedec | 2024-12-10T18:54:04 | The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).
To set the number of gradient accumulation steps, users can either:
1. Specify `num_steps` in `AcceleratorConfig`, or
2. Use `TrainingArguments.gradient_accumulation_steps` when initializing the `transformers.Trainer`.
However, in both cases, the gradient norm (`grad_norm`) is computed using the accelerator [here](https://github.com/huggingface/transformers/blame/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L2557). When using `TrainingArguments.gradient_accumulation_steps` to define the accumulation steps, the accelerator does not account for the specified value when calculating the gradient norm.
Adding a `gradient_accumulation_steps` argument to the `Accelerator` initialization [here](https://github.com/huggingface/transformers/blame/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L5043) resolves the issue (as shown in the curves below). However, I'm pretty sure it's not what we want to do.
```diff
- self.accelerator = Accelerator(**args)
+ self.accelerator = Accelerator(**args, gradient_accumulation_steps=self.args.gradient_accumulation_steps)
```
@muellerzr, could you review this and share your thoughts?
--
`--gradient_accumulation_steps 8 --per_device_train_batch_size 4`
![Screenshot 2024-12-10 at 19 50 52](https://github.com/user-attachments/assets/9f22e505-6394-4561-9f09-1f7d2df196ed)
`--gradient_accumulation_steps 32 --per_device_train_batch_size 1`
![Screenshot 2024-12-10 at 19 51 11](https://github.com/user-attachments/assets/198999ea-892d-4797-be36-6f200e01f18c)
Before the fix : red/pink ; after the fix blues
![Screenshot 2024-12-10 at 20 01 10](https://github.com/user-attachments/assets/957677a7-bb81-4d5d-8947-7ab0daa1e6e1)
| 2,456 |
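For context, this is the pattern Accelerate itself expects when gradient accumulation is declared on the `Accelerator`; a standalone sketch with a toy model, not the Trainer code path discussed above.

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=8)
model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
model, optimizer = accelerator.prepare(model, optimizer)

for step in range(8):
    with accelerator.accumulate(model):
        loss = model(torch.randn(2, 4, device=accelerator.device)).sum()
        accelerator.backward(loss)
        # clip_grad_norm_ returns the total norm it measured
        grad_norm = accelerator.clip_grad_norm_(model.parameters(), max_norm=1.0)
        optimizer.step()
        optimizer.zero_grad()
```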
muellerzr | 2024-12-11T02:52:48 | Correct, that's not what we want to do because with the fix to how we calculate the number of items in the batch, the losses will not align and things will be off, so we *don't* divide the loss by accumulation steps if we know that value. I'd need to play with this a bit as I'm not 100% sure if we can just modify the grads for clipping without modifying the overall loss we just calculated :thinking: | 2,456 |
AIR-hl | 2024-12-11T03:10:34 | > The issue arises from how the accelerator is configured in [`create_accelerator_and_postprocess`](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990).
@qgallouedec I have a new question: if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why is `trl.SFTTrainer`'s behavior normal while `trl.DPOTrainer`'s isn't? They both inherit from `transformers.Trainer`.
sft, `batch_size=4`, `accumulation=8`
![7cf799b818cdced95fc4632de02a8fba](https://github.com/user-attachments/assets/35e77e32-544a-4e25-90d9-a3b2ba2b8525)
sft, `batch_size=2`, `accumulation=16`
![1eba3468eab71db9185de3a1ab0120b9](https://github.com/user-attachments/assets/2eadda34-61e4-4cf4-ba63-153d23d7bcd1)
sft, `batch_size=1`, `accumulation=32`
![c6e2266b5eb3ff8736fe652a85124a41](https://github.com/user-attachments/assets/02a34f43-7b98-4ac1-b2c3-c33cf6cb66a0)
| 2,456 |
qgallouedec | 2024-12-11T10:21:00 | > @qgallouedec I have a new question that if the problem arises from [create_accelerator_and_postprocess](https://github.com/huggingface/transformers/blob/91b8ab18b778ae9e2f8191866e018cd1dc7097be/src/transformers/trainer.py#L4990) in `transformers.Trainer`, why `trl.SFTTrainer`'s behavior is normal, but `trl.DPOTrainer` isnt, they both inherit from `transformers.Trainer`
I can't explain it right now. Any idea?
| 2,456 |
qgallouedec | 2024-12-11T10:41:00 | I may have found the solution: https://github.com/huggingface/transformers/pull/35207
Running some experiments... | 2,456 |
qgallouedec | 2024-12-11T11:12:53 | ## Does it solve the issue?
### Before the fix
same effective batch size (32)
- grad accumulation = 32 / batch_size = 1
- grad accumulation = 8 / batch_size = 4
![Screenshot 2024-12-11 at 12 04 50](https://github.com/user-attachments/assets/d4b7513b-23c3-427a-aed7-72614bf337d0)
We can see here that the grad_norm is different while it should be the same.
### After the fix
same effective batch size (32)
- grad accumulation = 32 / batch_size = 1
- grad accumulation = 8 / batch_size = 4
![Screenshot 2024-12-11 at 12 04 40](https://github.com/user-attachments/assets/40b10719-28a5-4cdc-b2ed-c39e9421b2d9)
Now the grad_norm is the same.
## Does it impact the results?
### Config 1
grad accumulation = 32 / batch_size = 1 (effective batch size = 32). Curves are _before the fix_ and _after the fix_
![Screenshot 2024-12-11 at 12 04 14](https://github.com/user-attachments/assets/81b7135a-921f-4275-8dd5-27cce38bb612)
The only value impacted is the grad_norm, no impact on loss
### Config 2
grad accumulation = 8 / batch_size = 4 (effective batch size = 32). Curves are _before the fix_ and _after the fix_
![Screenshot 2024-12-11 at 12 03 13](https://github.com/user-attachments/assets/6a26889f-7314-4b10-9919-16b31bc0c77a)
The only value impacted is the grad_norm, no impact on loss
| 2,456 |
AIR-hl | 2024-12-11T13:01:32 | @qgallouedec Thanks for your work! So this bug actually only affects the reported logs and not the training results, right? :)
| 2,456 |
qgallouedec | 2024-12-11T13:03:18 | That's what the results suggest, yes | 2,456 |
qgallouedec | 2024-12-11T13:25:54 | Leaving the issue open until https://github.com/huggingface/transformers/pull/35207 is properly merged | 2,456 |
August-murr | 2024-12-10T09:29:57 | @qgallouedec how's everything so far?
Is there anything you'd like me to change? | 2,455 |
qgallouedec | 2024-12-10T09:38:57 | Thanks @August-murr for this PR!
As mentioned in this [comment](https://github.com/huggingface/trl/issues/2429#issuecomment-2515244907), I think it would be better to start by only adding this feature to the functions of `trl/data_utils.py` and check that everything works as expected, without adding it to any trainer for the moment.
In fact, my idea is to gradually drop the functions from `trl/extras/dataset_formatting.py`. | 2,455 |
qgallouedec | 2024-12-10T11:34:47 | Looks good! We just need to update the docstrings of the functions and add some unittests | 2,455 |
August-murr | 2024-12-11T11:58:34 | I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct? | 2,455 |
qgallouedec | 2024-12-11T12:12:44 | > I'm assuming we should also integrate the functions from `data_utils.py` into all the trainers, correct?
Indeed, but we'll do that in follow-up PR. I think it's the best way to go | 2,455 |
HuggingFaceDocBuilderDev | 2024-12-11T12:21:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2455). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,455 |
August-murr | 2024-12-11T19:09:33 | @qgallouedec
let me know if there is anything else I need to do. | 2,455 |
qgallouedec | 2024-12-11T21:44:24 | Looks good to me! Just one minor comment | 2,455 |
qgallouedec | 2024-12-12T15:46:38 | ```python
from transformers import AutoProcessor
from trl import apply_chat_template

tokenizer = AutoProcessor.from_pretrained("trl-internal-testing/tiny-LlamaForCausalLM-3.2")

# Define dummy test tools
def get_current_temperature(location: str):
    """
    Gets the temperature at a given location.

    Args:
        location: The location to get the temperature for
    """
    return 22.0

# Define test case
test_case = {
    "prompt": [
        {"content": "Whats the temperature in London?", "role": "user"},
    ]
}

# Test with tools
result_with_tools = apply_chat_template(test_case, tokenizer, tools=[get_current_temperature])
print(result_with_tools["prompt"])
```
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Environment: ipython
Cutting Knowledge Date: December 2023
Today Date: 12 Dec 2024
<|eot_id|><|start_header_id|>user<|end_header_id|>
Given the following functions, please respond with a JSON for a function call with its proper arguments that best answers the given prompt.
Respond in the format {"name": function name, "parameters": dictionary of argument name and its value}.Do not use variables.
{
"type": "function",
"function": {
"name": "get_current_temperature",
"description": "Gets the temperature at a given location.",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The location to get the temperature for"
}
},
"required": [
"location"
]
}
}
}
Whats the temperature in London?<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
Nice! | 2,455 |
HuggingFaceDocBuilderDev | 2024-12-09T17:55:56 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2454). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,454 |
kashif | 2024-12-09T13:30:15 | @NINGBENZHE do you know where in the code the issue is occurring? | 2,453 |
NINGBENZHE | 2024-12-09T14:09:34 | > @NINGBENZHE do you know where in the code the issue is occurring?
I haven't found the issue yet, but after making the modifications, the critic's loss is functioning normally, and the optimizer's functionality has been restored. | 2,453 |
kashif | 2024-12-09T14:10:58 | ok let me try to pin-point the issue... and perhaps try to add a failing test? | 2,453 |
NINGBENZHE | 2024-12-09T14:23:17 | > ok let me try to pin-point the issue... and perhaps try to add a failing test?
You can repeat the same data and observe the critic's loss; it remains unchanged.
| 2,453 |
NINGBENZHE | 2024-12-09T14:24:38 | I found that the issue might have been introduced by this PR.
https://github.com/huggingface/trl/commit/16fa13ce728e537a91742571b0c4824fb3a98a30 | 2,453 |
HuggingFaceDocBuilderDev | 2024-12-09T14:44:23 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2453). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,453 |
kashif | 2024-12-09T14:47:50 | @NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting? | 2,453 |
NINGBENZHE | 2024-12-10T01:10:44 | > @NINGBENZHE can you kindly run `make precommit` in the root dir to fix the formatting?
I made the submission directly on the web, without using local Git, and only modified the parameter names, so it should not have introduced any new formatting issues. Can you force the merge? | 2,453 |
kashif | 2024-12-12T10:45:09 | closing as these changes have been merged into the PR #2463 | 2,453 |
asparius | 2024-12-10T14:08:59 | You have two GPUs, but you only use 1 in your accelerate config. You could also use DeepSpeed to further decrease the memory footprint. Lastly, keep per_device_train_batch_size as low as possible and instead increase gradient_accumulation_steps. | 2,452 |
gp-1108 | 2024-12-12T18:20:59 | Hi @asparius, thank you for the suggestions. As I am running this code on a computing cluster I am having some problems with [deepspeed](https://github.com/microsoft/DeepSpeed/issues/2772#issuecomment-2151669077). I would like to keep this issue open and get back once I have solved those issues | 2,452 |
qgallouedec | 2024-12-13T22:33:34 | It might come from your data. Do you have long sequences in your dataset?
It's highly recommended to set these arguments in the `DPOConfig`: `max_length`, `max_prompt_length`, `max_completion_length`. E.g.
```python
DPOConfig(
    ...,
    max_prompt_length=128,
    max_completion_length=512,
)
``` | 2,452 |
asparius | 2024-12-14T00:46:01 | @gp-1108 I faced similar issues. I would recommend to check available modules in your cluster by a command like "module avail" and load a cuda installation by "module load", of course this is assuming you are in slurm env. If you dont have cuda in available modules, perhaps you could ask cluster admins to download it. I think you should be good after this. | 2,452 |
gp-1108 | 2024-12-16T00:52:13 | Hi all, I have finally fixed all of the CUDA issues with the computing cluster 😮💨.
However, I did not fix the original issue. I am still running OOM even after using two full A40s.
I have tweaked both the script and the accelerate config, so I will leave them below (I hope everything is set up as it should be).
**TRL ENV:**
```
- Platform: Linux-6.12.1-1.el8.elrepo.x86_64-x86_64-with-glibc2.35
- Python version: 3.10.12
- PyTorch version: 2.5.1
- CUDA device(s): NVIDIA A40, NVIDIA A40
- Transformers version: 4.46.0
- Accelerate version: 1.2.1
- Accelerate config:
- compute_environment: LOCAL_MACHINE
- distributed_type: MULTI_GPU
- mixed_precision: no
- use_cpu: False
- debug: True
- num_processes: 2
- machine_rank: 0
- num_machines: 1
- gpu_ids: all
- rdzv_backend: static
- same_network: True
- main_training_function: main
- enable_cpu_affinity: False
- downcast_bf16: no
- tpu_use_cluster: False
- tpu_use_sudo: False
- tpu_env: []
- dynamo_config: {'dynamo_backend': 'INDUCTOR'}
```
**SCRIPT:**
```python
import argparse
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftConfig, PeftModel, LoraConfig
from trl import DPOConfig, DPOTrainer
import utils as ut
import torch
from accelerate import Accelerator
import os

os.environ['WANDB_DISABLED'] = 'true'
#import wandb


def print_memory_usage(description="Memory Usage"):
    """
    Prints the current memory usage for all available GPU devices.

    Args:
        description (str): A short description for context.
    """
    if torch.cuda.is_available():
        print(f"{description}:")
        for i in range(torch.cuda.device_count()):
            device = f"cuda:{i}"
            free_mem, total_mem = torch.cuda.mem_get_info(device)
            used_mem = total_mem - free_mem
            total_mem_mb = total_mem / 1024**2  # Convert to MB
            free_mem_mb = free_mem / 1024**2  # Convert to MB
            used_mem_mb = used_mem / 1024**2  # Convert to MB
            print(f"  Device: {device}")
            print(f"    Total Memory: {total_mem_mb:.2f} MB")
            print(f"    Used Memory: {used_mem_mb:.2f} MB")
            print(f"    Free Memory: {free_mem_mb:.2f} MB")
    else:
        print("CUDA is not available on this system.")


def main(args):
    """
    wandb.init(
        # set the wandb project where this run will be logged
        project="my-awesome-project",
    )
    """
    accelerator = Accelerator(
        mixed_precision="no",
        gradient_accumulation_steps=args.gradient_acc,
    )
    print(args)
    print_memory_usage(description="Before anything")

    # Load dataset
    print("Loading dataset...")
    dataset = ut.load_dataset(args.dataset_path)
    dataset = dataset.train_test_split(test_size=args.test_split)

    # Load PEFT configuration
    print(f"Loading PEFT model configuration from {args.peft_model_id}...")
    config = PeftConfig.from_pretrained(args.peft_model_id)

    # Configure quantization
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        llm_int8_threshold=6.0,
        llm_int8_has_fp16_weight=False,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    )

    # Load base model
    print(f"Loading base model from {config.base_model_name_or_path}...")
    model = AutoModelForCausalLM.from_pretrained(
        config.base_model_name_or_path,
        quantization_config=bnb_config,
        trust_remote_code=True,  # Hardcoded
        torch_dtype=torch.bfloat16,
    )
    model.config.use_cache = False
    model.enable_input_require_grads()  # To avoid error https://github.com/huggingface/trl/issues/731
    print_memory_usage(description="After model init")

    # Load tokenizer
    print(f"Loading tokenizer from {config.base_model_name_or_path}...")
    tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
    tokenizer.eos_token = "<|eot_id|>"  # Hardcoded
    tokenizer.pad_token = "<|finetune_right_pad_id|>"  # Hardcoded

    # Load PEFT model
    print(f"Loading PEFT model from {args.peft_model_id}...")
    model = PeftModel.from_pretrained(
        model,
        args.peft_model_id,
        adapter_name="trainable",
        is_trainable=True
    )
    model.load_adapter(args.peft_model_id, adapter_name="reference")  # Hardcoded
    print_memory_usage(description="After two adapters")

    tokenizer.chat_template = None

    # Configure training arguments
    training_args = DPOConfig(
        learning_rate=args.learning_rate,
        beta=args.beta,
        loss_type=args.loss_type,
        use_weighting=args.use_weighting,
        rpo_alpha=args.rpo_alpha,
        output_dir=args.output_dir,
        logging_steps=args.logging_steps,
        model_adapter_name="trainable",  # Hardcoded
        ref_adapter_name="reference",  # Hardcoded
        per_device_train_batch_size=args.batch_size,
        gradient_accumulation_steps=args.gradient_acc,
    )

    # Configure Lora
    peft_config = LoraConfig(
        r=16,
        lora_alpha=32,
        lora_dropout=0.1,
        target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
    )

    # Initialize DPO trainer
    print("Initializing DPO trainer...")
    dpo_trainer = DPOTrainer(
        model=model,
        args=training_args,
        tokenizer=tokenizer,
        train_dataset=dataset["train"],
        eval_dataset=dataset["test"],
        peft_config=peft_config,
    )

    # Prepare everything for training
    model, tokenizer, train_dataset, eval_dataset = accelerator.prepare(
        model, tokenizer, dataset["train"], dataset["test"]
    )

    # Train the model
    print("Starting training...")
    dpo_trainer.train()
    print("Training complete.")
    dpo_trainer.save_model()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Fine-tune a model using PEFT and DPOTrainer.")
    parser.add_argument("--dataset_path", type=str, required=True, help="Path to the dataset file (JSONL).")
    parser.add_argument("--test_split", type=float, default=0.15, help="Proportion of dataset to use for testing.")
    parser.add_argument("--peft_model_id", type=str, required=True, help="Path to the PEFT model directory.")
    parser.add_argument("--load_in_8bit", action="store_true", help="Enable 8-bit quantization.")
    parser.add_argument("--output_dir", type=str, default="Llama31_DPO", help="Directory to save the trained model.")
    parser.add_argument("--logging_steps", type=int, default=1, help="Number of steps for logging during training.")
    parser.add_argument("--learning_rate", type=float, default=1e-6, help="Learning rate for the AdamW optimizer.")
    parser.add_argument("--beta", type=float, default=0.1, help="Parameter controlling deviation from the reference model.")
    parser.add_argument("--loss_type", type=str, default="sigmoid", help="Type of loss to use for training.")
    parser.add_argument("--use_weighting", action="store_true", help="Enable weighting of the loss.")
    parser.add_argument("--rpo_alpha", type=float, default=None, help="Alpha parameter for the RPO paper.")
    parser.add_argument("--batch_size", type=int, default=1, help="Batch size for training per gpu.")
    parser.add_argument("--gradient_acc", type=int, default=1, help="Gradient accumulation steps.")
    args = parser.parse_args()
    main(args)
```
The script crashes after being called with the following parameters:
```sh
accelerate launch --num_processes=2 --num_machines=1 --mixed_precision=no --dynamo_backend=inductor dpo_finetuning.py \
--dataset_path ../dataset_generation/data/dpo_dialogues.jsonl \
--peft_model_id ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800 \
--output_dir ./tmp \
--logging_steps 1 \
--load_in_8bit \
--batch_size 1 \
--gradient_acc 1
```
The full traceback is this: (sorry for the duplication, it is two processes)
```
The following values were not passed to `accelerate launch` and had defaults used instead:
More than one GPU was found, enabling multi-GPU training.
If this was unintended please pass in `--num_processes=1`.
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
[2024-12-16 01:33:02,163] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[2024-12-16 01:33:02,165] [INFO] [real_accelerator.py:219:get_accelerator] Setting ds_accelerator to cuda (auto detect)
Warning: The cache directory for DeepSpeed Triton autotune, /home/girottopie/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Warning: The cache directory for DeepSpeed Triton autotune, /home/girottopie/.triton/autotune, appears to be on an NFS system. While this is generally acceptable, if you experience slowdowns or hanging when DeepSpeed exits, it is recommended to set the TRITON_CACHE_DIR environment variable to a non-NFS path.
Namespace(dataset_path='../dataset_generation/data/dpo_dialogues.jsonl', test_split=0.15, peft_model_id='../llama3.1_finetuning/output/llama3. 1_SFT_from_Base/checkpoint-800', load_in_8bit=True, output_dir='./tmp', logging_steps=1, learning_rate=1e-06, beta=0.1, loss_type='sigmoid', use_weighting=False, rpo_alpha=None, batch_size=1, gradient_acc=1)
Before anything:
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 268.38 MB
Free Memory: 45246.62 MB
Namespace(dataset_path='../dataset_generation/data/dpo_dialogues.jsonl', test_split=0.15, peft_model_id='../llama3.1_finetuning/output/llama3. 1_SFT_from_Base/checkpoint-800', load_in_8bit=True, output_dir='./tmp', logging_steps=1, learning_rate=1e-06, beta=0.1, loss_type='sigmoid', use_weighting=False, rpo_alpha=None, batch_size=1, gradient_acc=1)
Before anything:
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 533.69 MB
Free Memory: 44981.31 MB
Loading dataset...
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 533.69 MB
Free Memory: 44981.31 MB
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 533.69 MB
Free Memory: 44981.31 MB
Loading dataset...
Loading PEFT model configuration from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading base model from meta-llama/Meta-Llama-3.1-8B...
Loading PEFT model configuration from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading base model from meta-llama/Meta-Llama-3.1-8B...
`low_cpu_mem_usage` was None, now default to True since model is quantized.
`low_cpu_mem_usage` was None, now default to True since model is quantized.
^MLoading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]^MLoading checkpoint shards: 0%| | 0/4 [00:00<?, ?it/s]^MLoading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:11, 3.85s/it]^MLoading checkpoint shards: 25%|██▌ | 1/4 [00:03<00:11, 3.86s/it]^MLoading checkpoint shards: 50%|█████ | 2/4 [00:07<00:07, 3.70s/it]^MLoading checkpoint shards: 50%|█████ | 2/4 [00:07<00:07, 3.72s/it]^MLoading checkpoint shards: 75%|███████▌ | 3/4 [00:11<00:03, 3.66s/it]^MLoading checkpoint shards: 75%|███████▌ | 3/4 [00:11<00:03, 3.66s/it]^MLoading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 2.67s/it]^MLoading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 3.05s/it]
^MLoading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 2.67s/it]^MLoading checkpoint shards: 100%|██████████| 4/4 [00:12<00:00, 3.05s/it]
After model init:
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 6105.69 MB
Free Memory: 39409.31 MB
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 6105.69 MB
Free Memory: 39409.31 MB
Loading tokenizer from meta-llama/Meta-Llama-3.1-8B...
After model init:
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 6105.69 MB
Free Memory: 39409.31 MB
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 6105.69 MB
Free Memory: 39409.31 MB
Loading tokenizer from meta-llama/Meta-Llama-3.1-8B...
Loading PEFT model from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
Loading PEFT model from ../llama3.1_finetuning/output/llama3.1_SFT_from_Base/checkpoint-800...
After two adapters:
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 10387.69 MB
Free Memory: 35127.31 MB
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 6209.69 MB
Free Memory: 39305.31 MB
Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
Initializing DPO trainer...
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/bnb.py:355: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
warnings.warn(
After two adapters:
Device: cuda:0
Total Memory: 45515.00 MB
Used Memory: 10397.69 MB
Free Memory: 35117.31 MB
Device: cuda:1
Total Memory: 45515.00 MB
Used Memory: 6209.69 MB
Free Memory: 39305.31 MB
Using the `WANDB_DISABLED` environment variable is deprecated and will be removed in v5. Use the --report_to flag to control the integrations used for logging result (for instance --report_to none).
Initializing DPO trainer...
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/bnb.py:355: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
warnings.warn(
// Here it just fills in tqdm bars so I will skip this bit
100%|██████████| 953/953 [00:00<00:00, 9120.09 examples/s]
^MApplying chat template to eval dataset: 0%| | 0/953 [00:00<?, ? examples/s]^MApplying chat template to eval dataset: 100%|██████████| 953/953 [00:00<00:00, 17233.72 examples/s]
....
Tokenizing eval dataset: 99%|█████████▉| 945/953 [00:05<00:00, 168.39 examples/s]^MTokenizing eval dataset: 100%|██████████| 953/953 [00: 05<00:00, 161.72 examples/s]
Starting training...
Starting training...
^M 0%| | 0/8100 [00:00<?, ?it/s][rank1]:W1216 01:34:45.892000 1621718 torch/_logging/_internal.py:1081] [0/0] Profiler function <class 'torch. autograd.profiler.record_function'> will be ignored
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin None._SimpleCData. __new__. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/ C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page. html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
/usr/local/lib/python3.10/dist-packages/torch/_dynamo/variables/functions.py:725: UserWarning: Graph break due to unsupported builtin None._SimpleCData. __new__. This function is either a Python builtin (e.g. _warnings.warn) or a third-party C/C++ Python extension (perhaps created with pybind). If it is a Python builtin, please file an issue on GitHub so the PyTorch team can add support for it and see the next case for a workaround. If it is a third-party C/ C++ Python extension, please either wrap it into a PyTorch-understood custom operator (see https://pytorch.org/tutorials/advanced/custom_ops_landing_page. html for more details) or, if it is traceable, use torch.compiler.allow_in_graph.
torch._dynamo.utils.warn_once(msg)
[rank0]:[W1216 01:35:25.807899242 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
[rank1]:[W1216 01:35:32.085567519 reducer.cpp:1400] Warning: find_unused_parameters=True was specified in DDP constructor, but did not find any unused parameters in the forward pass. This flag results in an extra traversal of the autograd graph every iteration, which can adversely affect performance. If your model indeed never has any unused parameters in the forward pass, consider turning this flag off. Note that this warning may be a false positive if your model has flow control causing later iterations to have unused parameters. (function operator())
Could not estimate the number of tokens of the input, floating-point operations will not be computed
...
// Skipping here some loss metrics prompted on the first 3 samples
...
^M 0%| | 3/8100 [01:29<46:45:49, 20.79s/it]^M 0%| | 4/8100 [01:31<36:56:14, 16.42s/it][rank1]: Traceback (most recent call last):
[rank1]: File "/nfsd/nldei/girottopie/NLP_DPO-Finetuning/llama3.1_dpo/dpo_finetuning.py", line 175, in <module>
[rank1]: main(args)
[rank1]: File "/nfsd/nldei/girottopie/NLP_DPO-Finetuning/llama3.1_dpo/dpo_finetuning.py", line 154, in main
[rank1]: dpo_trainer.train()
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2122, in train
[rank1]: return inner_training_loop(
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 2474, in _inner_training_loop
[rank1]: tr_loss_step = self.training_step(model, inputs, num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3572, in training_step
[rank1]: loss = self.compute_loss(model, inputs, num_items_in_batch=num_items_in_batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1371, in compute_loss
[rank1]: loss, metrics = self.get_batch_loss_metrics(model, inputs, train_eval="train")
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1323, in get_batch_loss_metrics
[rank1]: model_output = self.concatenated_forward(model, batch)
[rank1]: File "/usr/local/lib/python3.10/dist-packages/trl/trainer/dpo_trainer.py", line 1274, in concatenated_forward
[rank1]: per_token_logps = torch.gather(logits.log_softmax(-1), dim=2, index=labels.unsqueeze(2)).squeeze(2)
[rank1]: torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.54 GiB. GPU 1 has a total capacity of 44.45 GiB of which 1.46 GiB is free. Including non-PyTorch memory, this process has 42.72 GiB memory in use. Process 1621717 has 260.00 MiB memory in use. Of the allocated memory 37.11 GiB is allocated by PyTorch, and 5.18 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/ cuda.html#environment-variables)
W1216 01:36:23.080000 1621711 torch/distributed/elastic/multiprocessing/api.py:897] Sending process 1621717 closing signal SIGTERM
E1216 01:36:23.791000 1621711 torch/distributed/elastic/multiprocessing/api.py:869] failed (exitcode: 1) local_rank: 1 (pid: 1621718) of binary: /usr/bin/ python3
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
args.func(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 1159, in launch_command
multi_gpu_launcher(args)
File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 793, in multi_gpu_launcher
distrib_run.run(args)
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 910, in run
elastic_launch(
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 138, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 269, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
dpo_finetuning.py FAILED
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2024-12-16_01:36:23
host : gpu1.dei.unipd.it
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 1621718)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
**STACK TRACE TLDR:**
Everything seems to be ok, it can even train on a couple of examples before running out of VRAM.
I think that @qgallouedec might be onto something, as my prompts and responses are quite lenghty. I have noticed that when pre-processing the dataset the trainer will add a crazy amount of padding tokens also.
How can I check if the length of the examples is the culprit? The examples are formatted by the trainer only once the `trainer.train()` method is called.
*NOTE*: I cannot afford to truncate the samples' text, as it is critical to sometimes have those lengthy prompt+answer pairs during training.
| 2,452 |
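One rough way to answer the question above, assuming the standard preference-dataset columns (`prompt`/`chosen` as plain strings) and a placeholder path for the JSONL file; the follow-up comment below does essentially this with plots.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
dataset = load_dataset("json", data_files="dpo_dialogues.jsonl", split="train")  # placeholder path

lengths = [
    len(tokenizer(ex["prompt"] + ex["chosen"])["input_ids"])  # assumes string columns
    for ex in dataset
]
print("max:", max(lengths), "mean:", sum(lengths) / len(lengths))
```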
gp-1108 | 2024-12-20T12:02:25 | Hi, I have finally solved the issue and I am going to leave it here for posterity.
The issue lay mainly in two things:
1. **Some samples were too long**
2. **The PEFT configuration was not working**
**MANAGING SAMPLE LENGTH**:
I plotted the lengths across a couple of metrics:
![image](https://github.com/user-attachments/assets/3be16126-de18-4c3a-b4f4-1e032e8e7444)
```
[INFO] Prompt lengths
Min length: 22
Max length: 5541
Mean length: 588.0766687657431
Median length: 569.0
STD length: 419.24555148568976
[INFO] Chosen response lengths
Min length: 47
Max length: 4826
Mean length: 192.51637279596977
Median length: 183.0
STD length: 99.76849327730292
[INFO] Rejected response lengths
Min length: 29
Max length: 185
Mean length: 71.0676952141058
Median length: 69.0
STD length: 17.396042841024304
[INFO] Overall lengths (prompt + max(chosen, rejected)
Min length: 81
Max length: 5782
Mean length: 780.6544395465995
Median length: 764.0
STD length: 435.2110251509147
```
You can clearly see that in some cases we get up to 6k length. This is perhaps not ideal.
I have eliminated those from the dataset by using a [modified z-score](https://www.statology.org/modified-z-score/) but you can choose whatever you prefer.
Afterwards, the maximum length was 2k, which is manageable.
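A minimal sketch of the modified z-score filter mentioned above; the example lengths reuse the statistics reported earlier, and the 3.5 cutoff is a common convention rather than the exact value used in this run.

```python
import numpy as np

lengths = np.array([81, 569, 764, 780, 2100, 5782])  # illustrative token lengths
median = np.median(lengths)
mad = np.median(np.abs(lengths - median))            # median absolute deviation
modified_z = 0.6745 * (lengths - median) / mad
keep = lengths[np.abs(modified_z) < 3.5]             # drop extreme outliers
```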
**PEFT CONFIGURATION**:
I thought that by passing the `peft_config` param to the `DPOTrainer` it would automatically take care of it.
However, upon closer inspection I could see in the logs that when saving the model I would get:
```
UserWarning: Setting `save_embedding_layers` to `True` as embedding layers found in `target_modules`.
```
Even though my peft configuration did not include the embedding layer in the targets.
```python
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
)
```
I resorted to the good old `get_peft_model` method from `peft`. The final setup for the model was as follows:
```python
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=['q_proj', 'v_proj', 'k_proj', 'o_proj', 'lm_head']
)

model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    quantization_config=bnb_config,
    trust_remote_code=True,  # Hardcoded
    torch_dtype=torch.bfloat16,
)
model.config.use_cache = False
model.enable_input_require_grads()  # To avoid error https://github.com/huggingface/trl/issues/731

model = PeftModel.from_pretrained(
    model,
    args.peft_model_id,
    adapter_name="trainable",
    is_trainable=True
)
model.load_adapter(args.peft_model_id, adapter_name="reference")

model = get_peft_model(model, peft_config)
Also avoiding the `peft_config` param in the `DPOTrainer` altogether.
I don't know if this is an issue or intended behaviour @qgallouedec
**OTHER IMPROVEMENTS**:
Although I already implemented these in the previous steps, I would like to clarify that setting `per_device_train_batch_size=1` and `gradient_accumulation_steps=4` was also a key part of the solution. Now I am getting a solid 80-90% VRAM usage without any disruption.
| 2,452 |
Kallinteris-Andreas | 2024-12-08T23:00:17 | What is the reason that your model is not a `torch.nn.Module`? My first reaction would be that you are doing something wrong, unless you provide a detailed explanation as to why.
Can you convert your model to a `torch.nn.Module`?
But if you have a good reason for a custom reward model class, you would have to modify all usages of `self.reward_model` in https://github.com/huggingface/trl/blob/9c5388b69e0842f76edc46a2ff9d0b51e1db4337/trl/trainer/ppo_trainer.py | 2,451 |
hwhyyds | 2024-12-09T06:30:26 | In my code, I have trained a reward model whose three outputs are in a format similar to '{"type1": 1, "type2": -1, "type3": 0}', which is different from the traditional output | 2,451 |
asparius | 2024-12-10T14:02:16 | > In my code, I have trained a reward model with three outputs was in a format similar to '{"type1": 1, "type2": -1, "type3": 0}', which is different from the traditional output
I believe you are doing some sort of classification, so you could still have nn-based module for classification part and then map its results to your predefined-rewards. You could use it per-token or per-completion depending on your policy optimization method | 2,451 |
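A sketch of the mapping described above: keep an `nn.Module` classifier and translate its predicted class into the fixed rewards from the example (`type1`/`type2`/`type3` → 1/-1/0). The wrapper name and the classifier's call signature are assumptions, not a TRL interface.

```python
import torch
import torch.nn as nn

class MappedRewardModel(nn.Module):  # hypothetical wrapper, not a TRL class
    def __init__(self, classifier: nn.Module):
        super().__init__()
        self.classifier = classifier
        self.register_buffer("class_rewards", torch.tensor([1.0, -1.0, 0.0]))  # type1, type2, type3

    def forward(self, input_ids, attention_mask=None):
        logits = self.classifier(input_ids)   # (batch, 3); call signature assumed
        predicted = logits.argmax(dim=-1)
        return self.class_rewards[predicted]  # one scalar reward per sequence
```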
hwhyyds | 2024-12-16T10:58:18 | For example, my scores come from GPT-4o, so they can't be produced by an nn-based module | 2,451 |
kashif | 2024-12-11T12:31:48 | thanks @NIL-zhuang great catch! | 2,450 |
HuggingFaceDocBuilderDev | 2024-12-11T12:37:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2450). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,450 |
qgallouedec | 2024-12-11T12:45:14 | Do we test this collator somewhere? | 2,450 |
kashif | 2024-12-11T12:50:49 | we do test it.. but more for the labels rather than the content of the ids... let me see if i can add a failing test | 2,450 |
kashif | 2024-12-11T13:18:21 | @qgallouedec added failing tests | 2,450 |
asparius | 2024-12-10T14:23:57 | It is the entropy of the sequence generated by the policy given the prompt. Do you intend to measure something else? | 2,448 |
hubstrauss | 2024-12-10T15:36:05 | Oh I see, my bad - as the tokens were sampled from the model, you can get a sample-based estimate of the entropy. Thanks!
But then, why is the default value of INVALID_LOGPROB set to 1? When computing `-logprobs`, do these masked tokens contribute -1 each to the sum? | 2,448 |
asparius | 2024-12-10T18:54:38 | This was already mentioned in #2281. | 2,448 |
HuggingFaceDocBuilderDev | 2024-12-06T12:33:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2447). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,447 |
anhuong | 2024-12-06T17:30:29 | Does this also need to be updated in the [requirements.txt](https://github.com/huggingface/trl/blob/main/requirements.txt#L4)? as it still shows `transformers>=4.46.0` | 2,447 |
kashif | 2024-12-06T18:09:57 | @anhuong I believe the `requirements.txt` is used by the CI and the issue is fixed in the main branch... | 2,447 |
qgallouedec | 2024-12-06T12:24:20 | Thanks for reporting. A patch release is coming asap.
In the meantime, downgrading transformers to 4.46 should work.
```
pip install transformers==4.46
```
Related to https://github.com/huggingface/trl/pull/2381
Keeping the issue open until the release | 2,445 |
gp-1108 | 2024-12-06T14:59:42 | Thanks @qgallouedec, downgrading to the specified version worked! | 2,445 |
qgallouedec | 2024-12-13T22:36:16 | Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2
```
pip install --upgrade trl
``` | 2,445 |
pspdada | 2024-12-20T04:02:00 | > Solved with https://github.com/huggingface/trl/releases/tag/v0.12.2
>
> ```
> pip install --upgrade trl
> ```
Hello, I've noticed that this issue has resurfaced with the latest version, trl==0.13.0. Since the requirement has reverted to "transformers>=4.46.0" in this version, the problem has reappeared. Could the trl code be fixed to be compatible with the new version of transformers? | 2,445 |
qgallouedec | 2024-12-20T10:29:41 | This issue should be fixed in 0.13. Can you share your system info? (`trl env`) | 2,445 |
pspdada | 2024-12-20T11:56:57 | > This issue should be fixed in 0.13. Can you share your system info? (`trl env`)
I understand what happened with the changes in this part; it was due to an error in my implementation. I apologize for the disturbance. | 2,445 |
melissamao | 2024-12-29T13:22:04 | Same questions. | 2,444 |
HuggingFaceDocBuilderDev | 2024-12-05T19:15:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2443). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,443 |
HuggingFaceDocBuilderDev | 2024-12-05T18:54:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2442). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,442 |
HuggingFaceDocBuilderDev | 2024-12-05T15:00:27 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2441). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,441 |
qgallouedec | 2024-12-06T09:08:45 | Thanks! can you approve? | 2,441 |
HuggingFaceDocBuilderDev | 2024-12-05T14:24:42 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/trl/pr_2440). All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update. | 2,440 |
qgallouedec | 2024-12-04T18:15:42 | LGTM, thanks :) | 2,439 |
dame-cell | 2024-12-10T15:19:45 | Not really done yet, but for now everything seems to be working: if `padding_free` is set to True, the trainer will not pad, and the `attention_mask` will not be used.
For now, here are the tasks to be done:
- [x] Ensure that when `padding_free=True` the trainer will not pad
- [x] Ensure that when `padding_free=True` the trainer will not use or return `attention_mask`
- [x] Ensure that when `padding_free=True` we use `position_ids`
- [x] Make tests
| 2,437 |
dame-cell | 2024-12-11T13:02:27 | Most of the work is done; just some small things are left, like dealing with lists and converting to tensors. | 2,437 |