Benjamin Bossan

Results: 819 comments of Benjamin Bossan

Thanks for this report @maximus-21. For me to try to reproduce, could you please share your config yaml too?

Thanks for the additional information. I had to make some modifications, as some information was still missing, but I could get this to run: ```python import os import torch import...

I don't have experience with DeepSpeed, so I can't really help you here. But I wanted to mention that we're currently adding a [PEFT + DS guide](https://moon-ci-docs.huggingface.co/docs/peft/pr_1489/en/accelerate/deepspeed) to the PEFT...

I'm not an expert on stable diffusion, but AFAIK, there is no special handling required to fine-tune the text encoder when it comes to PEFT itself. You can use LoRA...
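To illustrate why no special handling is needed, here is a minimal pure-Python sketch of the LoRA idea itself (all names and shapes here are hypothetical, not PEFT's actual implementation): the frozen weight W is augmented by a low-rank update scaling * (B @ A), and this is all that happens when LoRA is attached to a text encoder's linear layers, same as for any other model.

```python
# Minimal LoRA sketch in pure Python (hypothetical names/shapes):
# the effective weight is W + (alpha / r) * B @ A, where W stays frozen
# and only the small matrices A (r x in) and B (out x r) are trained.

def matmul(X, Y):
    """Multiply two matrices given as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Return W + (alpha / r) * B @ A, the merged LoRA weight."""
    scaling = alpha / r
    delta = matmul(B, A)
    return [[w + scaling * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy example: 2x2 frozen weight, rank-1 update.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[1.0, 1.0]]    # r x in_features
B = [[1.0], [0.0]]  # out_features x r
W_eff = lora_effective_weight(W, A, B, alpha=2.0, r=1)
print(W_eff)  # [[3.0, 2.0], [0.0, 1.0]]
```

In PEFT itself, this wrapping is done for you by `LoraConfig` plus `get_peft_model`; the sketch only shows why the wrapped module's forward pass is model-agnostic.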

You should be able to use prompt learning techniques such as prompt-tuning too. What I meant is that methods not based on prompt learning, such as LoRA, IA³, BOFT, etc....
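The distinction between the two families can be sketched in a few lines (names hypothetical, not PEFT's API): prompt learning prepends trainable "virtual token" embeddings to the input while the model stays frozen, whereas methods like LoRA modify weights inside the model.

```python
# Hedged sketch of prompt tuning: learned prompt embeddings are
# concatenated before the real token embeddings, and only the prompt
# embeddings receive gradients; the base model is untouched.

def prepend_virtual_tokens(prompt_embeds, input_embeds):
    """Concatenate trainable prompt embeddings before the token embeddings."""
    return prompt_embeds + input_embeds

# 2 virtual tokens and 3 real tokens, embedding dim 2.
prompt_embeds = [[0.1, 0.2], [0.3, 0.4]]              # trainable
input_embeds = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # from the frozen embedding layer
full = prepend_virtual_tokens(prompt_embeds, input_embeds)
print(len(full))  # 5 -- the transformer now sees 2 extra positions
```

This is also why prompt-learning methods need no `target_modules`: they never touch the model's internal layers.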

Unfortunately, I also never came across a use case to fine-tune the LM of a SD model, and there are no examples I'm aware of. Note that `TaskType` is...

Note that you don't need to indicate a task type if the task you're training does not correspond to any of the existing ones. As to the rest, it really...

`original_module.weight` is a proxy for the currently active adapter. So in general, it will point to `modules_to_save.default.weight` but if you activate a different adapter or deactivate the adapter, that can...
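The proxy behavior can be illustrated with a small sketch (class and attribute names are hypothetical, not PEFT's actual implementation): the wrapper keeps a copy of the module per adapter, and its `weight` property delegates to whichever adapter is currently active, falling back to the original when adapters are disabled.

```python
# Illustrative sketch of a "modules to save"-style wrapper whose
# `weight` resolves to the active adapter's copy of the parameter.

class ModulesToSaveProxy:
    def __init__(self, original_weight):
        self.original_weight = original_weight
        self.adapters = {}          # adapter name -> that adapter's weight copy
        self.active_adapter = None  # None means adapters are disabled

    def add_adapter(self, name, weight):
        self.adapters[name] = weight

    @property
    def weight(self):
        # With an active adapter, return its copy; otherwise the original.
        if self.active_adapter is not None:
            return self.adapters[self.active_adapter]
        return self.original_weight

proxy = ModulesToSaveProxy(original_weight="base")
proxy.add_adapter("default", "default-copy")
proxy.add_adapter("other", "other-copy")

proxy.active_adapter = "default"
print(proxy.weight)  # default-copy
proxy.active_adapter = "other"
print(proxy.weight)  # other-copy
proxy.active_adapter = None
print(proxy.weight)  # base
```

The point of the sketch: what `weight` returns depends on adapter state, so code that caches it can silently go stale after switching or disabling adapters.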

Thanks for reporting, I can replicate the issue as you described. Some further tests that I did:

- with 2 GPUs, the memory is not freed, even w/o quantization
- ...

Thanks a lot @ZiadHelal for offering to work on this feature. Please go ahead and submit a PR. You can also create a draft PR that does not yet cover...