Benjamin Bossan
Your driver version is indeed quite old; if possible, you could try updating it and checking whether that helps. But in your case, it might be something else, as the...
I don't think it's an issue with Docker, but rather that another process is occupying your GPU, which is why PyTorch cannot use it properly. At least this is what...
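One quick way to check this diagnosis is to list the compute processes currently holding the GPU. Here is a minimal sketch using only the standard library; it assumes the NVIDIA driver's `nvidia-smi` tool is on your PATH and degrades gracefully if it is not:

```python
import shutil
import subprocess

# Sketch: report which processes are currently using the GPU.
# Assumes `nvidia-smi` is installed with the NVIDIA driver.
def gpu_compute_processes() -> str:
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found"
    result = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv"],
        capture_output=True, text=True,
    )
    # On success, stdout is a CSV listing of PIDs and memory usage.
    return result.stdout or result.stderr

print(gpu_compute_processes())
```

If another process shows up here with most of the GPU memory, that would explain why PyTorch cannot use the device.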
> Should we also add warnings in these three cases? For the `target_modules` case, we do have a warning; however, it is only emitted when the user wants to merge. One could argue...
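For illustration, a merge-time-only warning could look like this minimal sketch using Python's `warnings` module. The function name, argument, and message are hypothetical, not PEFT's actual implementation:

```python
import warnings

# Hypothetical sketch: warn about unmatched target_modules only at
# merge time, not at config-creation time (names are illustrative).
def merge_adapter(matched_modules: list) -> None:
    if not matched_modules:
        warnings.warn(
            "None of the target_modules matched; merging is a no-op.",
            UserWarning,
        )

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    merge_adapter(matched_modules=[])          # triggers the warning
    merge_adapter(matched_modules=["q_proj"])  # silent
print(len(caught))  # 1
```

Deferring the warning to merge time keeps config creation quiet while still flagging the case where the merge would silently do nothing.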
@Ambitious-idiot Thanks for the report. I wrote a small reproducer:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_id)
# config = LoraConfig(trainable_token_indices=[1,...
```
@yeonjoon-jung01 Please ping me when you're finished so that I know that I can give this another review. Also, if possible, please avoid force pushes or rebases, as those make...
@yeonjoon-jung01 Could you please run `make style`?
My `adapter_config.json` for rank 32 is:

```json
{
  "auto_mapping": null,
  "base_model_name_or_path": null,
  "bias": "none",
  "fan_in_fan_out": false,
  "gralora_alpha": 64,
  "gralora_dropout": 0.0,
  "gralora_k": 2,
  "hybrid_r": 0,
  "inference_mode": false,
  "init_weights": true,
  "layers_pattern": null,...
```
> I guess you could add the GraLoRA rank-32 example with a learning rate of 2e-4 for now. Could you please push the experiments to this PR (only the configs,...
@yeonjoon-jung01 Could you please run `make style` :)
Thanks for reporting @XiangZhang-zx. Could you please provide more details so that we can try to reproduce the error? Ideally, we need the following:

1. The training script (or scripts)...