Benjamin Bossan
See discussion in #1063. Starting from PyTorch 2.4, there is a warning when `torch.load` is called without setting the `weights_only` argument. This is because in the future, the default will...
Also:
- Remove 2.0.1
- Upgrade 2.3.0 to 2.3.1
- Use the index https://download.pytorch.org/whl/torch, as torch_stable does not have 2.4.0 (yet?)
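As a rough illustration of the `weights_only` behavior mentioned above (the file name is just a placeholder): restricting `torch.load` to weights-only unpickling is what the new warning points towards.

```python
import torch

# Save a plain state_dict, then load it with weights_only=True, which is the
# future default that the PyTorch 2.4 warning refers to.
torch.save({"weight": torch.ones(2, 2)}, "checkpoint.pt")
state_dict = torch.load("checkpoint.pt", weights_only=True)
```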
See #869, #1962. Edit: also https://github.com/huggingface/transformers/issues/33852.
See also https://github.com/huggingface/diffusers/pull/9453. The idea of hotswapping an adapter is the following: we can already load multiple adapters, e.g. two LoRAs, at the same time. But sometimes, we want to...
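For context, this is roughly how multiple LoRA adapters can already be loaded and switched with the existing PEFT API (model id and adapter paths below are placeholders); hotswapping would instead replace the weights of an already loaded adapter in place.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # placeholder model
model = PeftModel.from_pretrained(base, "path/to/lora-1", adapter_name="first")
model.load_adapter("path/to/lora-2", adapter_name="second")

# Switching adapters today means activating a different, separately loaded adapter:
model.set_adapter("second")
# Hotswapping would instead overwrite the weights of "first" with those of
# another compatible LoRA, without creating a new adapter entry.
```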
Right now, loading a PEFT config saved with a more recent PEFT version than is currently installed will lead to errors when new arguments are added to the config in...
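A minimal sketch of one way to make this loading forward compatible, using a hypothetical helper (not PEFT's actual implementation) that drops config keys unknown to the installed version:

```python
import inspect
from peft import LoraConfig

def load_lora_config_lenient(config_dict: dict) -> LoraConfig:
    # Hypothetical helper: keep only the keys the installed LoraConfig knows
    # about and report the rest, so a config saved by a newer PEFT version
    # still loads instead of erroring out.
    known = set(inspect.signature(LoraConfig.__init__).parameters) - {"self"}
    unknown = sorted(k for k in config_dict if k not in known)
    if unknown:
        print(f"Ignoring config keys not supported by this PEFT version: {unknown}")
    return LoraConfig(**{k: v for k, v in config_dict.items() if k in known})
```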
### Feature request
DoRA could be made faster and use less memory if the base result were reused for the DoRA computation. However, this is only equivalent if there is no...
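A rough sketch of the idea, with illustrative tensor names rather than PEFT's actual internals: DoRA rescales the combined output column-wise, so the base layer's result can be reused instead of being recomputed; this reuse only gives identical results when the LoRA branch sees the same input as the base layer.

```python
import torch
import torch.nn.functional as F

def dora_linear(x, W0, A, B, magnitude, scaling, base_result=None):
    # Sketch only: W0 is the frozen weight (out, in), A (r, in) and B (out, r)
    # are the LoRA matrices, magnitude is the DoRA vector of shape (out,).
    if base_result is None:
        base_result = F.linear(x, W0)  # this is the result that could be reused
    lora_result = F.linear(F.linear(x, A), B) * scaling
    weight_norm = (W0 + scaling * B @ A).norm(p=2, dim=1)  # column-wise norm
    mag_norm_scale = magnitude / weight_norm                # shape (out,)
    # W' x = m / ||W0 + dW|| * (W0 x + dW x), assuming no bias on the base layer
    return mag_norm_scale * (base_result + lora_result)
```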
PEFT allows [mixed batch adapter inference](https://huggingface.co/docs/peft/developer_guides/lora#inference-with-different-lora-adapters-in-the-same-batch), i.e. when predicting, samples in the same batch can use different adapters by passing the `adapter_names` argument. However, when users pass an adapter name that does...
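For reference, mixed-batch inference looks roughly like this (model id, adapter paths, and adapter names are placeholders); the change concerns what happens when one of the passed names does not correspond to a loaded adapter.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

model_id = "facebook/opt-125m"  # placeholder
base = AutoModelForCausalLM.from_pretrained(model_id)
model = PeftModel.from_pretrained(base, "path/to/lora-a", adapter_name="adapter_a")
model.load_adapter("path/to/lora-b", adapter_name="adapter_b")

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer(["hello", "bonjour", "hola"], return_tensors="pt", padding=True)

# One adapter name per sample; "__base__" selects the base model without an adapter.
outputs = model.generate(**inputs, adapter_names=["adapter_a", "adapter_b", "__base__"])
```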
Add support for [torchao](https://github.com/pytorch/ao/tree/main). The current status is:
- only LoRA is explicitly supported
- only linear layers
- `int8_weight_only` works fully
- `int8_dynamic_activation_int8_weight` only works partly (as `dequantize` is not...
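A sketch of how this is expected to be used, assuming transformers' `TorchAoConfig` is available and using a placeholder model id, with the fully supported `int8_weight_only` setting:

```python
from transformers import AutoModelForCausalLM, TorchAoConfig
from peft import LoraConfig, get_peft_model

quant_config = TorchAoConfig("int8_weight_only")  # the fully supported mode
base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",  # placeholder model id
    quantization_config=quant_config,
)
# Only LoRA on linear layers is covered by the current torchao support.
peft_model = get_peft_model(base, LoraConfig(task_type="CAUSAL_LM"))
peft_model.print_trainable_parameters()
```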
@sayakpaul and I investigated an issue with loading, in diffusers, a LyCORIS LoRA checkpoint that uses DoRA. For some reason, we couldn't get the shapes of the DoRA scale vector...
# What does this PR do?
When loading a LoRA adapter, so far there was only a warning when the checkpoint contained unexpected keys. Now, there is also...
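A hypothetical sketch of the stricter check described here (not the actual PEFT code): collect the keys reported by `load_state_dict` and fail loudly instead of only warning.

```python
import torch

def load_adapter_weights_strict(model: torch.nn.Module, adapter_state_dict: dict) -> None:
    # Sketch only: strict=False returns the keys that did not match, and we
    # turn unexpected keys into an error rather than a warning.
    result = model.load_state_dict(adapter_state_dict, strict=False)
    if result.unexpected_keys:
        raise ValueError(
            f"Unexpected keys found in the adapter checkpoint: {sorted(result.unexpected_keys)}"
        )
```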