brian6091

Results: 24 issues by brian6091

Running with gradient checkpointing prevents LoRA weight updates.

> Description: Ubuntu 18.04.6 LTS
> diffusers==0.10.2
> lora-diffusion==0.0.3
> torchvision @ https://download.pytorch.org/whl/cu116/torchvision-0.14.0%2Bcu116-cp38-cp38-linux_x86_64.whl
> transformers==4.25.1
> xformers @ https://github.com/brian6091/xformers-wheels/releases/download/0.0.15.dev0%2B4c06c79/xformers-0.0.15.dev0+4c06c79.d20221205-cp38-cp38-linux_x86_64.whl
>
> `Accelerate`...

bug
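A sketch of the likely mechanism (an assumption, not confirmed by the issue excerpt): with the reentrant flavor of `torch.utils.checkpoint`, a checkpointed segment whose inputs all have `requires_grad=False` — as with activations flowing out of a frozen base model — produces an output with no `grad_fn`, so trainable LoRA parameters inside the segment never receive gradients:

```python
import torch
from torch.utils.checkpoint import checkpoint

lora = torch.nn.Linear(4, 4)  # stands in for a trainable LoRA layer
x = torch.randn(2, 4)         # frozen upstream activations: requires_grad=False

# Non-reentrant checkpointing still propagates gradients to parameters
# even when no checkpoint *input* requires grad:
checkpoint(lora, x, use_reentrant=False).sum().backward()
assert lora.weight.grad is not None

# The reentrant variant instead returns an output without a grad_fn in
# this situation (PyTorch emits a warning), so the LoRA weights are
# silently excluded from the backward pass:
out = checkpoint(lora, x, use_reentrant=True)
assert not out.requires_grad
```

Passing `use_reentrant=False`, or calling `requires_grad_(True)` on the segment inputs, sidesteps the problem.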

Not strictly a LoRA change, but quite easy to fit into the current code. The idea is that squeezing an element-wise nonlinearity in between the Down and Up transformations can...
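A minimal sketch of the idea (function names, shapes, and the choice of `tanh` are illustrative, not from the issue): apply an element-wise activation between the low-rank Down and Up projections, which makes the adapter itself nonlinear at no extra parameter cost:

```python
import numpy as np

def lora_delta(x, down, up, scale=1.0, act=np.tanh):
    # standard LoRA computes x @ down @ up; squeezing an element-wise
    # nonlinearity between the two projections makes the low-rank update
    # nonlinear while keeping the same parameter count
    return scale * act(x @ down) @ up

def forward(x, W, down, up, scale=1.0, act=np.tanh):
    # frozen base weight W plus the (now nonlinear) low-rank update
    return x @ W + lora_delta(x, down, up, scale, act)
```

With the identity activation this reduces exactly to ordinary LoRA, so the change drops into existing code without altering the linear special case.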

Need to check how this interacts when xformers is enabled: `pipeline.enable_attention_slicing()`, `pipeline.enable_vae_slicing()`.

Bombs when using optimizers that require additional inputs to `step()`.
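One way to handle this (a hypothetical helper, not from the issue): inspect the optimizer's `step()` signature and forward a closure only when it is accepted, so closure-requiring optimizers like L-BFGS and plain ones like SGD go through the same call site:

```python
import inspect

def optimizer_step(optimizer, closure=None):
    """Call optimizer.step(), forwarding a closure only when step() takes one.

    Hypothetical helper: some optimizers (e.g. L-BFGS) require a closure
    that re-evaluates the loss, while most accept no extra inputs.
    """
    params = inspect.signature(optimizer.step).parameters
    if "closure" in params:
        return optimizer.step(closure)
    return optimizer.step()
```

Torch optimizers that merely *accept* an optional closure receive `None`, which matches their default, so the helper is safe for both kinds.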

- [ ] save full model in diffusers format (overrides save only trained parameters)
- [ ] when only saving trainable parameters, option to push to tracker
- [X] init...

https://github.com/ShivamShrirao/diffusers/pull/178