diffusers
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
### Continue training existing model Modified train_unconditional so the model is reloaded if it exists: `export COMMAND="python examples/train_unconditional.py --resolution 32 --num_epochs 10 --train_data_dir training-images --output_dir model"` ``` % ${COMMAND} creating...
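A minimal sketch of what the reload logic could look like, assuming the script saves its `UNet2DModel` under `<output_dir>/unet`; `load_or_create_unet` is a hypothetical helper, not part of the script:

```
# Hypothetical helper: resume from a previously saved UNet2DModel if the
# output directory already contains one, otherwise start from scratch.
import os
from diffusers import UNet2DModel

def load_or_create_unet(output_dir: str, resolution: int) -> UNet2DModel:
    unet_dir = os.path.join(output_dir, "unet")
    if os.path.isdir(unet_dir):
        # Resume from the previously saved weights.
        return UNet2DModel.from_pretrained(unet_dir)
    # Fresh model; the channel settings are placeholders, not the script's exact defaults.
    return UNet2DModel(sample_size=resolution, in_channels=3, out_channels=3)
```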
**Is your feature request related to a problem? Please describe.** Stable Diffusion is not compute-heavy in all of its steps. If we keep the diffusion unet in fp16 on the GPU...
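A rough sketch of the idea, not a built-in API: keep only the UNet, which runs on every denoising step, on the GPU in fp16, and leave the single-use VAE and text encoder on the CPU. The model ID is an example, and the tensor movement between devices during a run is omitted:

```
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipe.unet.to("cuda")         # the per-step hot loop stays on the GPU in fp16
pipe.vae.to("cpu")           # decoded once per image
pipe.text_encoder.to("cpu")  # encoded once per prompt
```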
### Describe the bug I am getting the following error ``` ValueError: The deprecation tuple ('tensor_format', '0.5.0', "If you're running your code in PyTorch, you can safely remove this argument.")...
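Assuming the error is triggered by passing the removed `tensor_format` argument when constructing a scheduler, the deprecation message suggests simply dropping it; a before/after sketch:

```
from diffusers import DDIMScheduler

# Old-style construction that trips the deprecation machinery:
# scheduler = DDIMScheduler(tensor_format="pt")

# In PyTorch the argument can simply be removed:
scheduler = DDIMScheduler()
```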
**Is your feature request related to a problem? Please describe.** Dreambooth can drastically change its output quality between step counts, including for the worse if the chosen learning rate is...
Cast frozen modules to fp16/bf16 when using mixed precision. Add a gradient checkpointing command line option. Training previously OOMed on my 8 GB VRAM GPU. With these changes and using `--mixed_precision=fp16 --gradient_checkpointing`...
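A minimal sketch of the two changes, assuming a Dreambooth-style loop where `vae` and `text_encoder` are frozen and only `unet` is trained (those names and `device` come from the training script):

```
import torch

weight_dtype = torch.float16  # or torch.bfloat16 with --mixed_precision=bf16

# Frozen modules never receive gradients, so they can be kept in half precision.
vae.to(device, dtype=weight_dtype)
text_encoder.to(device, dtype=weight_dtype)

# Trade recompute for memory on the module that is actually trained.
unet.enable_gradient_checkpointing()
```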
### Describe the bug If you pass an array of prompts (a list of strings) rather than a single prompt (a single string) to the pipeline, under Apple `mps` you never...
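A hypothetical reproduction on Apple Silicon; the model ID and prompts are placeholders:

```
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("mps")

# A single string behaves as expected...
image = pipe("a photo of an astronaut riding a horse").images[0]

# ...while a list of prompts is where the reported hang occurs.
images = pipe([
    "a photo of an astronaut riding a horse",
    "a watercolor painting of a fox",
]).images
```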
### Describe the bug On the first attempt to call `evaluate()` in the train_loop() function, the DDPMPipeline fails due to an incompatible output from the model. This raises an AttributeError:...
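One common cause in this training-loop pattern (an assumption here, not confirmed by the report) is building the evaluation pipeline from the Accelerate-wrapped model instead of the underlying UNet; a sketch of that workaround, where `accelerator`, `model`, and `noise_scheduler` come from the training loop:

```
from diffusers import DDPMPipeline

# Unwrap the model so DDPMPipeline sees a plain UNet2DModel whose forward
# returns the output object the pipeline expects.
pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)
images = pipeline(batch_size=16).images
```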
### Describe the bug Getting this error from pipeline_stable_diffusion.py ### Reproduction _No response_ ### Logs _No response_ ### System Info On colab notebook: https://colab.research.google.com/github/WASasquatch/easydiffusion/blob/main/Stability_AI_Easy_Diffusion.ipynb
### Describe the bug I have some ckpt files and am trying to convert them to diffusers in Google Colab with this command `!python /content/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path /content/gdrive/MyDrive/Waifu_6e.ckpt --dump_path /content/stable-diffusion-alt` But got...
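For reference, once the script succeeds the `--dump_path` folder is a regular diffusers checkpoint and can be loaded directly; a sketch assuming the conversion above completes:

```
from diffusers import StableDiffusionPipeline

# Load the converted diffusers-format folder written to --dump_path.
pipe = StableDiffusionPipeline.from_pretrained("/content/stable-diffusion-alt")
image = pipe("a test prompt").images[0]
```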
I am not sure if this has been considered, or is already on the roadmap, but I'd love to be able to just throw a CLIP model ID at a pipe...
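A sketch of what this currently looks like by hand; the CLIP model ID is only an example, and it assumes the checkpoint's UNet expects text embeddings of a compatible width:

```
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import StableDiffusionPipeline

clip_id = "openai/clip-vit-large-patch14"  # example CLIP model ID
text_encoder = CLIPTextModel.from_pretrained(clip_id)
tokenizer = CLIPTokenizer.from_pretrained(clip_id)

# Swap the pipeline's text encoder and tokenizer for the chosen CLIP model.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    text_encoder=text_encoder,
    tokenizer=tokenizer,
)
```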