
🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and Flax.

Results 1193 diffusers issues

It looks like the safety checker uses some NSFW concept embeddings generated from CLIP to filter out unsafe content. I wonder if we can get the original concepts in text...
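The filtering described above can be sketched in miniature: compare an image's CLIP embedding against a set of pre-computed concept embeddings by cosine similarity, and flag the image when any similarity exceeds a per-concept threshold. This is a minimal numpy sketch of the idea, not the actual safety-checker implementation; the function names and the thresholding scheme are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_unsafe(image_embed, concept_embeds, thresholds):
    # Hypothetical check: flag the image if its embedding is closer to any
    # NSFW concept embedding than that concept's threshold allows.
    return any(cosine_sim(image_embed, c) > t
               for c, t in zip(concept_embeds, thresholds))
```

Because only the concept *embeddings* ship with the checker, recovering the original concept text would amount to inverting CLIP's text encoder, which is why the question above is non-trivial.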

NOTE: updating PR description and moving verification code to [notebook](https://github.com/williamberman/vq-diffusion-notebook) ## Porting the VQ-diffusion VQVAE for the ITHQ dataset to diffusers. Add `convert_vq_diffusion_to_diffusers.py` script: This script initially only converts the...

I think Huggingface's Dreambooth is the only popular SD implementation that also uses Prior Preservation Loss, so I've been motivated to get it working, but the results have been terrible,...
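For context, prior preservation adds a second reconstruction term computed on class (prior) images alongside the instance term, weighted by a coefficient. A minimal numpy sketch of that loss shape, with hypothetical argument names and a simple MSE in place of the real noise-prediction objective:

```python
import numpy as np

def prior_preservation_loss(pred, target, pred_prior, target_prior,
                            prior_weight=1.0):
    # Instance term: fit the subject being learned.
    instance = np.mean((pred - target) ** 2)
    # Prior term: keep the model close to its behavior on class images.
    prior = np.mean((pred_prior - target_prior) ** 2)
    return instance + prior_weight * prior
```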

### Description: I've added the Euler Ancestral Scheduler to the Stable Diffusion Pipeline. The code has been adapted from: https://github.com/crowsonkb/k-diffusion the scheduler currently works; however, I need to clean it...
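The k-diffusion ancestral step the PR adapts splits the noise-level change into a deterministic Euler part and a re-injected noise part. A numpy sketch of one step, under the assumption that the model output has already been converted to a denoised sample (variable names are mine, not the PR's):

```python
import numpy as np

def euler_ancestral_step(x, denoised, sigma_from, sigma_to, noise):
    # Split sigma_to into a deterministic part and an "ancestral" noise part.
    sigma_up = min(sigma_to,
                   (sigma_to**2 * (sigma_from**2 - sigma_to**2)
                    / sigma_from**2) ** 0.5)
    sigma_down = (sigma_to**2 - sigma_up**2) ** 0.5
    d = (x - denoised) / sigma_from        # derivative estimate
    x = x + d * (sigma_down - sigma_from)  # deterministic Euler step
    return x + noise * sigma_up            # re-inject scaled noise
```

At the final step (`sigma_to = 0`) the noise part vanishes and the step returns the denoised prediction exactly.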

When I run `dml_onnx.py` in `(amd_venv) C:\amd-stable-diffusion\difusers-dml\examples\inference>`, I get an error like this: `Fetching 19 files: 100%|███████████████████████████████████████████| 19/19 [00:00`

stale

**Is your feature request related to a problem? Please describe.** The textual inversion script currently creates a dict with one term and its associated embedding. Usual scripts load one or...
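The request above amounts to merging several single-token embedding dicts into one. A minimal sketch of such a merge, with a hypothetical function name and a collision check (the real loading code would then resize the tokenizer and copy each vector into the text encoder's embedding matrix):

```python
def merge_learned_embeds(embed_dicts):
    # Each dict maps one placeholder token -> its learned embedding vector.
    merged = {}
    for d in embed_dicts:
        for token, vec in d.items():
            if token in merged:
                raise ValueError(f"duplicate placeholder token: {token}")
            merged[token] = vec
    return merged
```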

# Why ? While stable diffusion democratized the access to text to image generative models, it can still be relatively long to generate an image on consumer GPUs. The GPU...

Thanks for putting together the documentation. Splitting this off from the other issue. Is there any possibility of including more of the k-diffusion samplers? https://github.com/crowsonkb/k-diffusion/blob/master/k_diffusion/sampling.py Heun, for example, would...
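For reference, the Heun sampler in k-diffusion is a second-order method: it takes an Euler step, re-evaluates the derivative at the predicted point, and averages the two. A numpy sketch of one step, with a `denoise(x, sigma)` callback standing in for the model (names are assumptions, not the library's API):

```python
import numpy as np

def heun_step(x, sigma, sigma_next, denoise):
    d = (x - denoise(x, sigma)) / sigma   # derivative at current sigma
    dt = sigma_next - sigma
    x_euler = x + d * dt                  # first-order (Euler) prediction
    if sigma_next == 0:
        return x_euler                    # last step: no second evaluation
    d2 = (x_euler - denoise(x_euler, sigma_next)) / sigma_next
    return x + (d + d2) / 2 * dt          # trapezoidal correction
```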

good first issue
New scheduler

### Describe the bug ONNX support doesn't work with `CUDAExecutionProvider`. I installed `onnxruntime-gpu`. Running `import onnxruntime as ort; ort.get_device()` returns `GPU`, and `ort.get_available_providers()` returns `['CPUExecutionProvider', 'TensorrtExecutionProvider', 'CUDAExecutionProvider']`, but diffusers...
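One common workaround for this class of problem is to build the provider list explicitly instead of relying on defaults. A small sketch of a helper that prefers CUDA when it is available and always keeps CPU as a fallback (the helper name is mine):

```python
def pick_providers(available):
    # Prefer CUDA when present; always keep CPU as a fallback.
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return [p for p in preferred if p in available]
```

The resulting list would then be passed to ONNX Runtime, e.g. `ort.InferenceSession(model_path, providers=pick_providers(ort.get_available_providers()))`, so the session never silently falls back to an unintended provider ordering.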

The torch example passes the parameter ```revision="fp16"```; can the ONNX model apply the same optimization? Current ONNX inference (using `CUDAExecutionProvider`) is slower than the torch version and uses more GPU memory than the torch version (12G...
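On the memory side, the core of the `fp16` optimization is simply storing weights at half the width; for exported ONNX graphs, conversion utilities exist (e.g. in the `onnxconverter-common` package), though whether they match torch's `revision="fp16"` savings is exactly what this issue asks. A tiny numpy illustration of the storage halving:

```python
import numpy as np

# A float16 copy of a weight tensor occupies half the bytes of float32,
# at the cost of reduced precision and range.
w32 = np.ones((256, 256), dtype=np.float32)
w16 = w32.astype(np.float16)
ratio = w32.nbytes / w16.nbytes  # 2.0
```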