
Official inference repo for FLUX.1 models

Results: 165 flux issues, sorted by recently updated

Since ChatGPT launched its image creator that stunned the world with Ghibli-esque images, can't FLUX.1-dev do the same thing, but better, this time expanding its reach to...

Is there a way in FLUX.1-dev to train a model for an individual character? If yes, how?
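One common route is to train a LoRA on a handful of images of the character with an external trainer and then load the resulting weights at inference time. The snippet below is a minimal sketch of the inference side only, assuming diffusers' `FluxPipeline` and a hypothetical local LoRA path; the training step itself is not shown.

```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev pipeline (requires a GPU with sufficient VRAM).
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Attach LoRA weights trained on images of the individual character.
# "path/to/character_lora" is a placeholder, not a real path in this repo.
pipe.load_lora_weights("path/to/character_lora")

image = pipe(
    prompt="photo of my_character standing in a sunlit forest",
    guidance_scale=3.5,
    num_inference_steps=28,
).images[0]
image.save("character.png")
```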

### Issue Description

**Problem**: When executing `python -m flux.cli --prompt="A beautiful forest"`, the following error occurs:

```python
Traceback (most recent call last):
  File "", line 198, in _run_module_as_main
  File "",...
```

## Description of Issue

I am trying to run inference with the `Flux.1 Schnell` model by running `demo_gr.py`, but I am receiving a CUDA import error, although `torch.cuda.is_available()` is returning...
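With this kind of mismatch, it can help to print what the environment actually sees before touching the flux code at all. A minimal sketch using only standard PyTorch calls:

```python
import torch

# Environment sanity check: compare the installed torch build with the runtime GPU state.
print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("CUDA build version:", torch.version.cuda)
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```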

Hi :wave: First of all, thank you for open-sourcing Flux! I am having issues installing `flux` using `pip` due to `ruff` being included in the mandatory dependencies with a fixed...

```python
import torch
from diffusers import FluxFillPipeline
from diffusers.utils import load_image

image = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup.png")
mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")

pipe = FluxFillPipeline.from_pretrained("black-forest-labs/FLUX.1-Fill-dev", torch_dtype=torch.bfloat16).to("cuda")
image = pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    ...
```
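For reference, a complete Fill-pipeline call usually also specifies sampling parameters. The sketch below is illustrative only and does not reproduce the truncated arguments of the snippet above; the guidance scale, step count, seed, and output path are assumptions.

```python
# Illustrative continuation only; parameter values are assumptions, not taken from the issue.
result = pipe(
    prompt="a white paper cup",
    image=image,
    mask_image=mask,
    guidance_scale=30.0,       # Fill-style inpainting is usually run with high guidance
    num_inference_steps=50,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
result.save("filled_cup.png")
```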

When I use ComfyUI to run the model, it always runs on the CPU; how can I switch it to the GPU? The log shows `VAE load device: cuda:0`,...
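ComfyUI normally chooses devices through its own launch options and node settings rather than user code, so the launch configuration is the first thing to check. Independent of ComfyUI, the generic PyTorch sketch below shows how to confirm a GPU is visible and what explicit device placement looks like:

```python
import torch

# Generic PyTorch check (not ComfyUI-specific).
print("CUDA available:", torch.cuda.is_available())

model = torch.nn.Linear(8, 8)              # stand-in for any loaded model
if torch.cuda.is_available():
    model = model.to("cuda")               # explicit GPU placement
print("model device:", next(model.parameters()).device)
```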

Currently, the inference optimization solution (TensorRT) takes a very long time to compile, the UX is not great, and it doesn't support LoRA. Compared to TensorRT, `torch.compile` has a...
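For context, a minimal sketch of the `torch.compile` route is shown below, assuming the diffusers `FluxPipeline`, which exposes the denoising transformer as `pipe.transformer`; the model ID, compile mode, and step count are illustrative.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")

# Compile the transformer; the first call is slow (compilation), later calls reuse the compiled graph.
pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=True)

image = pipe(prompt="A beautiful forest", num_inference_steps=28).images[0]
image.save("forest.png")
```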