flux
Official inference repo for FLUX.1 models
Hello! I set device_map='balanced' and images are generated in 2.5 minutes (expected 12-20 seconds), while pipe.hf_device_map shows the devices distributed like this: { "transformer": "cuda:0",...
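For reference, a minimal sketch of loading with a balanced device map and inspecting where each component landed, assuming the diffusers `FluxPipeline` (which is what the report appears to use); the model id is the standard dev checkpoint.

```python
# Sketch: load FLUX.1-dev with a balanced device map and inspect the placement.
# Assumes the diffusers FluxPipeline API; adjust the model id to your checkpoint.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
    device_map="balanced",  # spreads components across the available GPUs
)

# Shows which device each component ended up on, e.g. {"transformer": "cuda:0", ...}.
# If a large component landed on "cpu", generation will be far slower than expected.
print(pipe.hf_device_map)
```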
https://github.com/black-forest-labs/flux/blob/87f6fff727a377ea1c378af692afb41ae84cbe04/src/flux/modules/autoencoder.py#L159-L180 The `hs` variable here doesn't appear to be used?
It would be great if you could support converting the model to Apple Core ML.
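Nothing in this repo covers a Core ML path; purely as an illustration, the generic coremltools flow for a traced PyTorch module looks roughly like the sketch below. The toy module and input shape are hypothetical, not the FLUX transformer, which would need considerably more work (custom inputs, attention ops, memory limits).

```python
# Rough illustration of the generic coremltools conversion flow for a traced
# PyTorch module. The tiny module and shape are placeholders, not FLUX itself.
import torch
import coremltools as ct

class TinyBlock(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.gelu(x)

example = torch.randn(1, 64)
traced = torch.jit.trace(TinyBlock().eval(), example)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="x", shape=example.shape)],
    convert_to="mlprogram",
)
mlmodel.save("tiny_block.mlpackage")
```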
gguf
how to run with gguf?
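The official repo itself does not load GGUF files; one common route is diffusers' GGUF quantization support. A sketch under that assumption (a recent diffusers release plus the `gguf` package); the GGUF file name below is a placeholder.

```python
# Sketch: load a GGUF-quantized FLUX transformer via diffusers and plug it
# into the regular pipeline. The checkpoint path/filename is a placeholder.
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

transformer = FluxTransformer2DModel.from_single_file(
    "path/to/flux1-dev-Q4_K_S.gguf",  # placeholder GGUF file
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe("a photo of a forest at dawn", num_inference_steps=28).images[0]
image.save("flux_gguf.png")
```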
Dear Team, thank you so much for releasing the model. I am trying to integrate the flux model for a use case for which I require the unet and image_encoder....
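Worth noting that FLUX denoises with a transformer rather than a UNet, and the base pipeline has no image encoder. A sketch of pulling the individual components out of the diffusers pipeline, assuming `FluxPipeline`:

```python
# Sketch: access individual FLUX components from the diffusers pipeline.
# There is no UNet or image_encoder; denoising is done by a DiT transformer.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

transformer = pipe.transformer        # the transformer that replaces a UNet
vae = pipe.vae                        # latent autoencoder
text_encoder = pipe.text_encoder      # CLIP text encoder
text_encoder_2 = pipe.text_encoder_2  # T5 text encoder

print(type(transformer).__name__, type(vae).__name__)
```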
`black-forest-labs/FLUX.1-dev` runs very slowly: it takes about 15 minutes to generate a 1344x768 (wxh) image. Has anyone experienced the same, or is it just me?
```python
pipe = FluxPipeline.from_pretrained(args.model, torch_dtype=torch.bfloat16)
#pipe.enable_model_cpu_offload()
#save...
```
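Fifteen minutes usually means the model is spilling out of VRAM (or running on CPU). A sketch of the memory-saving options in diffusers that are commonly used for FLUX on smaller GPUs; the prompt and step count are placeholders.

```python
# Sketch: offloading options that typically bring FLUX generation back from
# "minutes" to "tens of seconds" when VRAM is the bottleneck.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# Pick one depending on available VRAM:
pipe.enable_model_cpu_offload()         # ~24 GB class GPUs: move whole models on demand
# pipe.enable_sequential_cpu_offload()  # much lower VRAM, noticeably slower
pipe.vae.enable_slicing()
pipe.vae.enable_tiling()

image = pipe(
    "a mountain lake at sunrise",
    height=768,
    width=1344,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux_dev.png")
```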
First, welcome!!!  ...then not so welcome :(  🤷🏻‍♂️ ...or am I doing it wrong? Should we use Replicate to access it?
When I generate images using flux dev fp8, no matter how I debug it, it always outputs a pure black image and cannot be used normally. I am using the standard workflow examples. The version...
Which GPU does flux run on? Running on Google Cloud: NVIDIA L4, 23034 MiB. Command line:
```
$ PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True python demo_gr.py --name flux-dev --device cuda --share
```
Got CUDA out of...
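On a ~24 GB card the full bf16 pipeline generally does not fit resident all at once; checking whether the demo exposes an offload option (`python demo_gr.py --help`) is one route. With diffusers, a rough equivalent is sequential offloading plus VAE tiling, sketched below under that assumption.

```python
# Sketch: fitting FLUX on a ~24 GB GPU (e.g. an L4) with diffusers by
# offloading components to CPU RAM and tiling the VAE decode.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.enable_sequential_cpu_offload()  # lowest VRAM footprint, slower per step
pipe.vae.enable_tiling()

image = pipe("an astronaut riding a horse", height=1024, width=1024).images[0]
image.save("out.png")
```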
I am trying to find the correct setup to run it on an M3 with 36 GB of memory, without success. The error message is either (running it via a Gradio UI, ref: pictero.com): > UserWarning:...
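On Apple Silicon the usual route is PyTorch's MPS backend through diffusers rather than CUDA. A sketch under that assumption; 36 GB of unified memory is tight for the full dev model, so the schnell variant and attention slicing are used here to keep the peak lower.

```python
# Sketch: running FLUX on Apple Silicon via the MPS backend with diffusers.
# Assumes a recent PyTorch build with MPS support; older builds may need
# torch.float16 instead of bfloat16.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell", torch_dtype=torch.bfloat16
)
pipe.to("mps")
pipe.enable_attention_slicing()  # trims peak memory on unified-memory Macs

image = pipe(
    "a watercolor painting of a lighthouse",
    num_inference_steps=4,   # schnell is tuned for very few steps
    guidance_scale=0.0,      # schnell is distilled without guidance
).images[0]
image.save("flux_mps.png")
```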