Please recognize environment variables and prevent re-download of models
Many developers download all the model/VAE files in advance, a task that can take hours, so they shouldn't have to download them all over again.
You have provided a way to set environment variables that specify the location of local copies of the model files, and this ought to prevent re-downloading ... in principle. But this doesn't happen.
After setting the environment variables in a shell script:
export FLUX_DEV="<local path>"
export AE="<local path>"
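For context, this is the behavior I expected the variables to trigger. A minimal illustrative sketch, assuming the repo resolves checkpoint paths roughly this way (not the repo's actual code):

import os

# Expected behavior: if the environment variable is set, use the local
# checkpoint and skip any network access. Variable names match the
# documented FLUX_DEV / AE settings; the logic here is my assumption.
flux_ckpt = os.getenv("FLUX_DEV")  # path to the flux1-dev checkpoint
ae_ckpt = os.getenv("AE")          # path to the autoencoder checkpoint

if flux_ckpt is None or ae_ckpt is None:
    raise RuntimeError("FLUX_DEV and AE must point to local model files")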
Then entering the venv, then setting and verifying that the environment variables are present:
(.venv) ...$ . shell_script_with_environment_variables.sh
(.venv) ...$ env | grep -E "AE|FLUX"  # this confirms the presence of the variables
In spite of this, the provided demonstrations then proceed to download a new, unexplained model file.
Am I missing something? Thanks!
-- UPDATE --
Okay, apparently I missed something. I downloaded the requisite files from huggingface, totaling 34.2 GB, then followed the installation instructions as explained above:
$ source .venv/bin/activate
(.venv) $ pip install -e '.[all]'
-- all successful and uneventful. Then the moment of truth:
(.venv) $ python -m flux --name flux-dev --loop
Instead of prompting for user input, this command tries to download a file named "pytorch_model.bin" that is 44.5 GB in size (not a typo). Unfortunately, because I live on Planet Earth and have a mortal's Internet connection, this download fails over and over, and no direct download link is provided, which would otherwise allow various strategies for acquiring this unexpected additional file.
Please edit your install instructions to accommodate this outcome -- explain what and why. Thanks!
The 44.5 GB pytorch_model.bin file came from the google/t5-v1_1-xxl model: https://huggingface.co/google/t5-v1_1-xxl/tree/main
The other dependencies are openai/clip-vit-large-patch14 and Falconsai/nsfw_image_detection.
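For anyone hitting the repeated download failures described above: huggingface_hub can fetch the file ahead of time and resumes interrupted transfers, so each attempt doesn't restart from zero. A sketch, assuming the demos read the encoders from the standard Hugging Face cache:

from huggingface_hub import hf_hub_download

# Pre-fetch the T5 weights; an interrupted transfer resumes from the
# partial file, and the result lands in the standard HF cache where
# from_pretrained() will find it later.
path = hf_hub_download(
    repo_id="google/t5-v1_1-xxl",
    filename="pytorch_model.bin",
)
print(path)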
This 44 GB file was giving me errors; I suppose it's too large to fit in RAM.
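For what it's worth, the file is ~44 GB because the published weights are float32 (11B parameters at 4 bytes each); loading the encoder in bfloat16 roughly halves the memory needed. A sketch using transformers, loading the encoder standalone rather than through the flux demo code:

import torch
from transformers import T5EncoderModel

# Load only the encoder half of T5-XXL in bfloat16: roughly 22 GB of
# RAM instead of ~44 GB for the full-precision weights.
t5 = T5EncoderModel.from_pretrained(
    "google/t5-v1_1-xxl",
    torch_dtype=torch.bfloat16,
)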
So I deleted it and tried to run using models (flux-schnell) which I have manually downloaded; the environment variables are correctly set, but the scripts always start downloading this 44 GB file anyway. Looking at the code, I see no way to prevent this from happening.
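One possible workaround: populate the Hugging Face cache once (e.g. with hf_hub_download as sketched above), then force offline mode so nothing re-downloads. HF_HUB_OFFLINE and TRANSFORMERS_OFFLINE are documented settings; whether this satisfies the demo scripts is my assumption:

import os

# Set these before any Hugging Face library is imported. With the
# cache already populated, from_pretrained() resolves locally; with an
# empty cache it fails fast instead of silently re-downloading.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"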
A diffusers-based script works fine, but I am interested in trying demo_st for img2img.
@souravzzz thanks for your helpful directions. It's strange that environments like ComfyUI can support flux-dev and flux-schnell without this gigantic additional download, while this Python implementation can't.
Not our topic, but the absence of:
FluxTransformer2DModel.from_single_file( ... )
... also remains an obstacle to the use of local models in Python. Please pardon this digression to a different topic.
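In the meantime, the raw checkpoint can at least be opened without that helper; safetensors exposes the state dict directly. A sketch with a placeholder path; mapping the keys onto a model instance is the part from_single_file() would normally handle:

from safetensors.torch import load_file

# Read a single-file checkpoint into an ordinary state dict.
state_dict = load_file("/path/to/flux1-dev.safetensors")
print(len(state_dict), "tensors")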
Also curious about this! @lutusp were you able to get this working?
@vq108-1 Still no success. Apparently there's a protocol involving multiple files, including a key config file that describes the resource, but AFAIK this isn't documented anywhere. I emphasize this is just a guess, based on the kinds of error messages I see when trying to work around the absence of a "from_single_file()" function.
Obviously I could tear into ComfyUI's Python code to see how they solve this issue, but so far that has seemed too daunting to contemplate. Apparently there aren't enough people interested in scripting the new "flux-*" models directly in Python. I have working scripts for many of the earlier models, but each of them has the missing function available in its respective support library.