Fooocus
Moving model to GPU: takes a long time before each generation. Is this a bug? (macOS)
I'm also having this problem.
Why does it have to do a memory-management load for 30 seconds or more before each generation?

[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.02916753850877285, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 31.17 seconds
Same issue on macOS. Has anyone found a way to shave off those 30 seconds from every generation?
It takes me 30 seconds on Windows with a 3060 Ti. Is this normal?
Just adding my agreement with the comments above. I'm also on macOS, M1 MacBook Pro. It loads the model onto the GPU for a long time before each generation. This problem seems unique to Fooocus compared with other SDXL UIs.
On Windows 11, file swapping doesn't work. I tried it with an Intel(R) Core(TM) i7-13700H, 16.0 GB RAM, and an RTX 4050 with 6 GB VRAM.
I'm having the same issue with an M1 Max: it moves the model to the GPU on every generation, even with the --disable-offload-from-vram flag. It does seem to be a Fooocus-specific issue, as no other UIs have this problem.
I was playing with the command below, and it seems not to load the model every single time. Only tested on a MacBook Pro 14".

Command:
python entry_with_update.py --always-cpu --unet-in-fp8-e5m2 --attention-split
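For anyone trying this workaround, here is the same invocation with a per-flag note. The flag names come from the post above; the explanations are our reading of what the names suggest, not official Fooocus documentation:

```shell
# Workaround reported above: keep inference on the CPU so the model
# is not re-moved to the GPU before every generation (macOS).
# Flag meanings below are assumptions based on the flag names:
#   --always-cpu         run inference on the CPU instead of the GPU/MPS
#   --unet-in-fp8-e5m2   hold UNet weights in 8-bit floats to reduce memory
#   --attention-split    use the split attention implementation
python entry_with_update.py --always-cpu --unet-in-fp8-e5m2 --attention-split
```

Note that --always-cpu trades the per-generation model move for slower sampling overall, so it is only a net win if the move dominates your generation time.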
This is not a bug; your disk is just a bit slow, or you're using a GPU with insufficient VRAM, in which case Fooocus offloads into RAM. Feel free to re-test with the latest version of Fooocus, which might improve your performance.
@mashb1t - I'm facing the same issue. I have an NVIDIA GeForce GTX 1650 Ti Mobile (4 GB of VRAM); could this be because of insufficient memory?
Yes, slow model loading in this case can also be caused by your VRAM being below the minimum needed to hold the complete SDXL model, which is 8 GB. Everything over your 4 GB will be moved to RAM, and if RAM is also insufficient, then to swap; depending on your OS settings, possibly directly to disk.
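The spillover order described above (VRAM, then RAM, then swap/disk) can be sketched as follows. This is an illustrative model of the behavior, not Fooocus's actual memory-management code; the function name and sizes are assumptions:

```python
def plan_placement(model_gb: float, vram_gb: float, ram_free_gb: float) -> dict:
    """Illustrative sketch (not Fooocus internals): split a model across
    VRAM first, then free RAM, then swap/disk for whatever remains."""
    in_vram = min(model_gb, vram_gb)        # fill VRAM first
    remainder = model_gb - in_vram
    in_ram = min(remainder, ram_free_gb)    # overflow goes to RAM
    in_swap = remainder - in_ram            # whatever is left hits swap/disk
    return {"vram": in_vram, "ram": in_ram, "swap": in_swap}

# An ~8 GB SDXL model on a 4 GB card with 6 GB of free RAM:
print(plan_placement(8, 4, 6))  # 4 GB stays in VRAM, 4 GB spills to RAM
```

Each generation then pays for shuttling the spilled portion back and forth, which is why the "Moving model(s)" step takes tens of seconds on low-VRAM cards.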
I have a 3090 with 24 GB VRAM and have the same issue.
12 GB VRAM, still the same.