Bedovyy
HF diffusers can run multi-GPU inference in parallel using DistriFuser or PipeFusion. https://github.com/mit-han-lab/distrifuser https://github.com/PipeFusion/PipeFusion I have tested DistriFuser, and the result was quite good. (I used `run_sdxl.py --mode benchmark`, it...
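For reference, here is a rough sketch of what running DistriFuser across two GPUs looks like, based on its published example usage; the exact class names (`DistriSDXLPipeline`, `DistriConfig`) and arguments may differ between versions, so treat it as illustrative rather than exact.

```python
# Illustrative only -- adapted from DistriFuser-style usage; check the repo for the exact API.
import torch
from distrifuser.pipelines import DistriSDXLPipeline
from distrifuser.utils import DistriConfig

# Each process (one per GPU) builds the same config; DistriFuser then splits
# the denoising work across the participating GPUs (displaced patch parallelism).
distri_config = DistriConfig(height=1024, width=1024, warmup_steps=4)

pipeline = DistriSDXLPipeline.from_pretrained(
    distri_config=distri_config,
    pretrained_model_name_or_path="stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    use_safetensors=True,
)
pipeline.set_progress_bar_config(disable=distri_config.rank != 0)

image = pipeline(
    prompt="an astronaut riding a horse on the moon",
    generator=torch.Generator(device="cuda").manual_seed(42),
).images[0]

# Only rank 0 writes the output, so the GPUs don't race on the same file.
if distri_config.rank == 0:
    image.save("output.png")
```

You then launch one process per GPU, e.g. `torchrun --nproc_per_node=2 your_script.py`, and DistriFuser handles the cross-GPU communication internally.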
I will add it soon.
I am very sorry for the long, long wait. I have fixed the issue. (https://github.com/bedovyy/ComfyUI_NAIDGenerator/commit/c468456f30895074befa3b8b3ffada8837e7a46a)
I guess you are running out of VRAM, so it is using shared GPU memory. Could you try decreasing 'GPU weights' at the top, or disabling shared GPU memory following the guide below?...
ComfyUI doesn't have parallel inference at the moment. You can generate multiple images at once, but that is just a batch job. Refer to: https://github.com/xdit-project/xDiT https://github.com/DefTruth/Awesome-SD-Inference
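To make the distinction concrete, here is a minimal diffusers-based sketch (not ComfyUI code) of what a batch job means: all images come out of one batched pass on a single GPU, which is different from the per-image multi-GPU parallelism that projects like xDiT or DistriFuser provide.

```python
# Minimal sketch of "batch" generation in HF diffusers (not ComfyUI internals).
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")

# Four images are generated in one batched pass on a single GPU.
# This raises throughput, but no single image is generated any faster,
# which is what multi-GPU parallel inference (xDiT, DistriFuser) targets.
images = pipe(
    prompt="a scenic mountain landscape at sunset",
    num_images_per_prompt=4,
).images

for i, img in enumerate(images):
    img.save(f"batch_{i}.png")
```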
I have checked the generation result, and it is the same as ComfyUI's. It is different from the RTX 3060's result though, and I think that is because of differences in GPU calculation. | |...
> I had to update the torch version, otherwise I would get the same error as #979 (I know this PR doesn't address this, but I thought I'd mention it). > >...
> Unfortunately, does not seem to solve the issue for me - no matter what, I still end up with > > ``` > File "C:\Stability Matrix\Packages\Forge\backend\nn\flux.py", line 407, in...
> > As far as I know, the A770 cannot use CPU offload, so if you use fp16 or fp8, try adding the --disable-ipex-hijack option, or use a quantized model. > >...
Thank you for the report! I will check it this weekend.