intel-extension-for-pytorch
IPEX not working with Stable Diffusion Face Restoration Tools (facexlib)
Describe the bug
I will just copy and paste what I posted here https://github.com/vladmandic/automatic/issues/2628#issuecomment-1859189547 because I was told to.
Issue Description
When using IPEX, face restoration does not work in any Stable Diffusion version. According to the creator of the Reactor extension (a face-swap extension for Stable Diffusion), this is also the reason why face-swap extensions for SD are not working. Everything works fine when using DirectML or OpenVINO; only IPEX is broken. Using the CPU instead of the GPU also seems to work, but that's not really an option.
Steps to reproduce the problem
- Go to Extras in Automatic1111 or Process in SD.NEXT
- Add any image with a face
- Check either GFPGAN or CodeFormer
- Output is identical (1:1) to the input; if face restoration were working, it should look slightly different from the original (a standalone script that reproduces this outside the webui is sketched after these steps)
Or when using Reactor
- Enable Reactor and use any image of a face you want to swap.
- Generate image via text2image.
- The output looks blurred and pixelated
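For reference, a minimal standalone sketch of the GFPGAN repro outside the webui (my own illustration, not taken from this thread): it drives the gfpgan package, which uses facexlib internally, on the XPU device. The checkpoint and image paths (GFPGANv1.4.pth, face.png) are placeholders.

import cv2
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (importing registers the xpu device)
from gfpgan import GFPGANer

device = torch.device("xpu" if torch.xpu.is_available() else "cpu")
restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",  # placeholder: local GFPGAN checkpoint
    upscale=1,
    arch="clean",
    channel_multiplier=2,
    device=device,
)
img = cv2.imread("face.png", cv2.IMREAD_COLOR)  # placeholder input image
# If the bug reproduces on xpu, restored comes back (nearly) identical to img.
_, _, restored = restorer.enhance(img, has_aligned=False, only_center_face=False)
cv2.imwrite("restored.png", restored)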
Version Platform Description
Win 11, Ryzen 5 5600X, Intel Arc A770 16GB, 32GB RAM
Browser: Edge
Versions
intel-extension-for-pytorch==2.0.110+; also tested on v2.1.10+xpu
I guess there is no update on this? It seems it was not even accepted as an issue.
@jingxu10 Could you please help to reproduce?
will reproduce
@ashokei @min-jean-cho FYI.
Thanks for looking into it. I don't know if this error is related to the issue; I get it when I start SD, but I guess it just has something to do with using a different, non-"original" Torch build.
C:\AI\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
warn(
and
You are running torch 2.0.0a0+gite9ebda2.
The program is tested to work with torch 2.0.0.
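A quick diagnostic sketch (my addition, not from the thread) to confirm which builds the venv actually loads and whether the XPU device is visible:

import torch
import intel_extension_for_pytorch as ipex

print(torch.__version__)         # e.g. 2.0.0a0+gite9ebda2 for the XPU build
print(ipex.__version__)          # e.g. 2.0.110+xpu
print(torch.xpu.is_available())  # True if the Arc GPU is usable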
Any update on whether this will be fixed in 2.2.0?
Seconding that; happy to assist with reproducing the issue and providing any details as necessary.
So...did anyone manage to reproduce?
@HyunJae5463 This doesn't seem like an IPEX issue. I tried the Automatic1111 (current master branch) GFPGAN and CodeFormer restoration 1) on CPU (without IPEX) and 2) on GPU (with IPEX), and in both cases the output image looks very similar to the input image. Please try the non-IPEX CPU run and check whether you get the expected output from GFPGAN/CodeFormer.
But isn't that exactly the problem? The output is supposed to not look like the original if it works.
And did you try Reactor and attempt a face swap? https://github.com/Gourieff/sd-webui-reactor
I even tried via WSL and got the same results and errors in Reactor.
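To make "very similar" measurable, here is a small comparison sketch (my illustration; input.png and output.png are placeholders, and both images must have the same dimensions):

import numpy as np
from PIL import Image

def mean_abs_diff(path_a: str, path_b: str) -> float:
    # Average per-pixel absolute difference between two same-sized images.
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float32)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float32)
    return float(np.abs(a - b).mean())

# Near 0.0 => the restorer silently did nothing (the no-op reported here);
# a working GFPGAN/CodeFormer pass gives a clearly non-zero difference.
print(mean_abs_diff("input.png", "output.png"))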
I found a temporary solution by adding --use-cpu gfpgan codeformer to webui-user.bat
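In webui-user.bat that would look roughly like this (assuming the flag goes into COMMANDLINE_ARGS and no other flags are already set there):

set COMMANDLINE_ARGS=--use-cpu gfpgan codeformer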
--use-cpu gfpgan codeformer
Just tried the same on Forge but had no luck. Is that flag specific to A1111? I suspect this flag, which sets those modules' device to CPU, gets ignored once I pass --use-ipex; is it even possible to specify a different device for GFPGAN/CodeFormer?
UPD: indeed, Forge does not seem to support the same level of per-module device selection that A1111 does.
As an easy workaround, one can override the devices in devices.py as follows:
modules/devices.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/modules/devices.py b/modules/devices.py
index 08d0d706..5dc4a375 100644
--- a/modules/devices.py
+++ b/modules/devices.py
@@ -51,9 +51,9 @@ cpu: torch.device = torch.device("cpu")
fp8: bool = False
device: torch.device = model_management.get_torch_device()
device_interrogate: torch.device = model_management.text_encoder_device() # for backward compatibility, not used now
-device_gfpgan: torch.device = model_management.get_torch_device() # will be managed by memory management system
+device_gfpgan: torch.device = torch.device("cpu") # will be managed by memory management system
device_esrgan: torch.device = model_management.get_torch_device() # will be managed by memory management system
-device_codeformer: torch.device = model_management.get_torch_device() # will be managed by memory management system
+device_codeformer: torch.device = torch.device("cpu") # will be managed by memory management system
dtype: torch.dtype = model_management.unet_dtype()
dtype_vae: torch.dtype = model_management.vae_dtype()
dtype_unet: torch.dtype = model_management.unet_dtype()
I've already opened a Feature Request here: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/565
Let's hope they add it.