intel-extension-for-pytorch

IPEX not working with Stable Diffusion Face Restoration Tools (facexlib)

Open · HyunJae5463 opened this issue 1 year ago · 13 comments

Describe the bug

I will just copy and paste what I posted here https://github.com/vladmandic/automatic/issues/2628#issuecomment-1859189547, because I was told to.

Issue Description

When using IPEX, face restoration does not work in any Stable Diffusion version. According to the creator of the Reactor extension (a face-swap extension for Stable Diffusion), this is also the reason why face-swap extensions for SD are not working. Everything works fine with DirectML and OpenVINO; only IPEX is broken. Using the CPU instead of the GPU also seems to work, but that's not really an option.

Steps to reproduce the problem

  1. Go to Extras in Automatic1111 or Process in SD.NEXT
  2. Add any image with a face
  3. Check either GFPGAN or CodeFormer
  4. The output is 1:1 the same as the input (it should look slightly different from the original if face restoration is working); see the code-level sketch below the Reactor steps (screenshot attached)

Or when using Reactor

  1. Enable Reactor and use any image of a face you want to swap.
  2. Generate image via text2image.
  3. The output looks blurred and pixelated (screenshot attached)
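For a code-level reproduction outside the WebUI, the GFPGAN path can be exercised directly. This is only a minimal sketch, assuming the standard GFPGANer API from the gfpgan package, a locally downloaded GFPGANv1.4.pth checkpoint, and an IPEX build that exposes the xpu device; file paths and the checkpoint name are placeholders.

# Minimal GFPGAN repro sketch (assumptions: gfpgan package installed,
# GFPGANv1.4.pth downloaded locally, IPEX build exposing the "xpu" device).
import cv2
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401 - registers torch.xpu
from gfpgan import GFPGANer

# Switch to torch.device("cpu") to compare against the working CPU path.
device = torch.device("xpu") if torch.xpu.is_available() else torch.device("cpu")

restorer = GFPGANer(
    model_path="GFPGANv1.4.pth",  # placeholder path to the checkpoint
    upscale=1,
    arch="clean",
    channel_multiplier=2,
    bg_upsampler=None,
    device=device,
)

img = cv2.imread("face.png", cv2.IMREAD_COLOR)  # placeholder input image
_, _, restored = restorer.enhance(
    img, has_aligned=False, only_center_face=False, paste_back=True
)
cv2.imwrite("face_restored.png", restored)

# If the bug reproduces, the xpu output is (near-)identical to the input,
# while the cpu run produces a visibly retouched face.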

Version Platform Description

Win 11, Ryzen 5 5600X, Intel Arc A770 16GB, 32GB RAM

Browser: Edge

Versions

intel-extension-for-pytorch==2.0.110+ (also tested on v2.1.10+xpu)

HyunJae5463 · Dec 17 '23

Guess there's no update on this? It seems like it was not even "accepted" as an issue.

HyunJae5463 · Jan 14 '24

@jingxu10 Could you please help to reproduce?

tye1 · Jan 16 '24

will reproduce

jingxu10 · Jan 26 '24

@ashokei @min-jean-cho FYI.

jingxu10 · Jan 26 '24

Thanks for looking into it. I don't know if this error is related to the issue; I get it when I start SD, but I guess it just has something to do with using a different Torch build rather than the "original" one.

C:\AI\Automatic1111\stable-diffusion-webui\venv\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(

and

You are running torch 2.0.0a0+gite9ebda2.
The program is tested to work with torch 2.0.0.
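For what it's worth, the Torch/IPEX combination that is actually active in the venv can be confirmed with a quick check; a small sketch, assuming IPEX is installed in the same environment:

# Print the active torch and IPEX versions and whether the Arc GPU is visible.
import torch
import intel_extension_for_pytorch as ipex

print("torch:", torch.__version__)
print("ipex:", ipex.__version__)
print("xpu available:", torch.xpu.is_available())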

HyunJae5463 · Jan 31 '24

Any update on whether this will be fixed in 2.2.0?

HyunJae5463 · Feb 08 '24

Seconding that - happy to assist with reproducing the issue and providing any details as necessary.

redrum-llik · Feb 08 '24

So...did anyone manage to reproduce?

HyunJae5463 · Feb 23 '24

@HyunJae5463 This doesn't seem like an IPEX issue. I tried the Automatic1111 (current master branch) GFPGAN and CodeFormer restoration 1) on CPU (without IPEX) and 2) on GPU (with IPEX), and in both cases the output image looks very similar to the input image. Please try the non-IPEX CPU run and check whether you get the expected output from GFPGAN/CodeFormer.
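One quick way to tell a real restoration from a pass-through is to measure the pixel difference between the input and the restored output. A rough sketch, assuming both images are saved to disk at the same resolution (upscale factor 1) and that numpy and Pillow are available; file names are placeholders:

import numpy as np
from PIL import Image

# Load the original and the restored image as float arrays.
src = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32)
out = np.asarray(Image.open("output.png").convert("RGB"), dtype=np.float32)

# Mean absolute per-pixel difference: ~0 means the restorer was effectively a no-op.
print("mean abs diff:", np.abs(src - out).mean())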

vishnumadhu365 · Feb 27 '24


But isn't that exactly the problem? It's supposed to not look like the original if it works. And did you try Reactor and attempt to swap a face? https://github.com/Gourieff/sd-webui-reactor

I even tried via WSL and got the same results and errors in Reactor.

HyunJae5463 · Feb 29 '24

I found a temporary solution by adding --use-cpu gfpgan codeformer to webui-user.bat
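For anyone else hitting this: the flag goes on the COMMANDLINE_ARGS line of webui-user.bat. The excerpt below is only a sketch based on the stock template; keep whatever other flags are already set on that line.

rem webui-user.bat - only the COMMANDLINE_ARGS line is relevant here
set COMMANDLINE_ARGS=--use-cpu gfpgan codeformer
call webui.bat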

HyunJae5463 · Mar 09 '24

--use-cpu gfpgan codeformer

Just tried the same on Forge but had no luck. Is that flag specific to A1111? I suspect it gets ignored after I pass --use-ipex, which sets the device to CPU - is it even possible to specify a different device for GFPGAN/CodeFormer?

UPD: indeed, Forge does not seem to support the same level of differentiation as A1111 does. As an easy workaround, one may override the devices in devices.py as follows:

 modules/devices.py | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/modules/devices.py b/modules/devices.py
index 08d0d706..5dc4a375 100644
--- a/modules/devices.py
+++ b/modules/devices.py
@@ -51,9 +51,9 @@ cpu: torch.device = torch.device("cpu")
 fp8: bool = False
 device: torch.device = model_management.get_torch_device()
 device_interrogate: torch.device = model_management.text_encoder_device()  # for backward compatibility, not used now
-device_gfpgan: torch.device = model_management.get_torch_device()  # will be managed by memory management system
+device_gfpgan: torch.device = torch.device("cpu")  # will be managed by memory management system
 device_esrgan: torch.device = model_management.get_torch_device()  # will be managed by memory management system
-device_codeformer: torch.device = model_management.get_torch_device()  # will be managed by memory management system
+device_codeformer: torch.device = torch.device("cpu")  # will be managed by memory management system
 dtype: torch.dtype = model_management.unet_dtype()
 dtype_vae: torch.dtype = model_management.vae_dtype()
 dtype_unet: torch.dtype = model_management.unet_dtype()

redrum-llik · Mar 19 '24

I've already opened a Feature Request here: https://github.com/lllyasviel/stable-diffusion-webui-forge/issues/565

Let's hope they add it.

HyunJae5463 · Mar 19 '24