intel-extension-for-pytorch

Stable Diffusion run error after upgrading torch and IPEX to 2.0: "Torch is not able to use an Intel GPU. Try running without --use-intel-oneapi"

Open · hustari opened this issue 2 years ago · 4 comments

Describe the issue

Hi there,

I previously ran SD with an Intel Arc GPU in WSL with torch (1.13.0a) + IPEX without any problems. Today I upgraded from 1.13 to 2.0 and SD-WebUI now fails to start. The error message I get:

sean@DESKTOP-CORE13ARC:~/stable-diffusion-webui$ ./start.sh
Python 3.9.16 (main, Feb 22 2023, 01:57:33) [GCC 11.2.0]
Commit hash: 2b316c206c84221b94e67456c3811f4df3f699e9
Traceback (most recent call last):
  File "/home/sean/stable-diffusion-webui/launch.py", line 359, in <module>
    prepare_environment()
  File "/home/sean/stable-diffusion-webui/launch.py", line 264, in prepare_environment
    run_python("import torch; import intel_extension_for_pytorch; assert torch.xpu.is_available(), 'Torch is not able to use an Intel GPU. Try running without --use-intel-oneapi'")
  File "/home/sean/stable-diffusion-webui/launch.py", line 120, in run_python
    return run(f'"{python}" -c "{code}"', desc, errdesc)
  File "/home/sean/stable-diffusion-webui/launch.py", line 96, in run
    raise RuntimeError(message)
RuntimeError: Error running command.
Command: "/opt/intel/oneapi/intelpython/latest/bin/python3" -c "import torch; import intel_extension_for_pytorch; assert torch.xpu.is_available(), 'Torch is not able to use an Intel GPU. Try running without --use-intel-oneapi'"
Error code: 1

The packages I have installed:

sean@DESKTOP-CORE13ARC:~/stable-diffusion-webui$ pip list | grep torch
intel-extension-for-pytorch 2.0.110+xpu
open-clip-torch             2.7.0
pytorch-lightning           1.9.4
torch                       2.0.1a0+cxx11.abi
torchdiffeq                 0.2.3
torchmetrics                0.11.4
torchsde                    0.2.5
torchvision                 0.15.2a0+cxx11.abi
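
The check the webui runs boils down to the assert in the traceback above; running it by hand in the same interpreter (a minimal standalone check, nothing webui-specific) fails the same way:

# standalone version of the check that launch.py runs
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device type with torch

print("torch:", torch.__version__, "ipex:", ipex.__version__)
print("xpu available:", torch.xpu.is_available())
if torch.xpu.is_available():
    print("device:", torch.xpu.get_device_name(0))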

Any suggestions would be appreciated!

Thanks!

hustari · Sep 06 '23 05:09

Found the underlying cause:

ImportError: /home/sean/.local/lib/python3.9/site-packages/intel_extension_for_pytorch/lib/libintel-ext-pt-gpu.so: undefined symbol: _ZN4sycl3_V16detail14tls_code_loc_tD1Ev
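
That undefined sycl::_V1 symbol suggests the libsycl.so being picked up at load time is older than the one IPEX 2.0.110+xpu was built against. A minimal sketch to see which SYCL runtime actually gets mapped into the process (the import and library name match the error above):

import torch  # torch imports first, as launch.py does

try:
    import intel_extension_for_pytorch  # reproduces the ImportError above
except ImportError as err:
    print("import failed:", err)

# list every SYCL runtime library currently mapped into this process, to compare
# against the oneAPI version that was sourced via setvars.sh
with open("/proc/self/maps") as maps:
    sycl_libs = {line.split()[-1] for line in maps if "libsycl" in line}
print(sycl_libs or "no libsycl.so mapped")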

hustari · Sep 06 '23 05:09

Which oneAPI version are you using?

jingxu10 · Sep 10 '23 21:09

I can repro from a different angle.

Following https://www.intel.com/content/www/us/en/developer/articles/technical/stable-diffusion-with-intel-arc-gpus.html

Build this:

# syntax=docker/dockerfile:1.6
FROM intel/intel-extension-for-pytorch:gpu
ARG DEBIAN_FRONTEND=noninteractive 
RUN update-ca-certificates
RUN --mount=type=cache,target=/var/cache/apt <<EOF
apt update
apt upgrade -y
EOF
RUN --mount=type=cache,target=/var/cache/apt <<EOG
apt install -y python3-venv python3-pip nano tree zstd
EOG
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip <<EOH
pip install --upgrade pip setuptools
EOH
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip <<EOJ
pip install jupyter diffusers transformers accelerate
EOJ
ENTRYPOINT [ "jupyter", "notebook", "--allow-root", "--ip", "0.0.0.0", "--port", "9999" ]

Then build and run it:

docker build --tag arcsd .
docker run -it --rm --device /dev/dri -p 9999:9999 -v "$PWD/data:/data:rw" arcsd

This is Docker Engine on Ubuntu 23.04, bare metal. The GPU in question is an Arc A770 16 GB.

Run the notebook code from https://www.intel.com/content/www/us/en/developer/articles/technical/stable-diffusion-with-intel-arc-gpus.html and you will get the failure when importing IPEX.
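
For reference, you don't need the full notebook to hit it; a minimal in-container check (run with the image's python3) is enough:

# confirm the render node was actually passed through, then hit the failing import
import os
print(os.listdir("/dev/dri"))  # expect card*/renderD* entries from --device /dev/dri

import torch
import intel_extension_for_pytorch as ipex  # this import is where it falls over

print("xpu available:", torch.xpu.is_available())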

ghost · Oct 19 '23 05:10

I found a workaround: a more up-to-date docker image that doesn't get many search hits, xpu-jupyter.

Internally this supplies you with a working torch 2.x installation.

Below works, tested today.

# syntax=docker/dockerfile:1.6
FROM intel/intel-extension-for-pytorch:xpu-jupyter
USER root
ARG DEBIAN_FRONTEND=noninteractive 
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip python3 -m pip install diffusers transformers accelerate
RUN --mount=type=cache,mode=0755,target=/root/.cache/pip python3 -m pip install jupyter
ENTRYPOINT [ "jupyter", "notebook", "--allow-root", "--ip", "0.0.0.0", "--port", "9999" ]

Then build and run it:

docker build --tag arcsd .
docker run -it --rm --device /dev/dri -p 9999:9999 -v "$PWD/data:/data:rw" arcsd

Put this in a notebook:

import intel_extension_for_pytorch as ipex
import torch
from diffusers import StableDiffusionPipeline

# check Intel GPU
print(ipex.xpu.get_device_name(0))

# (ignore the warning about image libraries)

# load the Stable Diffusion model
pipe = StableDiffusionPipeline.from_pretrained("/data/stable-diffusion-v1-5", 
                                               safety_checker=None,
                                               torch_dtype=torch.bfloat16,
                                               use_safetensors=True)
# move the model to Intel Arc GPU
pipe = pipe.to("xpu")

# model is ready for submitting queries
for i in range(2):
    for image in pipe("The personification of spring in the form of a gorgeous golden retriever with a smile, (((gorgeous golden retriever))), highly detailed, sharp focus, sun rays, trending on artstation, 4k", num_images_per_prompt=3, height=512, width=512).images:
        display(image)
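
Optionally, you can also run the UNet through ipex.optimize before generating; a sketch, assuming the same pipe object created in the cell above:

import torch
import intel_extension_for_pytorch as ipex

# optional: apply IPEX's bfloat16 optimizations to the UNet in place;
# 'pipe' is the StableDiffusionPipeline loaded above
pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)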

ghost · Oct 20 '23 05:10