xpu not supported?
Hi, I have an ADL GT1 GPU and would like to use IPEX. I have installed the following software versions:
intel-opencl-icd 22.23.23405+i755~u20.04
intel-level-zero-gpu 1.3.23405+i755~u20.04
intel-oneapi-runtime-dpcpp-cpp-common 2022.1.0-376
intel-extension-for-pytorch 1.11.08
torch 1.11.0
clinfo sees the card:
Platform Name Intel(R) OpenCL HD Graphics
But when I run the following code:
import torch
import intel_extension_for_pytorch as ipex
f = torch.zeros((1,2,3)).to("xpu")
I get
RuntimeError: PyTorch is not linked with support for xpu devices
Am I missing a requirement or is xpu not yet supported? I want to use ipex with Arc A380, but if it isn't supported yet...
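For what it's worth, a defensive way to probe for XPU support without crashing is the sketch below; it assumes XPU-enabled builds register a `torch.xpu` namespace with `is_available()`, which may vary by version, and it falls back to CPU when torch or the extension is missing entirely.

```python
# Sketch: pick "xpu" if an XPU-enabled build is present, else fall back to CPU.
# Assumption: XPU builds of intel_extension_for_pytorch expose a torch.xpu
# namespace with is_available(); treat that as an assumption, not a guarantee.
def pick_device():
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  (registers the xpu backend)
        xpu = getattr(torch, "xpu", None)
        if xpu is not None and xpu.is_available():
            return "xpu"
    except (ImportError, RuntimeError):
        pass  # no torch/ipex installed, or the build lacks xpu support
    return "cpu"

device = pick_device()
print(device)
```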
XPU is not supported yet. It is something to add in the near future. Please stay tuned.
Any update on this, may I ask @jgong5? The TensorFlow counterpart has just been released and is tested working on my machine (at least) (https://blog.tensorflow.org/2022/10/accelerating-tensorflow-on-intel-data-center-gpu-flex-series.html), so I was just wondering how much progress has been made for PyTorch :+1:
@tedliosu Around when you posted this, release v1.10.200 came out with XPU support, based on the xpu-master branch.
Unfortunately, as you can tell from the version, it's based on a fairly old Torch (1.10), whereas 1.13 was just released. The new Intel TensorFlow extension supports the latest TensorFlow.
@xsacha The extension being based on a fairly old version of torch shouldn't be too much of a problem, as I can always use a virtualenv to avoid potential dependency conflicts, and coming from daily Ubuntu use I'm pretty used to software that is a version or two behind lol. May I ask, though, whether the newest intel_extension_for_pytorch release supports XMX-accelerated training and inference? That'll be a key factor in my GPU-purchasing decisions in the near future, so I just want to make sure that when I get an Intel Arc I am able to use the hardware to its full capabilities :smile:
EDIT - Also sorry I almost forgot to ask, but will the intel-optimized-pytorch docker image be updated then to support XPUs at some point in the very near future as well?
You can use the docker file from here: https://github.com/intel/intel-extension-for-pytorch/commit/e67ce74a5b4a8c428c0bf4a4ebe0737c7f68b086 It's in xpu-master branch and was updated to use XPU
@xsacha Thank you so much! Do you know, though, whether the newest intel_extension_for_pytorch release supports XMX-accelerated training and inference? :eyes:
Yes, intel_extension_for_pytorch supports XMX via oneDNN.
Instructions in the README for gpu seem to be incorrect. There is no pytorch version 1.10.0a0 available:
python -m pip install torch==1.10.0a0 -f https://developer.intel.com/ipex-whl-stable-xpu
Looking in links: https://developer.intel.com/ipex-whl-stable-xpu
ERROR: Could not find a version that satisfies the requirement torch==1.10.0a0 (from versions: 1.11.0, 1.12.0, 1.12.1, 1.13.0)
ERROR: No matching distribution found for torch==1.10.0a0
I don't see the whl files in https://www.intel.com/content/dam/develop/external/us/en/documents/ipex/whl-stable-xpu.html either.
@tripzero Oh yea I ran into the same issue earlier when trying to get IPEX up and running on my machine; I simply changed the pytorch version to 1.10.0a0+git3d5f2d4 and then it worked fine :smile: You're right though that the documentation should be corrected to account for this discrepancy between what is documented and what actually exists in the repositories.
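For reference, the workaround looks like the setup command below; treat the `+git3d5f2d4` local-version tag as an example taken from the comment above, since the exact tag is whatever the wheel index actually lists.

```shell
# Pin the full local version string listed on the wheel index,
# not the bare 1.10.0a0 that the README shows.
python -m pip install torch==1.10.0a0+git3d5f2d4 -f https://developer.intel.com/ipex-whl-stable-xpu
```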
https://developer.intel.com/ipex-whl-stable-xpu
Can you access https://developer.intel.com/ipex-whl-stable-xpu? What do you see in that webpage?
May I know which Python version you were using when installation failed with 1.10.0a0 but succeeded with 1.10.0a0+git3d5f2d4?
It was Python 3.9, I believe.
@jingxu10
On Python 3.8, it correctly installed that torch 1.10.0a0 without requiring the git hash. However, importing torch then fails with: OSError: libmkl_intel_lp64.so.2: cannot open shared object file: No such file or directory
This suggests the MKL dependencies were not supplied. I resolved it by installing mkl.
Then, when I imported ipex, I got: ImportError: libmkl_sycl.so.2: cannot open shared object file: No such file or directory
This suggests the SYCL/DPC++ dependencies weren't present. I resolved it by installing mkl-dpcpp.
To keep it simple: pip3 install mkl-dpcpp
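A quick way to check which of those MKL shared libraries the dynamic loader can actually find is the sketch below; the library base names come from the error messages above, and note that `ctypes.util.find_library` may not honor `LD_LIBRARY_PATH` on every platform, so a miss is a hint rather than proof.

```python
import ctypes.util

def missing_mkl_libs(names=("mkl_intel_lp64", "mkl_sycl")):
    """Return the library base names the loader cannot locate."""
    return [n for n in names if ctypes.util.find_library(n) is None]

missing = missing_mkl_libs()
if missing:
    print("Loader cannot find:", missing)
    print("Try: pip3 install mkl-dpcpp  (or activate the oneAPI environment)")
```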
To use IPEX and PyTorch on GPU, you need oneMKL from the oneAPI Base Toolkit. You can activate the environment via the command below, assuming you installed oneAPI to /opt/intel/oneapi.
source /opt/intel/oneapi/setvars.sh
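A trivial sanity check that the setvars.sh script has actually been sourced in the current shell is the heuristic below; it merely looks for a "oneapi" path component in `LD_LIBRARY_PATH`, which is an assumption about the default install layout.

```python
import os

def oneapi_env_looks_active(var="LD_LIBRARY_PATH"):
    """Heuristic: setvars.sh prepends oneAPI library paths to LD_LIBRARY_PATH."""
    return "oneapi" in os.environ.get(var, "")

print(oneapi_env_looks_active())
```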
@jingxu10
That is the correct web page content. You should be able to get it installed without the git hash. Could you help double-confirm?

@jingxu10 So yeah, I guess it works lol; @tripzero were you maybe trying to install it using the provided Dockerfile here? Because the first time I ran into the same issue you did was when I was trying to install using the Dockerfile.
I was not trying to install via the dockerfile. I am using python 3.10.8 though. Perhaps that's the issue?
Yes, that's the issue. The prebuilt wheel files are for Python 3.6 to Python 3.9.
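That constraint can be expressed as a small guard before attempting the install; this is a sketch, and the 3.6–3.9 range is taken from the comment above and may change in later releases.

```python
import sys

def wheel_python_supported(version_info=sys.version_info):
    """True if the interpreter falls in the 3.6-3.9 range the prebuilt wheels target."""
    major, minor = version_info[0], version_info[1]
    return (3, 6) <= (major, minor) <= (3, 9)

if not wheel_python_supported():
    print(f"Python {sys.version_info[0]}.{sys.version_info[1]}: "
          "no prebuilt wheel; build from source or switch interpreters.")
```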
Is XPU supported now?