xformers
No module named 'torch'
I'm really not used to using the command prompt, and I'm guessing this is an issue with torch, but I even reinstalled it and I'm still getting this error. Any ideas?
(I know the absolute bare minimum about this stuff, so please talk to me like I'm an idiot, lol)
(venv) C:\Users\Nutri>pip install -U xformers
Collecting xformers
Using cached xformers-0.0.16.tar.gz (7.3 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
Traceback (most recent call last):
File "C:\Users\Nutri\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\Nutri\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nutri\venv\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nutri\AppData\Local\Temp\pip-build-env-w3jebqad\overlay\Lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nutri\AppData\Local\Temp\pip-build-env-w3jebqad\overlay\Lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\Nutri\AppData\Local\Temp\pip-build-env-w3jebqad\overlay\Lib\site-packages\setuptools\build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Nutri\AppData\Local\Temp\pip-build-env-w3jebqad\overlay\Lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 23, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
You should install torch before installing xFormers.
I already had torch, I needed it to install stable diffusion.
I think the problem is that torch is imported in setup.py (see here: https://github.com/facebookresearch/xformers/blob/main/setup.py#L23).
This is problematic when installing all the dependencies of a project from scratch in an environment. All the dependencies are resolved properly, and torch is also going to be installed... but first, the setup.py of every project is invoked, and at that stage torch has not been installed yet. I don't know exactly what the solution is, but it seems that this breaks the contract of setup.py.
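To make the failure mode concrete, here is a minimal sketch (illustrative only, not xformers' actual setup.py): pip's build hook exec()s the setup script, and a top-level import of torch fails if torch is not present in the build environment. The missing package is simulated here with a deliberately nonexistent module name.

```python
# Sketch of the failure mode: setuptools' build_meta runs the setup
# script via exec(), so a top-level "import torch" raises immediately
# when torch is absent from the build environment. We simulate the
# missing package with a deliberately nonexistent module name.
setup_py_source = "import torch_missing_in_build_env\n"

try:
    # Mirrors setuptools' build_meta, which does exec(code, locals())
    exec(compile(setup_py_source, "<string>", "exec"), {})
except ModuleNotFoundError as exc:
    print(f"build hook fails with: {exc}")
```

This is exactly the shape of the traceback above: the error surfaces from `exec(code, locals())` inside `build_meta.py`, at line 23 of the setup script, before pip has had any chance to install torch.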
The workaround I found is to install the torch dependency in advance. For example:
pip install torch
pip install -r requirements.txt
Thank you, I'll give that a try after work :)
I am also getting this issue, and am unsure how to solve it. I am adding this package inside a venv which already has the latest PyTorch installed, so I can only assume that this "subprocess" is doing something in another environment. I'm not a python/pip expert at all.
Same issue for me: torch is already installed and working, but I can't install xformers. Seems specific to Windows, as I could install xformers without issues on macOS the other day.
I'm hitting this issue with poetry on macOS.
A trick is to first make sure torch itself is installed correctly, and then just run pip install xformers --no-dependencies.
Yep, installing pytorch from source and then pip install xformers --no-dependencies works for me.
@jinmingyi1998 do I understand correctly that the suggested workaround requires not putting xformers in my pyproject.toml and manually installing it instead? That's definitely not an ideal use of poetry, is it?
Am I understanding correctly that xformers is expressly choosing to disregard the recommendation against "building packages in the destination environment" mentioned in this comment?
That's definitely not a good situation, but we couldn't find a satisfying solution. The problem with not building in the destination environment is that you might end up building xFormers with a different version of Pytorch than the one in your environment, which will break everything. We had some discussion in this PR: https://github.com/facebookresearch/xformers/pull/743
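For intuition, here's a tiny sketch of the constraint being described (a hypothetical helper for illustration, not xformers' actual code): a binary built against one torch release only works with that same release at runtime, where local build tags such as '+cu118' don't affect the release comparison.

```python
def abi_compatible(built_against: str, runtime_version: str) -> bool:
    """Compare only the release part of the version, ignoring local
    build tags such as '+cu118' (hypothetical check, for illustration)."""
    release = lambda v: v.split("+")[0]
    return release(built_against) == release(runtime_version)

print(abi_compatible("2.0.1", "2.0.1+cu118"))  # same release: True
print(abi_compatible("2.0.1", "2.1.0"))        # different release: False
```

Building in the destination environment guarantees this check passes by construction, which is the trade-off discussed in the linked PR.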
Yes, I saw that discussion. Is it possible to provide some pre-built wheels that encode that relationship? E.g. I could declare a dependency on xformers-pytorch-2-0-1 = "^0.0.20". It's a little annoying for the library user to have to state the pytorch version twice, but it's less annoying than not being able to use poetry properly. And you could still use the base xformers dependency with a custom build if you don't want to use one of the already-built xformers binaries?
You can also use the PyPI wheels, which are already built: https://pypi.org/project/xformers/#history
As the binaries are built with a specific version of pytorch, they have that version pinned as a requirement. I hope this helps...
Hmm, maybe I missed something. The issue above happens when you use xformers = "^0.0.20" with poetry, which should be fetching one of those wheels. A minimal poetry project with this pyproject.toml:
[tool.poetry]
name = "xformers-test"
version = "0.1.0"
description = ""
authors = []
readme = "README.md"
packages = [{include = "xformers_test"}]
[tool.poetry.dependencies]
python = "^3.11"
xformers = "^0.0.20"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
will crash on poetry install with:
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
...
ModuleNotFoundError: No module named 'torch'
I'm using
xformers-test % poetry --version
Poetry (version 1.5.1)
xformers-test % pip3 --version
pip 22.3.1 from /opt/homebrew/lib/python3.11/site-packages/pip (python 3.11)
Also running into this issue. My pip requirements file has these versions:
torch==2.0.1
notebook==6.5.4
transformers==4.30.0
matplotlib==3.7.1
scikit-learn==1.2.2
pandas==2.0.2
xformers==0.0.20
and I am running in a venv with py3.10. It looks to me that either 1) xformers is uninstalling torch before its own install, or 2) the xformers install is ignoring venv paths and installing on the machine natively (and so does not see the installed torch dependency).
I tried pip install --pre xformers, pip install xformers==0.0.20, and pip install xformers, all with the same result:
Collecting xformers
Using cached xformers-0.0.20.tar.gz (7.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [17 lines of output]
Traceback (most recent call last):
File "/Users/daryafilippova/dev/environments/pytorch_2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/daryafilippova/dev/environments/pytorch_2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/daryafilippova/dev/environments/pytorch_2/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "/private/var/folders/ft/jrq7289j5d5_mvl5phb5pnlm0000gn/T/pip-build-env-jjxyy34d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "/private/var/folders/ft/jrq7289j5d5_mvl5phb5pnlm0000gn/T/pip-build-env-jjxyy34d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/private/var/folders/ft/jrq7289j5d5_mvl5phb5pnlm0000gn/T/pip-build-env-jjxyy34d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "/private/var/folders/ft/jrq7289j5d5_mvl5phb5pnlm0000gn/T/pip-build-env-jjxyy34d/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 23, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
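For what it's worth, my understanding of pip's behavior here (worth verifying against your setup): neither of the two hypotheses above is quite it. Since PEP 517/518, pip builds sdists in an isolated temporary environment that contains only the declared build requirements, so the torch installed in your venv is invisible to setup.py during the build. Disabling build isolation makes the build run against your environment instead:

```shell
# pip builds sdists in an isolated throwaway env (PEP 517/518), so
# your venv's torch is invisible to setup.py during the build.
# The workaround is to build against the current environment:
#
#   pip install torch
#   pip install xformers --no-build-isolation
#
# Confirm your pip supports the flag:
pip install --help | grep -- --no-build-isolation
```

This also explains why the traceback points at a temp directory like pip-build-env-...: that is the isolated overlay environment, not your venv.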
You can also use the pypi wheels which are already built:
I see that there are no prebuilt binaries for MacOS in PyPI, which would explain why it tries to build locally.
xformers is not compatible with MacOS
@danthe3rd Facebook should put that in the README install instructions. It would save so many of us wasted time.
(Sure, it says "Recommended: Windows & Linux", but that's much less clear than actually saying "Does not work on Mac".)
HuggingFace complains that xformers is not available. I know that's HF's fault (or maybe torch?), but it's annoying that I can't silence the warning or that the warning exists at all. I'll figure out where the warning is coming from and file an issue there.
I am getting the same issue, I have a Mac M1 Pro chip.
I ensured that pytorch is installed before trying to install xformers.
>>> print(torch.__version__)
2.0.1
I still get the same error:
Collecting xformers
Using cached xformers-0.0.20.tar.gz (7.6 MB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [21 lines of output]
Traceback (most recent call last):
File "/Users/sidhartharoy/research-stage/llm-databricks/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File "/Users/sidhartharoy/research-stage/llm-databricks/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/sidhartharoy/research-stage/llm-databricks/env/lib/python3.11/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/mv/wh9dwtcd2wg8x2d82s0rbstm0000gq/T/pip-build-env-j67vrkzj/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/mv/wh9dwtcd2wg8x2d82s0rbstm0000gq/T/pip-build-env-j67vrkzj/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "/private/var/folders/mv/wh9dwtcd2wg8x2d82s0rbstm0000gq/T/pip-build-env-j67vrkzj/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 488, in run_setup
self).run_setup(setup_script=setup_script)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/private/var/folders/mv/wh9dwtcd2wg8x2d82s0rbstm0000gq/T/pip-build-env-j67vrkzj/overlay/lib/python3.11/site-packages/setuptools/build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 23, in <module>
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
I tried the following options:
pip install --pre -U xformers
pip install xformers --no-dependencies
pip install ninja
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
I still get the exact same error as shown above.
I completely forgot I started this question, my bad... I ended up downloading the "A1111 WebUI Easy Installer and Launcher". It installs all dependencies and everything you need, and has an option to install xformers. Worked like a charm.
--no-dependencies
Getting the same on Linux with torch 2.0.1.
this thread is cursed
Here is the solution for my environment. Firstly, my server is aarch64, and maybe this problem doesn't exist on the x86_64 platform at all. I got the same error messages as above when I tried pip install xformers:
ModuleNotFoundError: No module named 'torch'
But I had already installed torch, so I followed the suggestions in this issue and tried --no-dependencies and --pre -U; unfortunately, nothing worked.
Then I suddenly noticed I hadn't installed wheel yet, and thought things might change if I installed it:
pip install wheel
Indeed, after wheel was installed, the "ModuleNotFoundError: No module named 'torch'" went away. It started to compile, but I got a new error message:
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
File "/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/cpp_extension.py", line 1900, in _run_ninja_build
subprocess.run(
File "/stable-diffusion-webui/runtime/python/3.10.6/lib/python3.10/subprocess.py", line 524, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.
I installed ninja:
pip install ninja
But I still got the same compile errors.
Then I tried building from the xformers source code:
git clone https://github.com/facebookresearch/xformers.git
git checkout v0.0.16rc425
pip install .
Still no luck; same compile errors.
Then I looked at the error messages again and found a clue:
error: ‘vld1q_f32_x2’ was not declared in this scope
I searched for the error, and it's said to be a gcc problem on aarch64 that will be fixed in gcc-8.4.1-1.
I happened to have gcc-10.3.1 at hand, so I tried it, and it works!
CC=/gcc1031/bin/gcc CXX=/gcc1031/bin/c++ pip install .
Building wheels for collected packages: xformers
Building wheel for xformers (setup.py) ... done
Created wheel for xformers: filename=xformers-0.0.16+6f3c20f.d20231026-cp310-cp310-linux_aarch64.whl size=7641635 sha256=222f900bf3f4917368b6aa1108080ccdd59d1c2589e2553a2c94dea57cee86f0
Stored in directory: /tmp/pip-ephem-wheel-cache-hnw74sa9/wheels/6f/f5/81/43807df029d66aff5bf231868b0c3f7fbde0b32ed35d595fe7
Successfully built xformers
DEPRECATION: pytorch-lightning 1.7.6 has a non-standard dependency specifier torch>=1.9.. pip 24.0 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of pytorch-lightning or contact the author to suggest that they release a version with a conforming dependency specifier. Discussion can be found at https://github.com/pypa/pip/issues/12063
DEPRECATION: torchsde 0.2.5 has a non-standard dependency specifier numpy>=1.19.; python_version >= "3.7". pip 24.0 will enforce this behaviour change. A possible replacement is to upgrade to a newer version of torchsde or contact the author to suggest that they release a version with a conforming dependency specifier. Discussion can be found at https://github.com/pypa/pip/issues/12063
Installing collected packages: xformers
Successfully installed xformers-0.0.16+6f3c20f.d20231026
Also struggling with this issue now on macOS M1. Is there any reasonable work around? Would want to avoid running in Docker Desktop when debugging unit tests.
The real solution. Many thanks.
@MichaelMalike so it pip-installs, but when you say it "works", does that mean applications will execute properly? I can imagine linker or runtime errors if different C/C++ compiler versions are used for parts of the same executable.
(i.e. I'm hoping it won't be necessary to re-build the whole environment using the new gcc.)
My environment has no Nvidia card, so I only did a very simple test:
python3.10 -m xformers.info
xFormers 0.0.16+6f3c20f.d20231026
memory_efficient_attention.cutlassF: available
memory_efficient_attention.cutlassB: available
memory_efficient_attention.flshattF: unavailable
memory_efficient_attention.flshattB: unavailable
memory_efficient_attention.smallkF: available
memory_efficient_attention.smallkB: available
memory_efficient_attention.tritonflashattF: unavailable
memory_efficient_attention.tritonflashattB: unavailable
swiglu.fused.p.cpp: available
is_triton_available: False
is_functorch_available: False
pytorch.version: 1.13.1
pytorch.cuda: not available
build.info: available
build.cuda_version: None
build.python_version: 3.10.6
build.torch_version: 1.13.1
build.env.TORCH_CUDA_ARCH_LIST: None
build.env.XFORMERS_BUILD_TYPE: None
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: None
build.env.XFORMERS_PACKAGE_FROM: None
source.privacy: open source
From the above result, at least the xformers built by the new gcc loads alongside the other, older components. I'm not sure whether it behaves correctly in real interaction with those older components, though.
pip install wheel
this solved it for me!
pip install xformers --no-dependencies
I personally don't like these workarounds. But I need the package installed. Since I am using a poetry environment, I ran the following commands:
poetry run python -m pip install torch
poetry run python -m pip install xformers==<version> --no-dependencies
Then, for my project:
poetry install
Again, not great.