xformers
Windows build instructions fail because of git submodule checkout issues
🐛 Bug
Performing a Windows build using the pip command given in README.md fails due to multiple errors involving git submodules.
Command
pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
To Reproduce
Steps to reproduce the behavior:
- Run the command: pip install -v -U git+https://github.com/facebookresearch/xformers.git@main#egg=xformers
See bottom of bug report for complete output.
Expected behavior
The pip install should complete successfully ;)
Environment
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A
OS: Microsoft Windows 10 Pro
GCC version: (Rev1, Built by MSYS2 project) 11.3.0
Clang version: 14.0.0
CMake version: version 3.20.0-rc3
Libc version: N/A
Python version: 3.10.8 | packaged by conda-forge | (main, Nov 24 2022, 14:07:00) [MSC v.1916 64 bit (AMD64)] (64-bit runtime)
Python platform: Windows-10-10.0.19044-SP0
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3090
Nvidia driver version: 526.47
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] numpy==1.23.4
[pip3] torch==1.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 11.3.1 h59b6b97_2
[conda] mkl 2021.4.0 haa95532_640
[conda] mkl-service 2.4.0 py310h2bbff1b_0
[conda] mkl_fft 1.3.1 py310ha0764ea_0
[conda] mkl_random 1.2.2 py310h4ed8f06_0
[conda] numpy 1.23.4 py310h60c9a35_0
[conda] numpy-base 1.23.4 py310h04254f7_0
[conda] pytorch 1.12.1 py3.10_cuda11.3_cudnn8_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] torchvision 0.13.1 py310_cu113 pytorch
- PyTorch Version (e.g., 1.0): 1.12.1
- OS (e.g., Linux): Windows 10
- How you installed PyTorch (conda, pip, source): conda
- Build command you used (if compiling from source):
- Python version: 3.10.8
- CUDA/cuDNN version: 11.7.99
- GPU models and configuration: RTX 3090
- Any other relevant information:
Additional context
Here is the output (abbreviated):
Running command git symbolic-ref -q HEAD
refs/heads/main
Resolved https://github.com/facebookresearch/xformers.git to commit 71205ec0993239779f8669b3d16d9df56d099d49
Running command git submodule update --init --recursive -q
error: unable to create file docs/classcutlass_1_1epilogue_1_1thread_1_1LinearCombinationRelu_3_01ElementOutput___00_01Count_00_014d4e40c4295be6a8d8778d86e94fe14a.html: Filename too long
error: unable to create file docs/classcutlass_1_1epilogue_1_1thread_1_1LinearCombinationRelu_3_01ElementOutput___00_01Count_00_01int_00_01float_00_01Round_01_4.html: Filename too long
error: unable to create file docs/classcutlass_1_1epilogue_1_1warp_1_1FragmentIteratorComplexTensorOp_3_01WarpShape___00_01Operato65e8dd1d709c1257fe4e30825dcc5f06.html: Filename too long
error: unable to create file
... output abbreviated ...
docs/structcutlass_1_1transform_1_1threadblock_1_1RegularTileIterator_3_01Shape___00_01Element___00_078e1f4b2964afcce5387420c9c8eaea8.html: Filename too long
error: unable to create file docs/structcutlass_1_1transform_1_1threadblock_1_1RegularTileIterator_3_01Shape___00_01Element___00_0bc37beaa523707a55987f4ffcc372fcd.html: Filename too long
error: unable to create file examples/29_ampere_3xtf32_fast_accurate_tensorop_complex_gemm/29_ampere_3xtf32_fast_accurate_tensorop_complex_gemm.cu: Filename too long
Unable to checkout '319a389f42b776fae5701afcb943fc03be5b5c25' in submodule path 'third_party/flash-attention/csrc/flash_attn/cutlass'
Failed to recurse into submodule path 'third_party/flash-attention'
error: subprocess-exited-with-error
× git submodule update --init --recursive -q did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: git submodule update --init --recursive -q
cwd: C:\Users\Bart\AppData\Local\Temp\pip-install-84fgvshv\xformers_c1553b89adf842ce8ea75d715d4c5386
error: subprocess-exited-with-error
× git submodule update --init --recursive -q did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
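The "Filename too long" errors come from Windows' default 260-character path limit, which the long auto-generated cutlass doc filenames exceed. If building from source is still the goal, one commonly suggested workaround (an assumption, not verified against this repo) is to let Git for Windows use long-path-aware APIs before retrying the install:

```shell
# Work around Windows' 260-character path limit, which breaks the
# cutlass submodule checkout ("Filename too long").
# This makes Git for Windows use long-path-aware filesystem APIs.
git config --global core.longpaths true
```

On some setups, enabling long paths system-wide (the Windows `LongPathsEnabled` registry setting) may also be needed.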
We now have binaries for Windows (download windows-2019.zip from here).
Can you check if you can pip-install the one that corresponds to your pytorch/python/cuda version?
cc @AbdBarho
How would I do this with a pip install command? I'm also a little confused as to which version I need. As you can see, the Python env I'm working in (this happens to be for Stable Diffusion v2) has pulled in a version of PyTorch built for CUDA 11.3 but my system has CUDA v11.7 installed. I don't see a build for Torch 1.12.1 and CUDA v11.7, although there is one for v11.6.
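For reference, the torch/CUDA pairing actually active in an environment can be checked directly; a quick sketch using PyTorch's own version attributes (`torch.version.cuda` reports the CUDA version PyTorch was built against, not the system toolkit):

```python
# Print the PyTorch build details relevant for picking a matching wheel.
import torch

print("torch:", torch.__version__)            # e.g. 1.12.1
print("built for CUDA:", torch.version.cuda)  # e.g. 11.3 (not the system CUDA)
print("CUDA available:", torch.cuda.is_available())
```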
@trzy we are working on setting up wheels for simpler install in the future.
In the meantime, find the wheel closest to your setup from https://github.com/facebookresearch/xformers/actions/runs/3566916718, matching in this order: 1) OS, 2) Python version, 3) torch version, 4) CUDA version.
After downloading the .whl file, install it with pip, e.g.:
pip install ./xformers-ubuntu-22.04-py3.8-torch1.13.0+cu116.whl
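To illustrate that matching order, here is a small sketch that pulls the fields out of a CI wheel name so they can be compared against a local environment (the `parse_wheel_name` helper and its regex are hypothetical, based only on the naming pattern shown above):

```python
import re

# Hypothetical helper: extract os / python / torch / cuda fields from an
# xformers CI wheel name such as
# "xformers-ubuntu-22.04-py3.8-torch1.13.0+cu116.whl".
WHEEL_RE = re.compile(
    r"xformers-(?P<os>[a-z]+-[\d.]+)-py(?P<py>[\d.]+)"
    r"-torch(?P<torch>[\d.]+)\+cu(?P<cuda>\d+)\.whl"
)

def parse_wheel_name(name):
    """Return a dict of wheel fields, or None if the name doesn't match."""
    m = WHEEL_RE.match(name)
    return m.groupdict() if m else None

print(parse_wheel_name("xformers-ubuntu-22.04-py3.8-torch1.13.0+cu116.whl"))
# -> {'os': 'ubuntu-22.04', 'py': '3.8', 'torch': '1.13.0', 'cuda': '116'}
```

Comparing the extracted fields against the local OS, `sys.version_info`, and the torch build then tells you which artifact to download.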
@AbdBarho How can I find the wheel from the action page result? I'm looking at https://github.com/facebookresearch/xformers/actions/runs/3566916718/jobs/5993968756#logs but I cannot find the artifacts anywhere (as per the documentation).
you have to click on "Summary" to have the artefacts ready to download: https://github.com/facebookresearch/xformers/actions/runs/3566916718
I tried installing the appropriate wheel for PyTorch 1.12.1, but it downgraded PyTorch to 1.12.0 and installed the CPU version.
It seems to work after I reinstalled the correct PyTorch version.
I have the same issue trying to install using pip.