Build wheels for Python 3.9 and 3.10
🚀 Feature
Build wheels for Python 3.9 and 3.10.
Motivation
It's 2022, and people use newer versions of Python. PyTorch has been supporting Python 3.9 for a while, and the latest stable release supports Python 3.10.
Pitch
I would love to see wheels like *-cp39-cp39-linux_x86_64.whl and *-cp310-cp310-linux_x86_64.whl.
Alternatives
Ask people to build from source, which is inconvenient at best and painful at worst.
Additional context
We can look into it; this is the first request we have received for 3.9. The 3.7 wheel is for Colab, and the 3.8 wheel is for TPU VM. I am guessing you want 3.9 because you want to use it for XLA:GPU?
Not exactly. This is not a hard requirement for me, and I asked simply because Anaconda defaults to 3.9. However, one day someone will cry for a cp39 wheel because their favorite package only supports 3.9+.
By the way, I am not exactly the first one to request a Python 3.9 wheel: https://github.com/pytorch/xla/issues/2927
Oh Ok I forgot about that one.
I did a quick test on the Python 3.9 wheel, and it failed with:
Step #0: clang-8 -Wno-unused-result -Wsign-compare -DNDEBUG -O2 -Wall -fPIC -O2 -isystem /root/anaconda3/envs/pytorch/include -fno-semantic-interposition -fPIC -O2 -isystem /root/anaconda3/envs/pytorch/include -fno-semantic-interposition -D_GLIBCXX_USE_CXX11_ABI=1 -fPIC -I/root/anaconda3/envs/pytorch/include/python3.9 -c torch/csrc/stub.c -o build/temp.linux-x86_64-3.9/torch/csrc/stub.o -Wall -Wextra -Wno-strict-overflow -Wno-unused-parameter -Wno-missing-field-initializers -Wno-write-strings -Wno-unknown-pragmas -Wno-deprecated-declarations -fno-strict-aliasing -Wno-missing-braces
Step #0: clang: error: unknown argument: '-fno-semantic-interposition'
Step #0: clang: error: unknown argument: '-fno-semantic-interposition'
Step #0: error: command '/usr/bin/clang-8' failed with exit code 1
Step #0: The command '/bin/sh -c cd /pytorch && bash xla/scripts/build_torch_wheels.sh ${python_version} ${release_version}' returned a non-zero code: 1
Finished Step #0
Something to do with clang-8; I need to look into it a bit more.
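For what it's worth, the rejected flag is most likely replayed from the conda interpreter's own recorded build flags (CPython 3.9 optimized builds compiled with GCC record `-fno-semantic-interposition` in `sysconfig`'s `CFLAGS`, and older clang releases don't recognize it). A minimal sketch of one possible workaround, assuming the flag can simply be dropped for clang: filter it out of the flags string before handing it to the build.

```python
import sysconfig


def strip_unsupported(flags, bad=("-fno-semantic-interposition",)):
    """Remove compiler flags the active compiler does not understand."""
    return " ".join(f for f in flags.split() if f not in bad)


# Example: clean the CFLAGS recorded by the interpreter's own build,
# e.g. before exporting CFLAGS for a setup.py invocation under clang-8.
recorded = sysconfig.get_config_var("CFLAGS") or ""
cleaned = strip_unsupported(recorded)
```

This is only a sketch of the idea, not the project's actual fix; another route is simply using a clang release that accepts the flag.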
I don't think this test is indicative of anything. Yes, that test passes with clang-10, but the same bug seen by @nalzok occurs with clang-10 as well. See below:
cmake /home/ubuntu/pytorch/xla/test/cpp -DCMAKE_BUILD_TYPE=Release -DPYTHON_INCLUDE_DIR=/home/ubuntu/anaconda3/include/python3.9 -DPYTHON_LIBRARY=/home/ubuntu/anaconda3/lib/libpython3.9.a
Selected PT/XLA library folder /home/ubuntu/pytorch/xla/build/lib.linux-x86_64-cpython-39
-- The C compiler identification is GNU 9.4.0
-- The CXX compiler identification is Clang 10.0.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/clang++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found PythonLibs: /home/ubuntu/anaconda3/lib/libpython3.9.a (found version "3.9.13")
-- Found CUDA: /usr/local/cuda (found version "12.0")
-- The CUDA compiler identification is NVIDIA 12.0.140
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Caffe2: CUDA detected: 12.0
-- Caffe2: CUDA nvcc is: /usr/local/cuda/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda
-- Caffe2: Header version is: 12.0
-- /usr/local/cuda/lib64/libnvrtc.so shorthash is d7c32a86
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- Autodetected CUDA architecture(s): 8.0 8.0 8.0 8.0 8.0 8.0 8.0 8.0
-- Added CUDA NVCC flags for: -gencode;arch=compute_80,code=sm_80
-- MKL_ARCH: None, set to ` intel64` by default
-- MKL_ROOT /home/ubuntu/anaconda3
-- MKL_LINK: None, set to ` dynamic` by default
-- MKL_INTERFACE_FULL: None, set to ` intel_ilp64` by default
-- MKL_THREADING: None, set to ` intel_thread` by default
-- MKL_MPI: None, set to ` intelmpi` by default
-- Found Torch: /home/ubuntu/pytorch/torch/lib/libtorch.so
Selected XLAC library /home/ubuntu/pytorch/xla/build/lib.linux-x86_64-cpython-39/_XLAC.cpython-39-x86_64-linux-gnu.so
-- Configuring done
-- Generating done
-- Build files have been written to: /home/ubuntu/pytorch/xla/test/cpp/build
make -j
[ 4%] Creating directories for 'googletest'
[ 8%] Performing download step (git clone) for 'googletest'
-- googletest download command succeeded. See also /home/ubuntu/pytorch/xla/test/cpp/build/gtest/src/googletest-stamp/googletest-download-*.log
[ 12%] Performing update step for 'googletest'
[ 16%] No patch step for 'googletest'
[ 20%] Performing configure step for 'googletest'
-- googletest configure command succeeded. See also /home/ubuntu/pytorch/xla/test/cpp/build/gtest/src/googletest-stamp/googletest-configure-*.log
[ 24%] Performing build step for 'googletest'
-- googletest build command succeeded. See also /home/ubuntu/pytorch/xla/test/cpp/build/gtest/src/googletest-stamp/googletest-build-*.log
[ 28%] No install step for 'googletest'
[ 32%] Completed 'googletest'
[ 32%] Built target googletest
make[2]: *** No rule to make target '/home/ubuntu/anaconda3/lib/libpython3.9.a', needed by 'test_ptxla'. Stop.
make[2]: *** Waiting for unfinished jobs....
[ 36%] Building CXX object CMakeFiles/test_ptxla.dir/main.cpp.o
[ 40%] Building CXX object CMakeFiles/test_ptxla.dir/cpp_test_util.cpp.o
[ 44%] Building CXX object CMakeFiles/test_ptxla.dir/metrics_snapshot.cpp.o
[ 48%] Building CXX object CMakeFiles/test_ptxla.dir/test_aten_xla_tensor.cpp.o
[ 52%] Building CXX object CMakeFiles/test_ptxla.dir/test_async_task.cpp.o
[ 56%] Building CXX object CMakeFiles/test_ptxla.dir/test_ir.cpp.o
[ 60%] Building CXX object CMakeFiles/test_ptxla.dir/test_op_by_op_executor.cpp.o
[ 64%] Building CXX object CMakeFiles/test_ptxla.dir/test_replication.cpp.o
[ 72%] Building CXX object CMakeFiles/test_ptxla.dir/test_tensor.cpp.o
[ 72%] Building CXX object CMakeFiles/test_ptxla.dir/test_mayberef.cpp.o
[ 76%] Building CXX object CMakeFiles/test_ptxla.dir/test_xla_util_cache.cpp.o
[ 80%] Building CXX object CMakeFiles/test_ptxla.dir/torch_xla_test.cpp.o
[ 88%] Building CXX object CMakeFiles/test_ptxla.dir/test_xla_backend_intf.cpp.o
[ 88%] Building CXX object CMakeFiles/test_ptxla.dir/test_symint.cpp.o
[ 92%] Building CXX object CMakeFiles/test_ptxla.dir/test_xla_sharding.cpp.o
[ 96%] Building CXX object CMakeFiles/test_ptxla.dir/test_lazy.cpp.o
make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/test_ptxla.dir/all] Error 2
make: *** [Makefile:91: all] Error 2
Failed to build tests: ['/home/ubuntu/pytorch/xla/test/cpp/run_tests.sh', '-B']
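The `No rule to make target '/home/ubuntu/anaconda3/lib/libpython3.9.a'` failure suggests the CMake invocation points at a static libpython that this Anaconda environment does not actually ship. A rough sketch (assuming CMake can be pointed at whatever library file the interpreter itself reports, typically the shared `libpython3.9.so` under conda) of deriving the right paths from `sysconfig` instead of hard-coding them:

```python
import os
import sysconfig


def python_library_path():
    """Best-effort guess at the libpython file for -DPYTHON_LIBRARY=...,
    built from the running interpreter's own sysconfig data."""
    libdir = sysconfig.get_config_var("LIBDIR") or ""
    ldlib = sysconfig.get_config_var("LDLIBRARY") or ""
    return os.path.join(libdir, ldlib)


def cmake_python_flags():
    """CMake cache flags matching the interpreter used to run this script."""
    return [
        f"-DPYTHON_INCLUDE_DIR={sysconfig.get_path('include')}",
        f"-DPYTHON_LIBRARY={python_library_path()}",
    ]
```

Whether the test binary links cleanly against the shared library is an assumption here; the point is only that the `.a` path in the original command does not exist.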
Bumping this! I see there are 3.9 Docker images published at https://gcr.io/tpu-pytorch/xla, so having wheels for them would be a nice next step.
If there is a Docker image for 3.9, there must be a wheel for 3.9. I think it is https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp39-cp39-linux_x86_64.whl
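As a quick sanity check before downloading, the `cp39-cp39` part of that filename is the CPython version tag, so you can compare it against your interpreter. A toy sketch (pip does this properly via full tag matching; this only compares the Python tag, and `wheel_matches_interpreter` is a hypothetical helper, not a real API):

```python
import sys


def wheel_matches_interpreter(wheel_name):
    """Check whether a wheel's CPython tag (e.g. 'cp39') matches the
    running interpreter. Wheel filenames follow the pattern
    name-version-pythontag-abitag-platformtag.whl."""
    py_tag = wheel_name[: -len(".whl")].split("-")[-3]
    current = f"cp{sys.version_info.major}{sys.version_info.minor}"
    return py_tag == current


# Usage: only True when run under CPython 3.9.
compatible = wheel_matches_interpreter(
    "torch_xla-2.0-cp39-cp39-linux_x86_64.whl"
)
```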
Does a compiled torch-xla 2.2 package for Python 3.10 exist for Windows? There is one installable via pip from PyPI, but it is only version 1.0.
I don't think we ever built a Windows wheel; that 1.0 is a placeholder, I believe.