functorch
Cannot build on Jetson platform
Hi, I am trying to compile functorch from source on a Jetson platform, specifically JetPack 5.0.2 on an NVIDIA Xavier AGX. I installed PyTorch by following the instructions at https://forums.developer.nvidia.com/t/pytorch-for-jetson/72048, using the pip wheels method.
Some information about the system:
- Jetson Linux 35.1 (Based on Ubuntu 20.04)
- Python 3.8.10
- CUDA 11.4
- PyTorch 1.12.0 (verified with the snippet below)
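The version numbers above can be double-checked from Python; a minimal sketch (the git_version line shows which PyTorch commit the NVIDIA wheel was built from, which matters because functorch 0.2.0 compiles against the headers of the installed torch package):

import torch

print(torch.__version__)          # expected: 1.12.0
print(torch.version.cuda)         # expected: 11.4
print(torch.version.git_version)  # commit hash the installed wheel was built from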
Then I run the following command:
pip3 install --user "git+https://github.com/pytorch/functorch.git@v0.2.0"
Here is the output from the terminal.
Collecting git+https://github.com/pytorch/functorch.git@v0.2.0
Cloning https://github.com/pytorch/functorch.git (to revision v0.2.0) to /tmp/pip-req-build-ccny2r8e
Running command git clone -q https://github.com/pytorch/functorch.git /tmp/pip-req-build-ccny2r8e
Running command git checkout -q 7f23d3dd67ebbb02acc268c3381fd5815a395808
Collecting torch<1.13,>=1.12
Using cached torch-1.12.1-cp38-cp38-manylinux2014_aarch64.whl (55.7 MB)
Requirement already satisfied: typing-extensions in /external/usr/local/lib/python3.8/dist-packages (from torch<1.13,>=1.12->functorch==0.2.0) (4.4.0)
Building wheels for collected packages: functorch
Building wheel for functorch (setup.py) ... error
ERROR: Command errored out with exit status 1:
command: /usr/bin/python3 -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-req-build-ccny2r8e/setup.py'"'"'; __file__='"'"'/tmp/pip-req-build-ccny2r8e/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' bdist_wheel -d /tmp/pip-wheel-9pttj430
cwd: /tmp/pip-req-build-ccny2r8e/
Complete output (115 lines):
Building wheel functorch-0.2.0
running bdist_wheel
/usr/local/lib/python3.8/dist-packages/torch/utils/cpp_extension.py:387: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
warnings.warn(msg.format('we could not find ninja.'))
running build
running build_py
creating build
creating build/lib.linux-aarch64-3.8
creating build/lib.linux-aarch64-3.8/functorch
copying functorch/version.py -> build/lib.linux-aarch64-3.8/functorch
copying functorch/__init__.py -> build/lib.linux-aarch64-3.8/functorch
creating build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/make_functional.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/top_operators_github_usage.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/partitioners.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/compilers.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/decompositions.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/__init__.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/pytree_hacks.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/python_key.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/custom_function.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/aot_autograd.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/named_members_polyfill.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/fx_minifier.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/eager_transforms.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/vmap.py -> build/lib.linux-aarch64-3.8/functorch/_src
copying functorch/_src/monkey_patching.py -> build/lib.linux-aarch64-3.8/functorch/_src
creating build/lib.linux-aarch64-3.8/functorch/experimental
copying functorch/experimental/__init__.py -> build/lib.linux-aarch64-3.8/functorch/experimental
copying functorch/experimental/batch_norm_replacement.py -> build/lib.linux-aarch64-3.8/functorch/experimental
creating build/lib.linux-aarch64-3.8/functorch/compile
copying functorch/compile/__init__.py -> build/lib.linux-aarch64-3.8/functorch/compile
running build_ext
building 'functorch._C' extension
creating build/temp.linux-aarch64-3.8
creating build/temp.linux-aarch64-3.8/tmp
creating build/temp.linux-aarch64-3.8/tmp/pip-req-build-ccny2r8e
creating build/temp.linux-aarch64-3.8/tmp/pip-req-build-ccny2r8e/functorch
creating build/temp.linux-aarch64-3.8/tmp/pip-req-build-ccny2r8e/functorch/csrc
aarch64-linux-gnu-gcc -pthread -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O2 -Wall -g -fstack-protector-strong -Wformat -Werror=format-security -g -fwrapv -O2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -DFUNCTORCH_BUILD_MAIN_LIB -I/tmp/pip-req-build-ccny2r8e -I/usr/local/lib/python3.8/dist-packages/torch/include -I/usr/local/lib/python3.8/dist-packages/torch/include/torch/csrc/api/include -I/usr/local/lib/python3.8/dist-packages/torch/include/TH -I/usr/local/lib/python3.8/dist-packages/torch/include/THC -I/usr/include/python3.8 -c /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.cpp -o build/temp.linux-aarch64-3.8/tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.o -O3 -std=c++14 -fdiagnostics-color=always -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1013" -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=1
In file included from /tmp/pip-req-build-ccny2r8e/functorch/csrc/PlumbingHelper.h:8,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchingMetaprogramming.h:9,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesHelper.h:14,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.cpp:7:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchedTensorImpl.h:70:15: error: ‘c10::IntArrayRef at::functorch::BatchedTensorImpl::strides_custom() const’ marked ‘override’, but does not override
70 | IntArrayRef strides_custom() const override;
| ^~~~~~~~~~~~~~
In file included from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchingMetaprogramming.h:9,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesHelper.h:14,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.cpp:7:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h: In function ‘std::tuple<at::Tensor, at::Tensor> at::functorch::grid_sampler_3d_backward_generated_plumbing(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, bool, std::array<bool, 2>)’:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:1320:139: error: no matching function for call to ‘at::_ops::grid_sampler_3d_backward::call(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t&, int64_t&, bool&, std::array<bool, 2>&)’
1320 | return at::_ops::grid_sampler_3d_backward::call(grad_output, input, grid, interpolation_mode, padding_mode, align_corners, output_mask);
| ^
In file included from /usr/local/lib/python3.8/dist-packages/torch/include/ATen/ops/grid_sampler_3d_backward.h:20,
from /usr/local/lib/python3.8/dist-packages/torch/include/ATen/Functions.h:612,
from /usr/local/lib/python3.8/dist-packages/torch/include/ATen/ATen.h:15,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesHelper.h:8,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.cpp:7:
/usr/local/lib/python3.8/dist-packages/torch/include/ATen/ops/grid_sampler_3d_backward_ops.h:49:46: note: candidate: ‘static std::tuple<at::Tensor, at::Tensor> at::_ops::grid_sampler_3d_backward::call(const at::Tensor&, const at::Tensor&, const at::Tensor&, int64_t, int64_t, bool)’
49 | static ::std::tuple<at::Tensor,at::Tensor> call(const at::Tensor & grad_output, const at::Tensor & input, const at::Tensor & grid, int64_t interpolation_mode, int64_t padding_mode, bool align_corners);
| ^~~~
/usr/local/lib/python3.8/dist-packages/torch/include/ATen/ops/grid_sampler_3d_backward_ops.h:49:46: note: candidate expects 6 arguments, 7 provided
In file included from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchingMetaprogramming.h:9,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesHelper.h:14,
from /tmp/pip-req-build-ccny2r8e/functorch/csrc/BatchRulesScatterOps.cpp:7:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h: At global scope:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4535:83: error: ‘at::OptionalIntArrayRef’ has not been declared
4535 | at::Tensor upsample_linear1d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4550:98: error: ‘at::OptionalIntArrayRef’ has not been declared
4550 | at::Tensor upsample_linear1d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4565:85: error: ‘at::OptionalIntArrayRef’ has not been declared
4565 | at::Tensor upsample_bilinear2d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4580:100: error: ‘at::OptionalIntArrayRef’ has not been declared
4580 | at::Tensor upsample_bilinear2d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4595:86: error: ‘at::OptionalIntArrayRef’ has not been declared
4595 | at::Tensor upsample_trilinear3d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4610:101: error: ‘at::OptionalIntArrayRef’ has not been declared
4610 | at::Tensor upsample_trilinear3d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4625:84: error: ‘at::OptionalIntArrayRef’ has not been declared
4625 | at::Tensor upsample_bicubic2d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4640:99: error: ‘at::OptionalIntArrayRef’ has not been declared
4640 | at::Tensor upsample_bicubic2d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, bool align_corners, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4655:84: error: ‘at::OptionalIntArrayRef’ has not been declared
4655 | at::Tensor upsample_nearest1d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4670:99: error: ‘at::OptionalIntArrayRef’ has not been declared
4670 | at::Tensor upsample_nearest1d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4685:84: error: ‘at::OptionalIntArrayRef’ has not been declared
4685 | at::Tensor upsample_nearest2d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4700:99: error: ‘at::OptionalIntArrayRef’ has not been declared
4700 | at::Tensor upsample_nearest2d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4715:84: error: ‘at::OptionalIntArrayRef’ has not been declared
4715 | at::Tensor upsample_nearest3d_vec_generated_plumbing(const at::Tensor & input, at::OptionalIntArrayRef output_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:4730:99: error: ‘at::OptionalIntArrayRef’ has not been declared
4730 | at::Tensor upsample_nearest3d_backward_vec_generated_plumbing(const at::Tensor & grad_output, at::OptionalIntArrayRef output_size, at::IntArrayRef input_size, c10::optional<at::ArrayRef<double>> scale_factors) {
| ^~~~~~~~~~~~~~~~~~~
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h: In function ‘at::Tensor at::functorch::expand_copy_generated_plumbing(const at::Tensor&, c10::IntArrayRef, bool)’:
/tmp/pip-req-build-ccny2r8e/functorch/csrc/VmapGeneratedPlumbing.h:5414:22: error: ‘at::_ops::expand_copy’ has not been declared
5414 | return at::_ops::expand_copy::call(self, size, implicit);
| ^~~~~~~~~~~
error: command 'aarch64-linux-gnu-gcc' failed with exit status 1
----------------------------------------
ERROR: Failed building wheel for functorch
Running setup.py clean for functorch
Failed to build functorch
It looks like many ATen (at::) operations are not found. Any help is appreciated. Thank you!
Hey @huyaoyu,
Sorry for the delayed reply. functorch 0.2.0 is an older version of functorch and does not necessarily support different platforms. Are you able to get PyTorch 1.13 on Jetson? That would be the easier way to use functorch on Jetson.
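Starting with PyTorch 1.13, functorch ships inside the torch package, so no separate source build is needed. A minimal sketch of what should work once a 1.13 wheel is available for the Jetson (assuming such a wheel exists for your JetPack release):

import torch
import functorch  # bundled with torch >= 1.13; no separate install required

x = torch.randn(3, 4)
# vmap vectorizes torch.sum over dim 0, producing one sum per row of x
print(functorch.vmap(torch.sum)(x))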
Thank you @zou3519. I did not find a solution with PyTorch 1.12 on the Jetson. It seems that functorch will be integrated into PyTorch starting from version 1.13, so I shall wait until NVIDIA releases its customized build of PyTorch 1.13 for Jetson.