
Cannot Install Tinycuda inside nerfstudio

Open theworldisonfire opened this issue 3 years ago • 48 comments

I'm trying to use nerfstudio and I get to the second line in this step:

> pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
> pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

And I am getting this error log:

(nerfstudio) C:\Users\Dylan>pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to c:\users\dylan\appdata\local\temp\pip-req-build-ywxto29a
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ 'C:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a'
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit ea09e160960ee37a067edb4ad65a255705307961
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [36 lines of output]
      Building PyTorch extension for tiny-cuda-nn version 1.6
      Obtained compute capability 86 from PyTorch
      running bdist_wheel
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
      running build
      running build_py
      creating build
      creating build\lib.win-amd64-cpython-38
      creating build\lib.win-amd64-cpython-38\tinycudann
      copying tinycudann\modules.py -> build\lib.win-amd64-cpython-38\tinycudann
      copying tinycudann\__init__.py -> build\lib.win-amd64-cpython-38\tinycudann
      running egg_info
      creating tinycudann.egg-info
      writing tinycudann.egg-info\PKG-INFO
      writing dependency_links to tinycudann.egg-info\dependency_links.txt
      writing top-level names to tinycudann.egg-info\top_level.txt
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      reading manifest file 'tinycudann.egg-info\SOURCES.txt'
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      copying tinycudann\bindings.cpp -> build\lib.win-amd64-cpython-38\tinycudann
      running build_ext
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.8) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
        warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
      building 'tinycudann_bindings_86._C' extension
      creating build\dependencies
      creating build\dependencies\fmt
      creating build\dependencies\fmt\src
      creating build\src
      creating build\temp.win-amd64-cpython-38
      creating build\temp.win-amd64-cpython-38\Release
      creating build\temp.win-amd64-cpython-38\Release\tinycudann
      "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/tools/util/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/fmt/include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Dylan\.conda\envs\nerfstudio\include -IC:\Users\Dylan\.conda\envs\nerfstudio\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
      format.cc
      C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include\cstdlib(12): fatal error C1083: Cannot open include file: 'math.h': No such file or directory
      error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for tinycudann
  Running setup.py clean for tinycudann
Failed to build tinycudann
Installing collected packages: tinycudann
  Running setup.py install for tinycudann ... error
  error: subprocess-exited-with-error

  × Running setup.py install for tinycudann did not run successfully.
  │ exit code: 1
  ╰─> [23 lines of output]
      Building PyTorch extension for tiny-cuda-nn version 1.6
      Obtained compute capability 86 from PyTorch
      running install
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
        warnings.warn(
      running build
      running build_py
      running egg_info
      writing tinycudann.egg-info\PKG-INFO
      writing dependency_links to tinycudann.egg-info\dependency_links.txt
      writing top-level names to tinycudann.egg-info\top_level.txt
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:411: UserWarning: Attempted to use ninja as the BuildExtension backend but we could not find ninja.. Falling back to using the slow distutils backend.
        warnings.warn(msg.format('we could not find ninja.'))
      reading manifest file 'tinycudann.egg-info\SOURCES.txt'
      writing manifest file 'tinycudann.egg-info\SOURCES.txt'
      running build_ext
      C:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py:813: UserWarning: The detected CUDA version (11.8) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
        warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
      building 'tinycudann_bindings_86._C' extension
      "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/cutlass/tools/util/include -IC:\Users\Dylan\AppData\Local\Temp\pip-req-build-ywxto29a/dependencies/fmt/include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\Dylan\.conda\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\Dylan\.conda\envs\nerfstudio\include -IC:\Users\Dylan\.conda\envs\nerfstudio\Include "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\ATLMFC\include" "-IC:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\VS\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
      format.cc
      C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\14.34.31933\include\cstdlib(12): fatal error C1083: Cannot open include file: 'math.h': No such file or directory
      error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.34.31933\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> tinycudann

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
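One thing I notice in the log is the warning that ninja could not be found, so the build falls back to the slow distutils backend. A minimal sketch of making sure ninja gets picked up (run inside the activated nerfstudio env) would be:

:: make the ninja backend available to the extension build
pip install ninja
ninja --version

There is also a warning that the detected CUDA toolkit (11.8) differs from the 11.3 that PyTorch was compiled against, which the log itself says most likely shouldn't be a problem.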

theworldisonfire avatar Nov 29 '22 22:11 theworldisonfire

Same here. I am on an RTX 4090.

Tobe2d avatar Nov 30 '22 21:11 Tobe2d

I've managed to get closer than this error log shows. The only remaining issue was the last error: link.exe (part of VS) failed to launch. The wheel issue is new after some reinstalls and uninstalls.

I don't think it's a CUDA issue; it could be a version issue, though I'm having trouble getting everything to match up inside the nerfstudio environment.

I've tried installing ninja and cmake, updating setuptools, manually updating wheel, and doing full reinstalls of VS 2022, Anaconda, CUDA, Python, and all related tools. I've also changed the environment variables by following help articles, so maybe I messed something up there.

Still no luck, and I'm getting frustrated, so I've set it aside for now until maybe someone can help.
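One thing I still want to rule out: the C1083 math.h failure usually means cl.exe is being run without the Windows SDK include paths in its environment, so running the install from a VS developer prompt might behave differently. A sketch (the vcvars path assumes the VS 2022 Enterprise layout from my log; adjust for your edition):

:: open "x64 Native Tools Command Prompt for VS 2022", or set the environment by hand:
call "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Auxiliary\Build\vcvars64.bat"
conda activate nerfstudio
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch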

theworldisonfire avatar Nov 30 '22 22:11 theworldisonfire

> I've managed to get closer than this error log shows; the only remaining issue was that link.exe (part of VS) failed to launch. [...] Still no luck, so I've set it aside for now until maybe someone can help.

It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. I solved it with the following method, which you can try.

First, check whether "tiny-cuda-nn-master/dependencies/fmt" and "tiny-cuda-nn-master/dependencies/cutlass" are empty.

If they are, go to https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies, download the full contents of the "fmt" (fmt @ b0c8263) and "cutlass" (cutlass @ 1eb6355) folders, and place them in the corresponding folders.

If they aren't empty... good luck.
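If you cloned the repository yourself, an equivalent way to fill those two folders (a sketch, assuming git is available) is to let git fetch the submodules instead of downloading them by hand:

# clone with submodules in one go
git clone --recursive https://github.com/NVlabs/tiny-cuda-nn
# or, inside an existing clone whose dependencies/ folders are empty
git submodule update --init --recursive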

mints7 avatar Dec 03 '22 10:12 mints7

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.

This is the kind of response I was looking for. Thanks for letting me know what worked for you. I will try this as soon as I have a moment.

theworldisonfire avatar Dec 03 '22 21:12 theworldisonfire

I'm struggling to resolve this issue as well; the install fails with "fatal error C1083". Similar situation: running Windows 10, trying to install this package as part of the nerfstudio installation in conda.

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.

I just tried this and unfortunately I still have the same issue: fatal error C1083: Cannot open include file: 'math.h': No such file or directory
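In case it helps narrow this down: math.h comes from the Windows 10 SDK (its ucrt include folder), not from MSVC itself, so a quick sanity check from the same prompt where pip is run (default paths shown; yours may differ) is:

:: check which cl.exe is on PATH and whether any SDK include dirs are set
where cl
echo %INCLUDE%
:: the ucrt headers (math.h among them) normally live under the Windows 10 SDK
dir "C:\Program Files (x86)\Windows Kits\10\Include"

If %INCLUDE% is empty or the SDK folder is missing, the compiler has no way to find math.h.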

thomall avatar Dec 08 '22 14:12 thomall

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.

Thanks for your response, but it still didn't work for me; I got the error below:

Building PyTorch extension for tiny-cuda-nn version 1.7
Obtained compute capability 86 from PyTorch
running install
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/easy_install.py:144: EasyInstallDeprecationWarning: easy_install command is deprecated. Use build and pip and other standards-based tools.
  warnings.warn(
running bdist_egg
running egg_info
writing tinycudann.egg-info/PKG-INFO
writing dependency_links to tinycudann.egg-info/dependency_links.txt
writing top-level names to tinycudann.egg-info/top_level.txt
reading manifest file 'tinycudann.egg-info/SOURCES.txt'
writing manifest file 'tinycudann.egg-info/SOURCES.txt'
installing library code to build/bdist.linux-x86_64/egg
running install_lib
running build_py
running build_ext
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py:813: UserWarning: The detected CUDA version (11.1) has a minor version mismatch with the version that was used to compile PyTorch (11.3). Most likely this shouldn't be a problem.
  warnings.warn(CUDA_MISMATCH_WARN.format(cuda_str_version, torch.version.cuda))
building 'tinycudann_bindings_86._C' extension
Emitting ninja build file /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/3] c++ -MMD -MF /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o.d -pthread -B /home/evsjtu2/miniconda3/envs/nerfstudio/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o -std=c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o
c++ -MMD -MF /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o.d -pthread -B /home/evsjtu2/miniconda3/envs/nerfstudio/compiler_compat -Wno-unused-result -Wsign-compare -DNDEBUG -fwrapv -O2 -Wall -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -O2 -isystem /home/evsjtu2/miniconda3/envs/nerfstudio/include -fPIC -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/build/temp.linux-x86_64-cpython-38/tinycudann/bindings.o -std=c++14 -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
In file included from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/ATen/ATen.h:9:0,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include/torch/all.h:8,
                 from /home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/extension.h:4,
                 from /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:34:
/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/ATen/Context.h:25:67: warning: type attributes ignored after type is already defined [-Wattributes]
 enum class TORCH_API Float32MatmulPrecision {HIGHEST, HIGH, MEDIUM};
                                                                   ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<tcnn::cpp::Context, at::Tensor> Module::fwd(at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:108:35: error: converting to ‘std::tuple<tcnn::cpp::Context, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple<_T1, _T2>::tuple(_U1&&, _U2&&) [with _U1 = tcnn::cpp::Context; _U2 = at::Tensor&; <template-parameter-2-3> = void; _T1 = tcnn::cpp::Context; _T2 = at::Tensor]’
   return { std::move(ctx), output };
                                   ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<at::Tensor, at::Tensor> Module::bwd(const tcnn::cpp::Context&, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:169:34: error: converting to ‘std::tuple<at::Tensor, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple<_T1, _T2>::tuple(_U1&&, _U2&&) [with _U1 = at::Tensor&; _U2 = at::Tensor&; <template-parameter-2-3> = void; _T1 = at::Tensor; _T2 = at::Tensor]’
   return { dL_dinput, dL_dparams };
                                  ^
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp: In member function ‘std::tuple<at::Tensor, at::Tensor, at::Tensor> Module::bwd_bwd_input(const tcnn::cpp::Context&, at::Tensor, at::Tensor, at::Tensor, at::Tensor)’:
/home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/tinycudann/bindings.cpp:240:47: error: converting to ‘std::tuple<at::Tensor, at::Tensor, at::Tensor>’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {at::Tensor&, at::Tensor&, at::Tensor&}; <template-parameter-2-2> = void; _Elements = {at::Tensor, at::Tensor, at::Tensor}]’
   return {dL_ddLdoutput, dL_dparams, dL_dinput};
                                               ^
[2/3] /usr/local/cuda/bin/nvcc  -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/src/cutlass_mlp.cu -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/src/cutlass_mlp.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -Xcompiler=-mf16c -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

^@^@[3/3] /usr/local/cuda/bin/nvcc  -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/cutlass/tools/util/include -I/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/torch/csrc/api/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/TH -I/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/include/THC -I/usr/local/cuda/include -I/home/evsjtu2/miniconda3/envs/nerfstudio/include/python3.8 -c -c /home/evsjtu2/yetianxiang/tiny-cuda-nn/src/fully_fused_mlp.cu -o /home/evsjtu2/yetianxiang/tiny-cuda-nn/bindings/torch/src/fully_fused_mlp.o -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr --compiler-options ''"'"'-fPIC'"'"'' -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -Xcompiler=-mf16c -Xcompiler=-Wno-float-conversion -Xcompiler=-fno-strict-aliasing -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H '-DPYBIND11_COMPILER_TYPE="_gcc"' '-DPYBIND11_STDLIB="_libstdcpp"' '-DPYBIND11_BUILD_ABI="_cxxabi1011"' -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

/home/evsjtu2/yetianxiang/tiny-cuda-nn/dependencies/fmt/include/fmt/core.h(287): warning: unrecognized GCC pragma

ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1808, in _run_ninja_build
    subprocess.run(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/subprocess.py", line 516, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "setup.py", line 127, in <module>
    setup(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/__init__.py", line 87, in setup
    return distutils.core.setup(**attrs)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 185, in setup
    return run_commands(dist)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/core.py", line 201, in run_commands
    dist.run_commands()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py", line 74, in run
    self.do_egg_install()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install.py", line 123, in do_egg_install
    self.run_command('bdist_egg')
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 165, in run
    cmd = self.call_command('install_lib', warn_dir=0)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/bdist_egg.py", line 151, in call_command
    self.run_command(cmdname)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/install_lib.py", line 11, in run
    self.build()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/install_lib.py", line 112, in build
    self.run_command('build_ext')
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/cmd.py", line 318, in run_command
    self.distribution.run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/dist.py", line 1208, in run_command
    super().run_command(command)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/dist.py", line 988, in run_command
    cmd_obj.run()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 84, in run
    _build_ext.run(self)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 346, in run
    self.build_extensions()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 765, in build_extensions
    build_ext.build_extensions(self)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 468, in build_extensions
    self._build_extensions_serial()
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 494, in _build_extensions_serial
    self.build_extension(ext)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/command/build_ext.py", line 246, in build_extension
    _build_ext.build_extension(self, ext)
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/setuptools/_distutils/command/build_ext.py", line 549, in build_extension
    objects = self.compiler.compile(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 586, in unix_wrap_ninja_compile
    _write_ninja_file_and_compile_objects(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1487, in _write_ninja_file_and_compile_objects
    _run_ninja_build(
  File "/home/evsjtu2/miniconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1824, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error compiling objects for extension
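From the messages, the step that actually fails is the plain C++ compile of bindings.cpp, and those "would use explicit constructor" errors on the tuple returns look like an older host g++ rather than a CUDA problem. A sketch of what I plan to check (assuming a newer g++ such as g++-9 can be installed, and that the build picks up CC/CXX):

g++ --version    # old g++ (5.x era) rejects the brace-returned tuples above
# retry the build with a newer host compiler, e.g.:
CC=gcc-9 CXX=g++-9 pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch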

Cerf-Volant425 avatar Jan 02 '23 21:01 Cerf-Volant425

this is just garbage

andzejsp avatar Jan 22 '23 09:01 andzejsp

Has anyone solved this issue? I'm having the same error:

E:\Program Files\CUDA\v11.7\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory

      cpp_api.cu

      ninja: build stopped: subcommand failed.

avrum avatar Mar 12 '23 19:03 avrum

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.

Wow!!! So good!

pkunliu avatar Apr 06 '23 14:04 pkunliu

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.

Works for me! Thanks!

jike5 avatar Apr 14 '23 11:04 jike5

I'm on Ubuntu. The CUDA binaries (specifically nvcc, I think) weren't in my PATH, so I had to add them with PATH=/usr/local/cuda-11/bin:$PATH
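For anyone else hitting that, a sketch of the same workaround (the cuda-11 path is just what my install uses; substitute your own):

export PATH=/usr/local/cuda-11/bin:$PATH                           # so the build can find nvcc
export LD_LIBRARY_PATH=/usr/local/cuda-11/lib64:$LD_LIBRARY_PATH   # the usual companion for runtime linking
nvcc --version                                                     # confirm the toolkit is visible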

dylanhu7 avatar Apr 19 '23 04:04 dylanhu7

> It looks like you're using Windows; I'm on Ubuntu and ran into the same problem. First check whether dependencies/fmt and dependencies/cutlass are empty; if they are, download their contents from https://github.com/NVlabs/tiny-cuda-nn/tree/master/dependencies and place them in the corresponding folders.
>
> Thanks for your response, but it still didn't work for me; I got the same build failure shown in my comment above, ending in "RuntimeError: Error compiling objects for extension".

I have the same problem as you and it has been bothering me. Did you solve it?

LiXinghui-666 avatar May 07 '23 15:05 LiXinghui-666

no, the devs don't want to solve it

andzejsp avatar May 09 '23 10:05 andzejsp

I'm also trying to install it for nerfstudio on Windows 10 with Anaconda. I get the following errors when this command reaches setup.py: pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

here's part of my error:

ninja: build stopped: subcommand failed.
      Traceback (most recent call last):
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
          subprocess.run(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\subprocess.py", line 516, in run
          raise CalledProcessError(retcode, process.args,
      subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

      The above exception was the direct cause of the following exception:

      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\tabat\AppData\Local\Temp\pip-req-build-5hok1m9c\bindings/torch\setup.py", line 174, in <module>
          setup(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\__init__.py", line 87, in setup
          return distutils.core.setup(**attrs)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
          return run_commands(dist)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
          dist.run_commands()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
          self.run_command(cmd)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\wheel\bdist_wheel.py", line 325, in run
          self.run_command("build")
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run
          self.run_command(cmd_name)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
          self.distribution.run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\dist.py", line 1208, in run_command
          super().run_command(command)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
          cmd_obj.run()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\command\build_ext.py", line 84, in run
          _build_ext.run(self)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 346, in run
          self.build_extensions()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 843, in build_extensions
          build_ext.build_extensions(self)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 468, in build_extensions
          self._build_extensions_serial()
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 494, in _build_extensions_serial
          self.build_extension(ext)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\command\build_ext.py", line 246, in build_extension
          _build_ext.build_extension(self, ext)
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\setuptools\_distutils\command\build_ext.py", line 549, in build_extension
          objects = self.compiler.compile(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 815, in win_wrap_ninja_compile
          _write_ninja_file_and_compile_objects(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1574, in _write_ninja_file_and_compile_objects
          _run_ninja_build(
        File "C:\Users\tabat\anaconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1909, in _run_ninja_build
          raise RuntimeError(message) from e
      RuntimeError: Error compiling objects for extension
      [end of output]

I would really appreciate it if someone could help; I've been stuck for more than a week.

smtabatabaie avatar May 29 '23 17:05 smtabatabaie

I found the same problem when I used pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch. This is what I get:

pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple
Collecting git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
  Cloning https://github.com/NVlabs/tiny-cuda-nn/ to /tmp/pip-req-build-rb1bwfp5
  Running command git clone --filter=blob:none --quiet https://github.com/NVlabs/tiny-cuda-nn/ /tmp/pip-req-build-rb1bwfp5
  Resolved https://github.com/NVlabs/tiny-cuda-nn/ to commit a77dc53ed770dd8ea6f78951d5febe175d0045e9
  Running command git submodule update --init --recursive -q
  Preparing metadata (setup.py) ... done
Collecting ninja
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/0f/58/854ce5aab0ff5c33d66e1341b0be42f0330797335011880f7fbd88449996/ninja-1.11.1-py2.py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (145 kB)
Building wheels for collected packages: tinycudann
  Building wheel for tinycudann (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py bdist_wheel did not run successfully.
  │ exit code: 1
  ╰─> [153 lines of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for tinycudann
Running setup.py clean for tinycudann
Failed to build tinycudann
ERROR: Could not build wheels for tinycudann, which is required to install pyproject.toml-based projects

wtj-zhong avatar May 30 '23 07:05 wtj-zhong

Forget about it, the devs don't care; they have a secret unreleased package that they don't share. Waste of time.

andzejsp avatar May 30 '23 07:05 andzejsp

I'll switch from Windows and try my chances with Ubuntu.

smtabatabaie avatar May 30 '23 08:05 smtabatabaie

What I ended up doing - if it helps anyone - is the following. Some personal spec information:

OS: Windows
Graphics card: A6000
Command prompt: Anaconda

In case it's not already installed, make sure you run:

conda install git

(so you can install git repos from command prompts)

I was getting some misleading errors about PATH variables, so I ran: conda install -c conda-forge cudatoolkit-dev

I added the system environment variable TCNN_CUDA_ARCHITECTURES; the value was based on this table: https://developer.nvidia.com/cuda-gpus

(e.g. I have an A6000, so I use a value of 86 - remember to take out the decimal)

After this, ran the command in the install instructions: pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

The build goes through successfully! So to test, I ran the nerfacto example - and it runs! I hope this helps anyone with similar issues.
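
For reference, here is a quick way to look up the value for TCNN_CUDA_ARCHITECTURES without the table (a minimal sketch, assuming a CUDA-enabled PyTorch is already installed in the conda environment):

import torch

# get_device_capability returns (major, minor), e.g. (8, 6) on an A6000
major, minor = torch.cuda.get_device_capability(0)
print(f"TCNN_CUDA_ARCHITECTURES={major}{minor}")  # -> TCNN_CUDA_ARCHITECTURES=86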

lindseysMT avatar May 31 '23 17:05 lindseysMT

@lindseysMT, when you mentioned "Added system environment variable TCNN_CUDA_ARCHITECTURES" - how do I do this? In my case the value is 89, but how do I do that step?

Tobe2d avatar May 31 '23 18:05 Tobe2d

@Tobe2d

  1. Go to your system properties
  2. Go to Environment Variables
  3. Under "System variables" click "New"
  4. Fill it out as in the attached image
  5. Click OK

Should be good to go! (Screenshot of the New System Variable dialog attached.)
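
To confirm the variable is visible (a minimal sketch; it is only picked up by prompts opened after clicking OK), run this from a fresh Anaconda prompt:

import os

# Should print the value entered in the dialog, e.g. "86";
# None means this prompt was opened before the variable was saved.
print(os.environ.get("TCNN_CUDA_ARCHITECTURES"))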

lindseysMT avatar May 31 '23 18:05 lindseysMT

@lindseysMT Thank you so much!

Tobe2d avatar May 31 '23 18:05 Tobe2d

(quoting lindseysMT's solution above)

Mine still fails with the same errors, but I could install and run nerfstudio without problems in Ubuntu

smtabatabaie avatar Jun 04 '23 15:06 smtabatabaie

@smtabatabaie I'm glad it works with Ubuntu - from my PATH errors, it seemed to be looking for a system variable that the cudatoolkit added, which cleared up on my end.

lindseysMT avatar Jun 05 '23 18:06 lindseysMT

I think I solved it after installing the VS 2022 build tools and making sure the VS 2019 build tools were uninstalled.

Some other things I also did which may have contributed:

  • Fixed my PATH variables:
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
    C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
  • Added the TCNN_CUDA_ARCHITECTURES variable and installed cudatoolkit-dev (as mentioned by @Tobe2d)
  • Installed Ninja through conda
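
A minimal sketch for checking that those tools are actually reachable from the prompt used for the install (the printed paths will differ per machine):

import shutil

# None means the tool is not on PATH for this prompt, so the build will
# fall back (ninja) or fail outright (cl / nvcc).
for tool in ("cl", "nvcc", "ninja"):
    print(tool, "->", shutil.which(tool))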

Askejm avatar Jun 11 '23 11:06 Askejm

Ok, I've followed all these extra steps after a failed execution of pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

  1. Uninstalled all the build tools which are not VS 2022.
  2. As per @Tobe2d, I added the environment variable TCNN_CUDA_ARCHITECTURES.
  3. I made sure that ...\v11.8\bin and ...\v11.8\libnvvp were in the PATH variable.
  4. I modified the CUDA_HOME environment variable to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8 (I had a previous CUDA installation).
  5. Reran the pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch command.

And it worked.
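
As a sanity check after changing CUDA_HOME, this minimal sketch prints which CUDA toolkit PyTorch's extension builder will pick up (run it from the same prompt used for the pip install):

import torch
from torch.utils.cpp_extension import CUDA_HOME

print("CUDA_HOME seen by torch:", CUDA_HOME)        # expect ...\CUDA\v11.8
print("CUDA used to build torch:", torch.version.cuda)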

nbourre avatar Jun 15 '23 20:06 nbourre

Sadly, none of the above worked for me. For some reason, the nvcc and cl compilers couldn't access my PATH variable. My fix was to edit setup.py and pass all the necessary includes and libs as compiler flags:

	base_cflags = ["/std:c++14",
	r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\ucrt',
	r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\shared',
	r'-IC:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\um',
	r'-IC:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include',
	]
base_nvcc_flags = [
	r"-I C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include",
	r"-I C:\Program Files (x86)\Windows Kits\10\Include\10.0.20348.0\ucrt",
	r"-I C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\include",
	"-std=c++14",
	"--extended-lambda",
	"--expt-relaxed-constexpr",
	# The following definitions must be undefined
	# since TCNN requires half-precision operation.
	"-U__CUDA_NO_HALF_OPERATORS__",
	"-U__CUDA_NO_HALF_CONVERSIONS__",
	"-U__CUDA_NO_HALF2_OPERATORS__",
]
link_flags = [r'C:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.36.32532\lib\x64',
r'C:\Program Files (x86)\Windows Kits\10\Lib\10.0.20348.0\um\x64',
r'C:\Program Files (x86)\Windows Kits\10\Lib\10.0.20348.0\ucrt\x64',
]
def make_extension(compute_capability):
	nvcc_flags = base_nvcc_flags + [f"-gencode=arch=compute_{compute_capability},code={code}_{compute_capability}" for code in ["compute", "sm"]]
	definitions = base_definitions + [f"-DTCNN_MIN_GPU_ARCH={compute_capability}"]

	if include_networks and compute_capability > 70:
		source_files = base_source_files + ["../../src/fully_fused_mlp.cu"]
	else:
		source_files = base_source_files

	nvcc_flags = nvcc_flags + definitions
	cflags = base_cflags + definitions

	ext = CUDAExtension(
		name=f"tinycudann_bindings._{compute_capability}_C",
		sources=source_files,
		include_dirs=[
			"%s/include" % root_dir,
			"%s/dependencies" % root_dir,
			"%s/dependencies/cutlass/include" % root_dir,
			"%s/dependencies/cutlass/tools/util/include" % root_dir,
			"%s/dependencies/fmt/include" % root_dir,
		],
		extra_compile_args={"cxx": cflags, "nvcc": nvcc_flags},
		libraries=["cuda", ],
		library_dirs=link_flags,
	)
	return ext

This might not be a beautiful fix, and there is probably an easier way to collect these paths, but it worked for me. Note that link_flags was created by me, while cflags and nvcc_flags were only extended. Remember to adjust the paths to your corresponding Visual Studio version.
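
A hypothetical, less hard-coded variant of the same idea (a sketch only, under the assumption that the build is launched from a VS "x64 Native Tools Command Prompt", which exports the MSVC header and library search paths in the INCLUDE and LIB environment variables):

import os

# Split the Developer-Command-Prompt-provided search paths into lists.
msvc_includes = [p for p in os.environ.get("INCLUDE", "").split(os.pathsep) if p]
msvc_libdirs = [p for p in os.environ.get("LIB", "").split(os.pathsep) if p]

base_cflags = ["/std:c++14"] + [f"-I{p}" for p in msvc_includes]
base_nvcc_flags = [f"-I{p}" for p in msvc_includes] + [
    "-std=c++14",
    "--extended-lambda",
    "--expt-relaxed-constexpr",
    "-U__CUDA_NO_HALF_OPERATORS__",
    "-U__CUDA_NO_HALF_CONVERSIONS__",
    "-U__CUDA_NO_HALF2_OPERATORS__",
]
link_flags = msvc_libdirs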

acecross avatar Jul 01 '23 11:07 acecross

I solved this by:

  1. Completely uninstall VS 2022 and CUDA.
  2. Install VS 2019.
  3. Install CUDA. Then install tinycuda.

Turmac avatar Jul 08 '23 06:07 Turmac

It's ridiculous that you pretty much have to nuke your system just to use this. Couldn't they have built this in a virtual env, maybe conda, and built the binaries in the env?

It saddens me that even though this much time has passed, people still struggle with this.

andzejsp avatar Jul 08 '23 08:07 andzejsp

I've also stumbled on the same issue: the build can't seem to find <cassert> and <crtdefs.h>.

I did follow the advice from above:

  • I've updated to the latest version of VS2022
  • I've (re)installed CUDA 11.8 after installing VS2022
  • I've added the latest version of the MSVC build tools to the PATH environment variable:
 where cl
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64\cl.exe

These are the errors I experience using pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

...
  C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\c10/macros/Macros.h(3): fatal error C1083: Cannot open include file: 'cassert': No such file or directory
  C:\Users\george.profenza\AppData\Local\Temp\pip-req-build-p9s3io0c/dependencies/fmt/include\fmt/os.h(11): fatal error C1083: Cannot open include file: 'cerrno': No such file or directory
  C:\Users\george.profenza\AppData\Local\Temp\pip-req-build-p9s3io0c/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
  C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory
  ninja: build stopped: subcommand failed.
  Traceback (most recent call last):
    File "C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\utils\cpp_extension.py", line 1893, in _run_ninja_build
      subprocess.run(
    File "C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\subprocess.py", line 516, in run
      raise CalledProcessError(retcode, process.args,
  subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

This is a conda environment with CUDA 11.8 and PyTorch 2.0.1+cu118 installed.

I've also tried cloning the repo recursively to ensure I'm using the right commits (as per @mints7's advice):

  • /tiny-cuda-nn/dependencies/cutlass ((1eb63551...))
  • tiny-cuda-nn/dependencies/fmt ((b0c8263c...))

These match the latest commit from the main branch, so unsurprisingly I'm seeing the same errors:

...
C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\c10/macros/Macros.h(3): fatal error C1083: Cannot open include file: 'cassert': No such file or directory
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/os.h(11): fatal error C1083: Cannot open include file: 'cerrno': No such file or directory
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include\crt/host_config.h(231): fatal error C1083: Cannot open include file: 'crtdefs.h': No such file or directory
encoding.cu

I did double-check that I have the "ingredients":

  • PATH includes the latest version of the MSVC build tools
  • the MSVC build tools contain both of the headers that the pip install can't find
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\libnvvp
C:\Windows\System32
C:\Windows
C:\Windows\System32\wbem
C:\Windows\System32\OpenSSH
C:\Program Files\NVIDIA Corporation\Nsight Compute 2022.3.0\
C:\Program Files\dotnet\
C:\Users\george.profenza\AppData\Local\Programs\Microsoft VS Code\bin
C:\Users\george.profenza\.pyenv\pyenv-win\bin
C:\Program Files\Git\bin
C:\Users\george.profenza\AppData\Local\GitHubDesktop\bin
C:\Users\george.profenza\AppData\Local\Microsoft\WindowsApps
C:\Program Files\ImageMagick-7.1.0-Q16-HDRI
C:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\condabin
C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64
C:\Program Files\CMake\bin
C:\COLMAP-3.8-windows-cuda
C:\Users\george.profenza\.dotnet\tools

(Screenshots: Everything search results showing cassert and crtdefs.h present on disk.)
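
A minimal diagnostic sketch related to the C1083 errors above (an assumption on my part: cl.exe finds standard headers such as <cassert> and <crtdefs.h> through the INCLUDE environment variable, which a VS Developer Command Prompt normally populates):

import os

include = os.environ.get("INCLUDE", "")
if include:
    # List every directory cl.exe will search for standard headers.
    print("\n".join(include.split(os.pathsep)))
else:
    print("INCLUDE is empty - cl.exe has no standard header search path in this prompt")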

@Tom94 I can imagine you and your team must be super busy, and I appreciate you sharing all this wonderful code with ready-to-go samples. However, I could use a few hints/tips/RTFM links/etc. to get over this hump. Any hints on what I might be missing? (Tweaking the scripts to add explicit paths to the headers feels hacky, and I thought I'd double-check.)

Thank you so much, George

GeorgeProfenzaD3 avatar Sep 03 '23 17:09 GeorgeProfenzaD3

@Askejm Can you please elaborate on your suggestion above? (Maybe my outputs are too verbose? 🤷 😅) Perhaps I'm missing something? I've tried your suggestions (the CUDA 11.8 bin and libnvvp folders are added to PATH, and TCNN_CUDA_ARCHITECTURES is set (to 86 in my case) in the system environment variables).

I found this comment: disabling ninja gets me to the algorithm errors others experienced:

python setup.py build
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Wed_Sep_21_10:41:10_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.8, V11.8.89
Build cuda_11.8.r11.8/compiler.31833905_0
Building PyTorch extension for tiny-cuda-nn version 1.7
Obtained compute capabilities [86] from environment variable TCNN_CUDA_ARCHITECTURES
Detected CUDA version 11.8
Targeting C++ standard 17
running build
running build_py
running egg_info
writing tinycudann.egg-info\PKG-INFO
writing dependency_links to tinycudann.egg-info\dependency_links.txt
writing top-level names to tinycudann.egg-info\top_level.txt
reading manifest file 'tinycudann.egg-info\SOURCES.txt'
writing manifest file 'tinycudann.egg-info\SOURCES.txt'
running build_ext
building 'tinycudann_bindings._86_C' extension
"C:\Program Files\Microsoft Visual Studio\2022\Professional\VC\Tools\MSVC\14.37.32822\bin\Hostx64\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -IC:\tiny-cuda-nn/include -IC:\tiny-cuda-nn/dependencies -IC:\tiny-cuda-nn/dependencies/cutlass/include -IC:\tiny-cuda-nn/dependencies/cutlass/tools/util/include -IC:\tiny-cuda-nn/dependencies/fmt/include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\TH -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\include" -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\include -IC:\Users\george.profenza\.pyenv\pyenv-win\versions\miniconda3\envs\nerfstudio\Include /EHsc /Tp../../dependencies/fmt/src/format.cc /Fobuild\temp.win-amd64-cpython-38\Release\../../dependencies/fmt/src/format.obj /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc /std:c++17 -DTCNN_PARAMS_UNALIGNED -DTCNN_MIN_GPU_ARCH=86 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_86_C -D_GLIBCXX_USE_CXX11_ABI=0
format.cc
C:\tiny-cuda-nn/dependencies/fmt/include\fmt/format-inl.h(11): fatal error C1083: Cannot open include file: 'algorithm': No such file or directory
setup.py:5: DeprecationWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html
  from pkg_resources import parse_version
error: command 'C:\\Program Files\\Microsoft Visual Studio\\2022\\Professional\\VC\\Tools\\MSVC\\14.37.32822\\bin\\Hostx64\\x64\\cl.exe' failed with exit code 2
(nerfstudio)

(It's unclear to me which 'algorithm' this refers to (std / boost / absl / etc.).)

GeorgeProfenzaD3 avatar Sep 03 '23 20:09 GeorgeProfenzaD3