
Error while installing python extension

bacTlink opened this issue on Aug 01 '22

Windows 10; Visual Studio 2019, version 16.11.17; Anaconda 3, Python 3.9.12; CUDA 11.6; torch 1.12.0+cu116; CMake 3.22.0-rc2; GPU: RTX 6000

tiny-cuda-nn itself compiled normally, but the PyTorch extension failed to build.
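
For context, the standalone library and the PyTorch extension are built separately. A minimal sketch of the two build paths, following the repository README (exact flags may vary per setup):

# standalone library (this step succeeded)
cmake . -B build -DCMAKE_BUILD_TYPE=RelWithDebInfo
cmake --build build --config RelWithDebInfo -j
# PyTorch extension (this step failed)
cd bindings/torch
python setup.py install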

When running python setup.py install, the following error was reported in format.h:

Building PyTorch extension for tiny-cuda-nn version 1.6
Targeting compute capability 75
running install
running bdist_egg
running egg_info
writing tinycudann.egg-info\PKG-INFO
writing dependency_links to tinycudann.egg-info\dependency_links.txt
writing top-level names to tinycudann.egg-info\top_level.txt
reading manifest file 'tinycudann.egg-info\SOURCES.txt'
writing manifest file 'tinycudann.egg-info\SOURCES.txt'
installing library code to build\bdist.win-amd64\egg
running install_lib
running build_py
running build_ext
building 'tinycudann_bindings._C' extension
Emitting ninja build file E:\Downloads\tiny-cuda-nn\bindings\torch\build\temp.win-amd64-3.9\Release\build.ninja...
Compiling objects...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/7] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc --generate-dependencies-with-compile --dependency-output E:\Downloads\tiny-cuda-nn\bindings\torch\build\src/common.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IE:\Downloads\tiny-cuda-nn/include -IE:\Downloads\tiny-cuda-nn/dependencies -IE:\Downloads\tiny-cuda-nn/dependencies/cutlass/include -IE:\Downloads\tiny-cuda-nn/dependencies/cutlass/tools/util/include -IE:\Downloads\tiny-cuda-nn/dependencies/fmt/include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\torch\csrc\api\include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\TH -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" -IE:\Anaconda3\envs\dmodel\include -IE:\Anaconda3\envs\dmodel\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c E:\Downloads\tiny-cuda-nn\src\common.cu -o E:\Downloads\tiny-cuda-nn\bindings\torch\build\src/common.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -DTCNN_MIN_GPU_ARCH=75 -DFMT_HEADER_ONLY=1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
FAILED: E:/Downloads/tiny-cuda-nn/bindings/torch/build/src/common.obj 
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc --generate-dependencies-with-compile --dependency-output E:\Downloads\tiny-cuda-nn\bindings\torch\build\src/common.obj.d --use-local-env -Xcompiler /MD -Xcompiler /wd4819 -Xcompiler /wd4251 -Xcompiler /wd4244 -Xcompiler /wd4267 -Xcompiler /wd4275 -Xcompiler /wd4018 -Xcompiler /wd4190 -Xcompiler /EHsc -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -IE:\Downloads\tiny-cuda-nn/include -IE:\Downloads\tiny-cuda-nn/dependencies -IE:\Downloads\tiny-cuda-nn/dependencies/cutlass/include -IE:\Downloads\tiny-cuda-nn/dependencies/cutlass/tools/util/include -IE:\Downloads\tiny-cuda-nn/dependencies/fmt/include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\torch\csrc\api\include -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\TH -IE:\Anaconda3\envs\dmodel\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" -IE:\Anaconda3\envs\dmodel\include -IE:\Anaconda3\envs\dmodel\Include "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" -c E:\Downloads\tiny-cuda-nn\src\common.cu -o E:\Downloads\tiny-cuda-nn\bindings\torch\build\src/common.obj -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -std=c++14 --extended-lambda --expt-relaxed-constexpr -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 -DTCNN_MIN_GPU_ARCH=75 -DFMT_HEADER_ONLY=1 -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=_C -D_GLIBCXX_USE_CXX11_ABI=0
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF_OPERATORS__' with '/U__CUDA_NO_HALF_OPERATORS__'
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF_CONVERSIONS__' with '/U__CUDA_NO_HALF_CONVERSIONS__'
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF2_OPERATORS__' with '/U__CUDA_NO_HALF2_OPERATORS__'
common.cu
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF_OPERATORS__' with '/U__CUDA_NO_HALF_OPERATORS__'
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF_CONVERSIONS__' with '/U__CUDA_NO_HALF_CONVERSIONS__'
cl : Command line warning D9025 : overriding '/D__CUDA_NO_HALF2_OPERATORS__' with '/U__CUDA_NO_HALF2_OPERATORS__'
common.cu
E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format.h(2478): error: too many recursive substitutions of function template signatures
          detected during:
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            [ 397 instantiation contexts not shown ]
            instantiation of "auto fmt::v9::detail::write(OutputIt, T, fmt::v9::basic_format_specs<Char>, fmt::v9::detail::locale_ref)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3217): here
            instantiation of "auto fmt::v9::detail::write<Char,OutputIt,T,<unnamed>>(OutputIt, T)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3351): here
            instantiation of "auto fmt::v9::detail::default_arg_formatter<Char>::operator()(T)->fmt::v9::detail::default_arg_formatter<Char>::iterator [with Char=char, T=float]" 
E:/Downloads/tiny-cuda-nn/dependencies/fmt/include\fmt/core.h(1644): here
            instantiation of "auto fmt::v9::visit_format_arg(Visitor &&, const fmt::v9::basic_format_arg<Context> &)->decltype((<expression>)) [with Visitor=fmt::v9::detail::default_arg_formatter<char>, Context=fmt::v9::format_context]" 
(4055): here
            instantiation of "void fmt::v9::detail::vformat_to(fmt::v9::detail::buffer<Char> &, fmt::v9::basic_string_view<Char>, fmt::v9::basic_format_args<fmt::v9::basic_format_context<fmt::v9::detail::buffer_appender<fmt::v9::type_identity_t<Char>>, fmt::v9::type_identity_t<Char>>>, fmt::v9::detail::locale_ref) [with Char=char]" 
E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format-inl.h(1472): here

E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format.h(2475): error: duplicate base class name
          detected during:
            instantiation of class "fmt::v9::detail::has_isfinite<T, Enable> [with T=float, Enable=void]" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            [ 395 instantiation contexts not shown ]
            instantiation of "auto fmt::v9::detail::write(OutputIt, T, fmt::v9::basic_format_specs<Char>, fmt::v9::detail::locale_ref)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3217): here
            instantiation of "auto fmt::v9::detail::write<Char,OutputIt,T,<unnamed>>(OutputIt, T)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3351): here
            instantiation of "auto fmt::v9::detail::default_arg_formatter<Char>::operator()(T)->fmt::v9::detail::default_arg_formatter<Char>::iterator [with Char=char, T=float]" 
E:/Downloads/tiny-cuda-nn/dependencies/fmt/include\fmt/core.h(1644): here
            instantiation of "auto fmt::v9::visit_format_arg(Visitor &&, const fmt::v9::basic_format_arg<Context> &)->decltype((<expression>)) [with Visitor=fmt::v9::detail::default_arg_formatter<char>, Context=fmt::v9::format_context]" 
(4055): here
            instantiation of "void fmt::v9::detail::vformat_to(fmt::v9::detail::buffer<Char> &, fmt::v9::basic_string_view<Char>, fmt::v9::basic_format_args<fmt::v9::basic_format_context<fmt::v9::detail::buffer_appender<fmt::v9::type_identity_t<Char>>, fmt::v9::type_identity_t<Char>>>, fmt::v9::detail::locale_ref) [with Char=char]" 
E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format-inl.h(1472): here

E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format.h(2475): error: duplicate base class name
          detected during:
            instantiation of class "fmt::v9::detail::has_isfinite<T, Enable> [with T=float, Enable=void]" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            instantiation of "fmt::v9::detail::isfinite" 
(3177): here
            processing of template argument list for "fmt::v9::detail::has_isfinite" 
(3177): here
            [ 393 instantiation contexts not shown ]
            instantiation of "auto fmt::v9::detail::write(OutputIt, T, fmt::v9::basic_format_specs<Char>, fmt::v9::detail::locale_ref)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3217): here
            instantiation of "auto fmt::v9::detail::write<Char,OutputIt,T,<unnamed>>(OutputIt, T)->OutputIt [with Char=char, OutputIt=fmt::v9::appender, T=float, <unnamed>=0]" 
(3351): here
            instantiation of "auto fmt::v9::detail::default_arg_formatter<Char>::operator()(T)->fmt::v9::detail::default_arg_formatter<Char>::iterator [with Char=char, T=float]" 
E:/Downloads/tiny-cuda-nn/dependencies/fmt/include\fmt/core.h(1644): here
            instantiation of "auto fmt::v9::visit_format_arg(Visitor &&, const fmt::v9::basic_format_arg<Context> &)->decltype((<expression>)) [with Visitor=fmt::v9::detail::default_arg_formatter<char>, Context=fmt::v9::format_context]" 
(4055): here
            instantiation of "void fmt::v9::detail::vformat_to(fmt::v9::detail::buffer<Char> &, fmt::v9::basic_string_view<Char>, fmt::v9::basic_format_args<fmt::v9::basic_format_context<fmt::v9::detail::buffer_appender<fmt::v9::type_identity_t<Char>>, fmt::v9::type_identity_t<Char>>>, fmt::v9::detail::locale_ref) [with Char=char]" 
E:\Downloads\tiny-cuda-nn\dependencies\fmt\include\fmt\format-inl.h(1472): here
...
More errors
...

Error limit reached.
100 errors detected in the compilation of "E:/Downloads/tiny-cuda-nn/src/fully_fused_mlp.cu".
Compilation terminated.
fully_fused_mlp.cu
ninja: build stopped: subcommand failed.
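
Since the failing header lives in the fmt submodule under dependencies/fmt, it can help to note which fmt revision is checked out when comparing against a commit that still builds, for example:

# show which fmt revision is currently checked out
git submodule status dependencies/fmt
git -C dependencies/fmt log --oneline -1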

bacTlink (Aug 01 '22)

I have the same issue.

Windows 11; Visual Studio 2022, version 17.1.3; CUDA V11.6.124; PyTorch v1.12.0 (py3.10_cuda11.6_cudnn8_0); CMake 3.23.0; GPU: RTX 3090

Using commit 466aa1c51bac20179c61331bf4e5af4373623c2e allows me to temporarily bypass this problem.

wilsonCernWq (Aug 01 '22)

I have the same issue.

Windows 11; Visual Studio 2022, version 17.1.3; CUDA V11.6.124; PyTorch v1.12.0 (py3.10_cuda11.6_cudnn8_0); CMake 3.23.0; GPU: RTX 3090

Using commit 466aa1c allows me to temporarily bypass this problem.

what does this mean?

liang3588 (Aug 02 '22)

I have the same problem.

liang3588 (Aug 02 '22)

I have the same issue. Windows 11; Visual Studio 2022, version 17.1.3; CUDA V11.6.124; PyTorch v1.12.0 (py3.10_cuda11.6_cudnn8_0); CMake 3.23.0; GPU: RTX 3090. Using commit 466aa1c allows me to temporarily bypass this problem.

what does this mean?

I did my installation by cloning the repository, checking out that commit, and installing directly with the following commands:

git clone https://github.com/NVlabs/tiny-cuda-nn.git
cd tiny-cuda-nn
git checkout 466aa1c
cd bindings/torch
python setup.py install
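
To confirm that the compiled bindings actually load after installation, a quick sanity check (just an import test, not specific to this issue) is:

python -c "import tinycudann as tcnn; print(tcnn.__file__)"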

wilsonCernWq (Aug 02 '22)

I have the same issue. Windows 11; Visual Studio 2022, version 17.1.3; CUDA V11.6.124; PyTorch v1.12.0 (py3.10_cuda11.6_cudnn8_0); CMake 3.23.0; GPU: RTX 3090. Using commit 466aa1c allows me to temporarily bypass this problem.

what does this mean?

I did my installation by cloning the repository, checking out that commit, and installing directly with the following commands:

git clone https://github.com/NVlabs/tiny-cuda-nn.git
cd tiny-cuda-nn
git checkout 466aa1c
cd bindings/torch
python setup.py install

I installed it the way you did, thank you so much, it is amazing!

liang3588 (Aug 03 '22)

I finally fixed it by running:

git clone --recursive https://github.com/NVlabs/tiny-cuda-nn.git
cd tiny-cuda-nn
git checkout 466aa1c
cd bindings/torch
python setup.py install

It seems that cloning without --recursive may cause an incomplete download (missing submodules).
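
If the repository was already cloned without --recursive, the submodules can usually be fetched afterwards with:

git submodule update --init --recursive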

shengyu-meng (Aug 20 '22)

Fixed on latest master

Tom94 (Aug 23 '22)
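
For anyone arriving at this thread later, picking up the fix from the current master should look roughly like the earlier workaround, just without pinning a commit:

git clone --recursive https://github.com/NVlabs/tiny-cuda-nn.git
cd tiny-cuda-nn/bindings/torch
python setup.py install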