DiffDVR
CUDA version
Hi Sebastian, I would like to ask if CUDA must be version 11.0? Below is my error:

error MSB3721: The command ""D:\program files\CUDA\Development\bin\nvcc.exe" --use-local-env -ccbin "D:\program files\visual studio\visual studio 2022\VC\Tools\MSVC\14.36.32532\bin\HostX64\x64" -x cu -ID:\Wxr\render\differentiable\DiffDVR\renderer -ID:\Anaconda3\include -I"D:\program files\vcpkg\vcpkg\packages\glm_x64-windows\include" -I"D:\Anaconda3\Lib\site-packages\torch\include" -I"D:\Anaconda3\Lib\site-packages\torch\include\torch\csrc\api\include" -I"D:\Wxr\render\differentiable\DiffDVR\third-party\cuMat" -I"D:\Wxr\render\differentiable\DiffDVR\third-party\magic_enum\include" -I"D:\Wxr\render\differentiable\DiffDVR\third-party\cudad\include\cudAD" -I"D:\Wxr\render\differentiable\DiffDVR\third-party\tinyformat" -I"D:\Wxr\render\differentiable\DiffDVR\third-party\lz4\lib" -I"D:\program files\CUDA\Development\include" -I"D:\program files\CUDA\Development\include" --keep-dir x64\Debug -use_fast_math -maxrregcount=0 --machine 64 --compile -cudart static -gencode=arch=compute_86,code=sm_86 -std=c++17 --generate-line-info --expt-relaxed-constexpr --extended-lambda -Xptxas -v -gencode arch=compute_86,code=sm_86 -Xcudafe --diag_suppress=cc_clobber_ignored,--diag_suppress=integer_sign_change,--diag_suppress=useless_using_declaration,--diag_suppress=set_but_not_used,--diag_suppress=field_without_dll_interface,--diag_suppress=base_class_has_different_dll_interface,--diag_suppress=dll_interface_conflict_none_assumed,--diag_suppress=dll_interface_conflict_dllexport_assumed,--diag_suppress=implicit_return_from_non_void_function,--diag_suppress=unsigned_compare_with_zero,--diag_suppress=declared_but_not_referenced,--diag_suppress=bad_friend_decl --Werror cross-execution-space-call --no-host-device-move-forward --expt-relaxed-constexpr --expt-extended-lambda -std=c++17 --generate-code=arch=compute_61,code=[compute_61,sm_61] --generate-code=arch=compute_72,code=[compute_72,sm_72] 
-gencode=arch=compute_86,code=sm_86 -std=c++17 --generate-line-info --expt-relaxed-constexpr --extended-lambda -Xptxas -v -Xcompiler="/EHsc -Zi -Ob0" -g -D_WINDOWS -DONNX_NAMESPACE=onnx_c2 -D"RENDERER_SHADER_DIRS={"D:/Wxr/render/differentiable/DiffDVR/renderer","D:/Wxr/render/differentiable/DiffDVR/third-party/cudad/include/cudAD",}" -DRENDERER_RUNTIME_COMPILATION=1 -D"NVCC_ARGS="arch=compute_86,code=sm_86"" -D"NVCC_INCLUDE_DIR=D:/program files/CUDA/Development/include" -DRENDERER_BUILD_CPU_KERNELS=1 -DUSE_DOUBLE_PRECISION=0 -DNOMINMAX -DCUMAT_SINGLE_THREAD_CONTEXT=1 -DTHRUST_IGNORE_CUB_VERSION_CHECK=1 -D"CMAKE_INTDIR="Debug"" -D_MBCS -DWIN32 -D_WINDOWS -D"RENDERER_SHADER_DIRS={"D:/Wxr/render/differentiable/DiffDVR/renderer","D:/Wxr/render/differentiable/DiffDVR/third-party/cudad/include/cudAD",}" -DRENDERER_RUNTIME_COMPILATION=1 -D"NVCC_ARGS="arch=compute_86,code=sm_86"" -D"NVCC_INCLUDE_DIR=D:/program files/CUDA/Development/include" -DRENDERER_BUILD_CPU_KERNELS=1 -DUSE_DOUBLE_PRECISION=0 -DNOMINMAX -DCUMAT_SINGLE_THREAD_CONTEXT=1 -DTHRUST_IGNORE_CUB_VERSION_CHECK=1 -D"CMAKE_INTDIR="Debug"" -Xcompiler "/EHsc /W1 /nologo /Od /FdD:\Wxr\render\differentiable\DiffDVR\build\renderer\Debug\Renderer.pdb /FS /Zi /RTC1 /MDd /GR" -o Renderer.dir\Debug\renderer_cuda_impl.obj "D:\Wxr\render\differentiable\DiffDVR\renderer\renderer_cuda_impl.cu"" exited with code 1. Renderer D:\program files\visual studio\visual studio 2022\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 11.7.targets 790
Hi, it is likely that it will not work with a too-recent CUDA version out of the box. The changes required, however, should not be too drastic. Just keep in mind that the CUDA version used to compile the library must match the one PyTorch was built with.
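As a sanity check for the version-matching requirement above, one can compare the CUDA version reported by `torch.version.cuda` against the toolkit version printed by `nvcc --version`. A minimal sketch of that comparison (the helper name is hypothetical, not part of DiffDVR; the two version strings would come from PyTorch and nvcc respectively):

```python
# Compare the CUDA version PyTorch was built with (torch.version.cuda,
# e.g. "11.7") against the installed toolkit version (from `nvcc --version`,
# e.g. "11.7.1"). A mismatch in major.minor is what typically breaks
# extension builds like this one.

def cuda_versions_match(torch_cuda: str, toolkit_cuda: str) -> bool:
    """Return True if the two version strings agree on major.minor."""
    def major_minor(version: str) -> tuple:
        parts = version.split(".")
        return (int(parts[0]), int(parts[1]))
    return major_minor(torch_cuda) == major_minor(toolkit_cuda)

print(cuda_versions_match("11.7", "11.7.1"))  # True: same major.minor
print(cuda_versions_match("11.7", "11.0"))    # False: versions diverge
```

In a real environment the first argument would be `torch.version.cuda` and the second would be parsed from the `nvcc --version` output of the toolkit that Visual Studio invokes.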
The message above does not actually contain the error itself; these are only the compiler arguments. I can see that you are using CUDA 11.7, but not the actual error.
Thanks for your answer. Yes, CUDA 11.7 matches my PyTorch version. However, there are still many errors when building in Visual Studio, so I am going to reconfigure the environment on Linux instead.
Closing due to inactivity