DirectVoxGO

cuda extension error

Open hanxuel opened this issue 3 years ago • 15 comments

Hi, thank you for providing the new version. However, when I first tested the code, I hit the following error. Do you have any suggestions for solving this problem? My environment: torch 1.8.1, CUDA 10.2, Python 3.7.4.

python run.py --config configs/nerf/chair.py --render_test

Using /home/hl589/.cache/torch_extensions as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file /home/hl589/.cache/torch_extensions/adam_upd_cuda/build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] :/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output adam_upd_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=adam_upd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/TH -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/THC -isystem :/usr/local/cuda/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++14 -c /home/hl589/DirectVoxGO_new/lib/cuda/adam_upd_kernel.cu -o adam_upd_kernel.cuda.o
FAILED: adam_upd_kernel.cuda.o
:/usr/local/cuda/bin/nvcc --generate-dependencies-with-compile --dependency-output adam_upd_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=adam_upd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE="_gcc" -DPYBIND11_STDLIB="_libstdcpp" -DPYBIND11_BUILD_ABI="_cxxabi1011" -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/torch/csrc/api/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/TH -isystem /home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/include/THC -isystem :/usr/local/cuda/include -isystem /home/hl589/.conda/envs/Directvoxgo_new/include/python3.7m -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -std=c++14 -c /home/hl589/DirectVoxGO_new/lib/cuda/adam_upd_kernel.cu -o adam_upd_kernel.cuda.o
/bin/sh: 1: :/usr/local/cuda/bin/nvcc: not found
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1673, in _run_ninja_build
    env=env)
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/subprocess.py", line 487, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "run.py", line 13, in <module>
    from lib import utils, dvgo, dmpigo
  File "/home/hl589/DirectVoxGO_new/lib/utils.py", line 11, in <module>
    from .masked_adam import MaskedAdam
  File "/home/hl589/DirectVoxGO_new/lib/masked_adam.py", line 10, in <module>
    verbose=True)
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1091, in load
    keep_intermediates=keep_intermediates)
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1302, in _jit_compile
    is_standalone=is_standalone)
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1407, in _write_ninja_file_and_build_library
    error_prefix=f"Error building extension '{name}'")
  File "/home/hl589/.conda/envs/Directvoxgo_new/lib/python3.7/site-packages/torch/utils/cpp_extension.py", line 1683, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'adam_upd_cuda'

hanxuel avatar Feb 25 '22 09:02 hanxuel

I have the same error, except with FAILED: render_utils_kernel.cuda.o.

System: Windows 10, CUDA 11.3, torch 1.8.2+cu111, using x64 cl.exe from Visual Studio 17

The main error seems to be

C:\Files\repos\DirectVoxGO\lib\cuda\render_utils_kernel.cu(368): error: calling a __host__ function("std::conditional< ::std::is_same_v|| ::std::is_same_v, long double,    ::std::conditional< ::std::is_same_v&& ::std::is_same_v, float, double> ::type> ::type  ::pow<double, float, void> (T1, T2)") from a __global__ function("raw2alpha_cuda_kernel<double> ") is not allowed
C:\Files\repos\DirectVoxGO\lib\cuda\render_utils_kernel.cu(368): error: identifier "pow<double, float, void> " is undefined in device code
C:\Files\repos\DirectVoxGO\lib\cuda\render_utils_kernel.cu(404): error: calling a __host__ function("std::conditional< ::std::is_same_v|| ::std::is_same_v, long double,    ::std::conditional< ::std::is_same_v&& ::std::is_same_v, float, double> ::type> ::type  ::pow<double, float, void> (T1, T2)") from a __global__ function("raw2alpha_backward_cuda_kernel<double> ") is not allowed
C:\Files\repos\DirectVoxGO\lib\cuda\render_utils_kernel.cu(404): error: identifier "pow<double, float, void> " is undefined in device code
4 errors detected in the compilation of "C:/Files/repos/DirectVoxGO/lib/cuda/render_utils_kernel.cu".

so pow() as called in lines 368 & 404 seems to be a __host__ function (i.e. CPU-side? I'm not sure), while raw2alpha_cuda_kernel and raw2alpha_backward_cuda_kernel are defined as __global__ functions. Where exactly is this pow defined?

PNeigel avatar Mar 17 '22 11:03 PNeigel

Okay, I solved my problem the following way:

pow(double, double) is indeed defined in CUDA as a __device__ function (see https://docs.nvidia.com/cuda/cuda-math-api/group__CUDA__MATH__DOUBLE.html#group__CUDA__MATH__DOUBLE), so in theory it should be fine to call it from a __global__ function. But judging by the error in my previous post, the calls in lines 368 and 404 of render_utils_kernel.cu were somehow resolved to pow(double, float, void), which is apparently not defined as a __device__ function. I solved it by explicitly casting the parameters to double:

FILE: render_utils_kernel.cu
(line 368) [...] pow(1 + e, -interval);
→
(line 368 NEW) [...] pow((double)(1 + e), (double)(-interval));
FILE: render_utils_kernel.cu
(line 404) [...] pow(1+exp_d[i_pt], -interval-1) [...]
→
(line 404 NEW) [...] pow((double)(1+exp_d[i_pt]), (double)(-interval-1)) [...]

Now render_utils_kernel.cu compiles fine with nvcc and I can run DirectVoxGO.

As for @hanxuel 's problem: It seems that the nvcc binary from CUDA is not in your PATH: /bin/sh: 1: :/usr/local/cuda/bin/nvcc: not found

PNeigel avatar Mar 17 '22 12:03 PNeigel

Thanks a lot for your comments. Do you know how to solve this issue? /bin/sh: 1: :/usr/local/cuda/bin/nvcc: not found

hanxuel avatar Mar 17 '22 16:03 hanxuel

First of all, you obviously need to have some version of CUDA installed. Then the bin directory of your CUDA installation should be in your PATH, e.g. see https://stackoverflow.com/a/68238040
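
On Linux that usually means something like export PATH=/usr/local/cuda/bin:$PATH in your shell profile. One extra observation, hedged since I can only go by the log: the stray leading colon in ":/usr/local/cuda/bin/nvcc" and "-isystem :/usr/local/cuda/include" suggests that CUDA_HOME was set to ":/usr/local/cuda", i.e. the variable was appended PATH-style while it was empty; it should be the plain install prefix. A minimal Python check (not part of DirectVoxGO) of where PyTorch's JIT extension builder will look for nvcc:

# Minimal check (not part of DirectVoxGO): where will PyTorch's JIT extension
# builder look for nvcc?
import os
import shutil
from torch.utils.cpp_extension import CUDA_HOME

print("CUDA_HOME seen by torch :", CUDA_HOME)             # expected: /usr/local/cuda
print("nvcc found on PATH      :", shutil.which("nvcc"))  # should not be None
print("CUDA_HOME env var       :", os.environ.get("CUDA_HOME"))
print("CUDA_PATH env var       :", os.environ.get("CUDA_PATH"))
# A leading ':' in any of these means the variable was set with PATH-style
# appending while it was empty, e.g. CUDA_HOME=$CUDA_HOME:/usr/local/cuda;
# set it to the plain prefix instead.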

PNeigel avatar Mar 17 '22 16:03 PNeigel

/bin/sh: 1: :/usr/local/cuda/bin/nvcc: not found

Thank you. Following this instruction, the nvcc-not-found issue is solved. But now new errors appear; the key one seems to be:

/home/hl589/DirectVoxGO_new/lib/cuda/adam_upd.cpp: In function ‘void adam_upd(at::Tensor, at::Tensor, at::Tensor, at::Tensor, int, float, float, float, float)’:
/home/hl589/DirectVoxGO_new/lib/cuda/adam_upd.cpp:32:42: warning: ‘at::DeprecatedTypeProperties& at::Tensor::type() const’ is deprecated: Tensor.type() is deprecated. Instead use Tensor.options(), which in many cases (e.g. in a constructor) is a drop-in replacement. If you were using data from type(), that is now available from Tensor itself, so instead of tensor.type().scalar_type(), use tensor.scalar_type() instead and instead of tensor.type().backend() use tensor.device(). [-Wdeprecated-declarations]
   32 | #define CHECK_CUDA(x) TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
      | ^
/home/hl589/DirectVoxGO_new/lib/cuda/adam_upd.cpp:34:24: note: in expansion of macro ‘CHECK_CUDA’
   34 | #define CHECK_INPUT(x) CHECK_CUDA(x); CHECK_CONTIGUOUS(x)
      | ^~~~~~~~~~
/home/hl589/DirectVoxGO_new/lib/cuda/adam_upd.cpp:56:3: note: in expansion of macro ‘CHECK_INPUT’
   56 | CHECK_INPUT(param);

and also:
from torch_scatter import segment_coo
ModuleNotFoundError: No module named 'torch_scatter'

hanxuel avatar Mar 17 '22 16:03 hanxuel

try pip install torch_scatter (which is missing from requirements.txt btw @sunset1995 )
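
Note that torch_scatter is compiled against a specific torch/CUDA combination, so install a build that matches your environment (the torch_scatter README lists prebuilt wheels per torch/CUDA version). A minimal sanity check after installing, not part of the repo:

# Sanity check for torch_scatter (minimal sketch, not part of DirectVoxGO):
# verify the import works and exercise segment_coo once on the GPU.
import torch
import torch_scatter
from torch_scatter import segment_coo  # the symbol DirectVoxGO imports

print("torch        :", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("torch_scatter:", torch_scatter.__version__)
if torch.cuda.is_available():
    src = torch.ones(4, device="cuda")
    index = torch.tensor([0, 0, 1, 1], device="cuda")  # must be sorted
    print(segment_coo(src, index, reduce="sum"))        # expected: tensor([2., 2.])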

PNeigel avatar Mar 17 '22 16:03 PNeigel

try pip install torch_scatter (which is missing from requirements.txt btw @sunset1995 )

Okay! I thought CHECK_INPUT was also an error. It compiles successfully now. Thank you!

hanxuel avatar Mar 17 '22 16:03 hanxuel

try pip install torch_scatter (which is missing from requirements.txt btw @sunset1995 )

Okay! I thought CHECK_INPUT was also an error. It compiles successfully now. Thank you!

I met the same problem as you. Have you solved this CHECK_INPUT error?

Bwwm92 avatar Mar 21 '22 13:03 Bwwm92

Can you post the full error message?

PNeigel avatar Mar 21 '22 15:03 PNeigel

Can you post the full error message?

Detected CUDA files, patching ldflags
Emitting ninja build file /home/wwm/.cache/torch_extensions/py37_cu102/adam_upd_cuda/build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module adam_upd_cuda...
Segmentation fault (core dumped)

Bwwm92 avatar Mar 22 '22 04:03 Bwwm92

I encountered a segmentation fault when using the wrong PyTorch or CUDA version. Can you verify that your PyTorch installation matches your CUDA version?
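
If you want to check this quickly, here is a minimal sketch (not part of the repo) that prints the versions which have to agree. It can also help to delete the cached build under ~/.cache/torch_extensions (the "extensions root" shown in the logs above) so the extension is rebuilt from scratch after changing environments.

# Print the versions that have to agree (minimal sketch, not from the repo):
# the CUDA version PyTorch was built with, the nvcc that will compile the
# extensions, and whether the GPU is visible at all.
import subprocess
import torch

print("torch                :", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)
print("GPU available        :", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device               :", torch.cuda.get_device_name(0))
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)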

sunset1995 avatar Mar 29 '22 06:03 sunset1995

I have the same error, except with FAILED: render_utils_kernel.cuda.o. [...] Where exactly is pow defined?

Hello, I am also on Windows, with torch 1.12.0, CUDA 11.6, and VS 2017, but I'm running into a problem I can't solve: CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. It has stalled my attempt to run this code. Do you have any suggestions or comments, or could you leave a contact e-mail so I can explain the problem better?

Ballzy0706 avatar Aug 05 '22 07:08 Ballzy0706

I have the same error, except with FAILED: render_utils_kernel.cuda.o. [...]

Hello, I am also on Windows, with torch 1.12.0, CUDA 11.6, and VS 2017, but I'm running into a problem I can't solve: CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. [...]

(base) C:\Users\shower>python D:\DirectVoxGO-main\run.py --config configs/nerf/lego.py --render_test
Using C:\Users\shower\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu116 as PyTorch extensions root...
Detected CUDA files, patching ldflags
Emitting ninja build file C:\Users\shower\AppData\Local\torch_extensions\torch_extensions\Cache\py39_cu116\adam_upd_cuda\build.ninja...
Building extension module adam_upd_cuda...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
[1/2] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc --generate-dependencies-with-compile --dependency-output adam_upd_kernel.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=adam_upd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\shower\anaconda3\lib\site-packages\torch\include -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\TH -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" -IC:\Users\shower\anaconda3\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -c D:\DirectVoxGO-main\lib\cuda\adam_upd_kernel.cu -o adam_upd_kernel.cuda.o
FAILED: adam_upd_kernel.cuda.o
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\nvcc --generate-dependencies-with-compile --dependency-output adam_upd_kernel.cuda.o.d -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler /MD -DTORCH_EXTENSION_NAME=adam_upd_cuda -DTORCH_API_INCLUDE_EXTENSION_H -IC:\Users\shower\anaconda3\lib\site-packages\torch\include -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\torch\csrc\api\include -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\TH -IC:\Users\shower\anaconda3\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\include" -IC:\Users\shower\anaconda3\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_86,code=sm_86 -c D:\DirectVoxGO-main\lib\cuda\adam_upd_kernel.cu -o adam_upd_kernel.cuda.o
C:/Users/shower/anaconda3/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
C:/Users/shower/anaconda3/lib/site-packages/torch/include\c10/macros/Macros.h(143): warning C4067: unexpected tokens following preprocessor directive - expected a newline
C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.6/include\thrust/detail/config/cpp_dialect.h:118: warning: Thrust requires at least MSVC 2019 (19.20/16.0/14.20). MSVC 2017 is deprecated but still supported. MSVC 2017 support will be removed in a future release. Define THRUST_IGNORE_DEPRECATED_CPP_DIALECT to suppress this message.
C:/Users/shower/anaconda3/lib/site-packages/torch/include\c10/core/SymInt.h(84): warning #68-D: integer conversion resulted in a change of sign

c:\users\shower\anaconda3\lib\site-packages\torch\include\pybind11\cast.h(1429): error: too few arguments for template template parameter "Tuple" detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]" (1507): here

c:\users\shower\anaconda3\lib\site-packages\torch\include\pybind11\cast.h(1503): error: too few arguments for template template parameter "Tuple" detected during instantiation of class "pybind11::detail::tuple_caster<Tuple, Ts...> [with Tuple=std::pair, Ts=<T1, T2>]" (1507): here

2 errors detected in the compilation of "D:/DirectVoxGO-main/lib/cuda/adam_upd_kernel.cu".
adam_upd_kernel.cu
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\Users\shower\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1808, in _run_ninja_build
    subprocess.run(
  File "C:\Users\shower\anaconda3\lib\subprocess.py", line 528, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "D:\DirectVoxGO-main\run.py", line 13, in <module>
    from lib import utils, dvgo, dcvgo, dmpigo
  File "D:\DirectVoxGO-main\lib\utils.py", line 11, in <module>
    from .masked_adam import MaskedAdam
  File "D:\DirectVoxGO-main\lib\masked_adam.py", line 8, in <module>
    adam_upd_cuda = load(
  File "C:\Users\shower\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1202, in load
    return _jit_compile(
  File "C:\Users\shower\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1425, in _jit_compile
    _write_ninja_file_and_build_library(
  File "C:\Users\shower\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1537, in _write_ninja_file_and_build_library
    _run_ninja_build(
  File "C:\Users\shower\anaconda3\lib\site-packages\torch\utils\cpp_extension.py", line 1824, in _run_ninja_build
    raise RuntimeError(message) from e
RuntimeError: Error building extension 'adam_upd_cuda'

I have pasted my full error output above to help you locate my problem.

Ballzy0706 avatar Aug 06 '22 03:08 Ballzy0706

Hi @Ballzy0706, were you able to fix this RuntimeError: Error building extension 'adam_upd_cuda'? I am getting the same runtime error message as you.

I am using windows with the following config:

g++ (Rev3, Built by MSYS2 project) 12.1.0

gcc (Rev3, Built by MSYS2 project) 12.1.0

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Tue_May__3_19:00:59_Pacific_Daylight_Time_2022
Cuda compilation tools, release 11.7, V11.7.64
Build cuda_11.7.r11.7/compiler.31294372_0

Python 3.10.5

any help is appreciated.

P.S.: I have tried CUDA versions 11.7, 11.6 and 11.3, and they all give me the same errors.

Best regards

supdhn avatar Aug 16 '22 13:08 supdhn

Sorry, I just saw your question.

I found that the C++/CUDA extensions in this repo are quite demanding about the build environment; for me, CUDA 11.3 with Visual Studio 2019 was enough to meet the build requirements.
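
One more hedged note for the Windows cases above (general background, not something specific to this repo): torch.utils.cpp_extension builds these kernels with MSVC's cl.exe as the host compiler, so the MSYS2 gcc/g++ mentioned in the previous comment are not what gets used. The build normally needs to run from a VS 2019 x64 developer prompt (or with the VS 2019 build tools on PATH), which also satisfies the Thrust "requires at least MSVC 2019" warning in the log above. A quick sketch to see which compilers the current shell exposes:

# Which host compiler and nvcc does the current shell expose? (sketch only;
# on Windows, torch.utils.cpp_extension compiles extensions with cl.exe)
import shutil
import subprocess

print("cl.exe:", shutil.which("cl"))    # should point into a VS 2019 installation
print("nvcc  :", shutil.which("nvcc"))
if shutil.which("cl"):
    banner = subprocess.run(["cl"], capture_output=True, text=True)
    # cl prints its version banner ("Microsoft (R) C/C++ Optimizing Compiler ...") to stderr
    print((banner.stderr or banner.stdout).strip().splitlines()[0])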

Hope the answer helps you.

Best regards

Ballzy0706 avatar Aug 20 '22 09:08 Ballzy0706