DCNv2
Support for 3080 or 3090
Hi! I have a 3090 GPU, but I run into a problem when compiling DCNv2. The system is Ubuntu 18.04 and the PyTorch version is 1.7.0. The error is: nvcc fatal : Unsupported gpu architecture 'compute_86'. I don't know how to fix it.
I am also trying to get DCNv2 to compile on a system with an RTX 3070 and CUDA 11. It seems this library does not compile with the Nvidia compute capability compute_86 (CUDA 11.x) that Ampere cards require. Here's a source listing which compute versions support which cards: https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/.
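If you want to double-check which architecture your card actually requires, here is a minimal sketch assuming PyTorch is installed with CUDA support (this is not part of the DCNv2 build itself):

# Print the compute capability that nvcc must target for the installed GPU.
import torch

major, minor = torch.cuda.get_device_capability(0)
print("Compute capability: sm_%d%d" % (major, minor))  # e.g. sm_86 on an RTX 3070/3090
print("PyTorch built against CUDA", torch.version.cuda)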
Thanks. So there is no feasible way to solve it?
I would like to confirm that I was able to compile and use the library with this fork: https://github.com/MatthewHowe/DCNv2.git
Hardware: I am using an RTX 3070
My conda environment:
- python 3.8.5
- cudatoolkit 11.0.221
- pytorch 1.7.0 (py3.8_cuda11.0.221_cudnn8.0.3_0) [n.b. does not compile on nightly build]
My system environment:
- CUDA 11.1 (as reported by nvidia-smi)
- Cuda compilation tools, release 11.1, V11.1.105 (as reported by nvcc --version)
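A quick way to verify the environment the build will actually see is the sketch below (nothing specific to DCNv2; expected values are from my environment above):

# Versions picked up by the extension build.
import torch

print(torch.__version__)               # expect 1.7.0
print(torch.version.cuda)              # expect 11.0
print(torch.backends.cudnn.version())  # expect 8003 (cuDNN 8.0.3)
print(torch.cuda.is_available())       # expect True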
@limmor1 Thanks! Compiled successfully through your shared fork on an RTX 3090 platform!
I can also confirm this setup for https://github.com/zju3dv/snake on an RTX 3090 platform. The only difference was that I compiled pytorch 1.7.0 from source and did not install the Cuda toolkit in the conda environment.
If this is due to the PyTorch issue, I think I know how to fix it.
Open /home/user/.local/lib/python3.6/site-packages/torch/utils/cpp_extension.py and comment out the line flags.append('-gencode=arch=compute_{},code=sm_{}'.format(num, num)) around ln1441 in cpp_extension.py.
After that I was able to run make.sh with no trouble, but I don't know why.
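For reference, the edit is just commenting out that one append; a rough sketch of the patched region, assuming a PyTorch 1.7-era cpp_extension.py (the exact surrounding code differs between versions):

# torch/utils/cpp_extension.py, inside _get_cuda_arch_flags() (around line 1441)
for arch in arch_list:
    num = arch[0] + arch[2]
    # Commented out so nvcc is never handed an arch flag (e.g. compute_86)
    # that the installed CUDA toolkit does not support:
    # flags.append('-gencode=arch=compute_{},code=sm_{}'.format(num, num))
    if arch.endswith('+PTX'):
        flags.append('-gencode=arch=compute_{},code=compute_{}'.format(num, num))

Note that setting the TORCH_CUDA_ARCH_LIST environment variable (e.g. to "7.5") before building usually achieves the same effect without editing the file.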
For those who are still running into this issue, I fixed it by adding an additional argument in the setup script that tells nvcc which arch to build for:
extra_compile_args["nvcc"] = [
"-DCUDA_HAS_FP16=1",
"-D__CUDA_NO_HALF_OPERATORS__",
"-D__CUDA_NO_HALF_CONVERSIONS__",
"-D__CUDA_NO_HALF2_OPERATORS__",
"-arch=sm_75"
]
The original build script uses CUDAExtension from torch, which targets the latest GPU arch available. On some systems, the active Cuda version does not support that latest arch. For example, I used an A100 SXM4 (latest arch is sm_80) with Cuda 10.2 (which supports only sm_75 and older). Since most Nvidia graphics cards support building with older arch versions, what I did was specify an arch version that is compatible with both my Cuda version and my graphics card.
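For context, here is a minimal sketch of how that argument slots into a CUDAExtension-based setup script; the package name, extension name, source globs, and the sm_86 target are placeholders to adapt to your own project and card, not DCNv2's actual build script:

# setup.py sketch; names and source paths below are placeholders.
import glob
from setuptools import setup
from torch.utils.cpp_extension import BuildExtension, CUDAExtension

extra_compile_args = {"cxx": []}
extra_compile_args["nvcc"] = [
    "-DCUDA_HAS_FP16=1",
    "-D__CUDA_NO_HALF_OPERATORS__",
    "-D__CUDA_NO_HALF_CONVERSIONS__",
    "-D__CUDA_NO_HALF2_OPERATORS__",
    "-arch=sm_86",  # pick an arch supported by both your GPU and your CUDA toolkit
]

setup(
    name="DCNv2",
    ext_modules=[
        CUDAExtension(
            name="_ext",
            sources=glob.glob("src/*.cpp") + glob.glob("src/cuda/*.cu"),
            extra_compile_args=extra_compile_args,
        )
    ],
    cmdclass={"build_ext": BuildExtension},
)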
I can confirm that @CaoHoangTung's suggestion above worked for me as well. As stated in the README file, I am using a GeForce RTX 3090 Ti plus all dependency versions as defined in the README.