DCNv2
DCN v2 compilation for RTX 3080
Hello,
I am not able to compile DCN v2. My configuration is the following:
conda 4.9.2, Python 3.8.8, PyTorch 1.8.1, Build cuda_11.2.r11.2/compiler.29618528_0, driver version 460.39, CUDA version 11.2
I have tried many DCN v2 forks but was not able to compile any of them successfully. Any help would be appreciated :)
Thank you, Fatih.
Have you solved the problem? I need to compile DCN v2 for RTX 3090
No, unfortunately, I am still struggling to compile it. I keep running into driver / CUDA toolkit / cuDNN issues, and because of this I cannot experiment with the CenterNet detector and the FairMOT tracker.
I run the https://github.com/MatthewHowe/DCNv2 fork on a 3090 with CUDA 11.0.2; other configuration: Python 3.7.9, PyTorch 1.7.0, gcc 9 / g++ 9 (this is important: using g++ 10 will cause an error).
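For anyone whose distro defaults to a newer g++, the g++ 9 requirement can be enforced per build without changing the system default. A minimal sketch, assuming `gcc-9`/`g++-9` are already installed (the commented-out build step and checkout path are illustrative):

```shell
# Point setuptools/nvcc at gcc 9 / g++ 9 for this build only.
# CC and CXX are honored by PyTorch's cpp_extension build machinery.
export CC=gcc-9
export CXX=g++-9
echo "building with CC=$CC and CXX=$CXX"
# cd DCNv2 && python setup.py build develop   # the actual build step
```

Because the variables are exported rather than set system-wide, other projects on the machine keep compiling with the default toolchain.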
Not sure whether you already tried this, but this helped me resolve compile issues for a 3080: https://github.com/CharlesShang/DCNv2/issues/57#issuecomment-618176625
My environment: driver 460.32.03, CUDA 11.1 (per nvcc -V). In my case:
Both PyTorch 1.8.1 and 1.7.1 can compile https://github.com/jinfagang/DCNv2_latest, but neither passes testcuda.py without changing the tolerance.
Only PyTorch 1.7.1 works for https://github.com/MatthewHowe/DCNv2, and it passes testcuda.py without any modification; PyTorch 1.8.1 does not work for that fork.
Hope these help!
@capricornfati My CUDA version is also 11.2 and I cannot compile this branch either. Have you solved this problem?
Is there any progress with compiling DCNv2 on the A100 or the RTX 3000 series with CUDA 11.3?
I have a 3090 and a 3080, but no luck. I cannot figure out what is happening, so I use a 1080 Ti for my DCN v2 experiments. What a pity.
The following docker image seems to compile DCNv2 fine but then fails the testcuda part for me (RTX 3080, Drivers 460.84). Running inference in applications that require DCNv2 works fine and I have not yet observed any deviations from expected outputs.
FROM pytorch/pytorch:1.8.1-cuda11.1-cudnn8-devel
RUN python -c 'import torch; assert torch.cuda.is_available(), "Cuda is not available. Make sure you have enabled docker GPU daemon"'
RUN apt-get update
RUN apt-get install -y git gcc
RUN git clone https://github.com/jinfagang/DCNv2_latest.git /var/DCNv2
RUN cd /var/DCNv2 && python setup.py build develop
RUN cd /var/DCNv2 && python testcuda.py
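A usage sketch for the Dockerfile above (the image tag dcnv2 is arbitrary, and the host needs Docker with the NVIDIA container toolkit for --gpus to work). The commands are printed rather than executed here, so the sketch itself has no Docker dependency; copy them to a shell on the GPU host:

```shell
# First command builds the image from the directory holding the Dockerfile;
# second re-runs the CUDA self-test with the host GPUs exposed to the container.
echo 'docker build -t dcnv2 .'
echo 'docker run --gpus all --rm dcnv2 python /var/DCNv2/testcuda.py'
```

Note that the `torch.cuda.is_available()` assertion baked into the image runs at build time, where no GPU is visible unless the Docker daemon's default runtime is set to nvidia; re-running testcuda.py at `docker run` time with `--gpus all` is the reliable check.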
I have the same issue with an RTX 3090: I can compile with PyTorch 1.7 but not with PyTorch 1.8.1.
gcc 9
Hello, I use the same RTX 3090, PyTorch, and CUDA configuration as you, but the following error occurs: Unsupported gpu architecture 'compute_86'
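That error usually means the installed nvcc predates sm_86 support, which was added in CUDA 11.1. Besides upgrading the toolkit, one commonly suggested workaround (a sketch, not something the repo documents) is to tell PyTorch's extension builder to target sm_80 instead, since Ampere cards can execute sm_80 binaries:

```shell
# compute_86 needs nvcc from CUDA 11.1+; with an older toolkit, build
# for sm_80 instead. TORCH_CUDA_ARCH_LIST is the environment variable
# read by torch.utils.cpp_extension to choose -gencode targets.
export TORCH_CUDA_ARCH_LIST="8.0"
echo "targeting arch list: $TORCH_CUDA_ARCH_LIST"
# python setup.py build develop   # the actual build step
```

This works because compute capability 8.6 devices are binary-compatible with 8.0 cubins; the cost is giving up any sm_86-specific tuning.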
This suggestion did it for me. Running an RTX 3090 Ti with all other dependencies specified in the README.