faster-rcnn.pytorch
Not compiled with GPU support (nms at /home/yuezhangw/Program/Faster_RCNN_torch101/faster-rcnn.pytorch/lib/model/csrc/nms.h:22)
The compilation finishes successfully, with no errors, only a warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++.
Then, when I run train_val.py, an error occurs: Not compiled with GPU support (nms at /home/yuezhangw/Program/Faster_RCNN_torch101/faster-rcnn.pytorch/lib/model/csrc/nms.h:22)
It seems that I did not compile nms with GPU support, even though the compilation reported no errors. I used Python 3.6 to compile; maybe I should try Python 2.7? As a newcomer, I don't know how to use two different Python versions in one environment, one for compiling and one for running the program.
So, has anyone had the same problem? Thanks for the help!
Do you have a _C….pyd (or .so on Linux) file after compiling, in /lib or a deeper folder?
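In case it helps, here is a minimal sketch for locating the compiled extension from the repository root (the lib path and the _C* naming are assumptions based on this thread):

import glob
import os

# Search lib/ recursively for the compiled C/CUDA extension built by setup.py.
candidates = glob.glob(os.path.join("lib", "**", "_C*"), recursive=True)
print(candidates or "No compiled _C extension found")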
Yes, I have it, but it still doesn't work.
It seems that the code can't detect a CUDA device. Does torch.cuda.is_available() return True for you? If yes, try removing CUDA_HOME from this line. Of course, I hope you are using the PyTorch 1.0 branch.
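For reference, the guard being discussed in setup.py looks roughly like the sketch below (paraphrased from this thread rather than copied verbatim, so the surrounding code may differ); dropping the CUDA_HOME check means only torch.cuda.is_available() decides whether the CUDA sources get built:

import torch
from torch.utils.cpp_extension import CUDA_HOME, CUDAExtension, CppExtension

# Original condition: build the CUDA ops only if a GPU is visible AND a CUDA toolkit is found.
if torch.cuda.is_available() and CUDA_HOME is not None:
    extension = CUDAExtension   # nms and the other ops get GPU support
else:
    extension = CppExtension    # CPU-only build, which later raises "Not compiled with GPU support"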
Maybe you are right, I will try it tomorrow! Thanks a lot for your reply!
Yes, I use the PyTorch 1.0 branch (the README file can confirm it). In setup.py the condition is
if torch.cuda.is_available() and CUDA_HOME is not None
I tried removing part of it, so it becomes
if torch.cuda.is_available()
and torch.cuda.is_available() always returns True for me. Then I run setup.py again, and an error occurs:
Traceback (most recent call last):
File "setup.py", line 65, in
Do you know how to deal with it? Thank you ever so much!
OK, I see. You use Anaconda, and I suppose you installed CUDA through it, so it is not the full toolkit.
Install the CUDA toolkit manually from https://developer.nvidia.com/cuda-downloads
After installation, PyTorch should be able to access the CUDA path. You can check it:
import os
print(os.environ.get('CUDA_PATH'))
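Since the check in setup.py uses CUDA_HOME (which, I believe, comes from torch.utils.cpp_extension in the PyTorch 1.0 branch), you can also print that value directly:

from torch.utils.cpp_extension import CUDA_HOME
# None means PyTorch could not find a CUDA toolkit; a manual install typically shows /usr/local/cuda.
print(CUDA_HOME)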
Excuse me! I am running into the same problem. Have you solved it yet? If yes, please share the solution, thanks!
Sorry, I have not solved it yet.
Before building with setup.py, you should make sure that torch.cuda.is_available() returns True. I've solved this problem!
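For anyone else hitting this, a small pre-build sanity check, run in the same Python environment you will use for setup.py (the expected values are assumptions based on this thread):

import torch
from torch.utils.cpp_extension import CUDA_HOME

print(torch.__version__)          # a 1.0.x build for the pytorch-1.0 branch
print(torch.cuda.is_available())  # must be True before building the CUDA ops
print(CUDA_HOME)                  # should not be None unless you removed that check from setup.py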