yolact
How to run eval.py without cuda?
Hello, I'm trying to run eval.py, but got an error. The error message is:
Traceback (most recent call last):
  File "eval.py", line 990, in <module>
    torch.set_default_tensor_type('torch.cuda.FloatTensor')
  File "/home/administrator/anaconda3/lib/python3.7/site-packages/torch/__init__.py", line 158, in set_default_tensor_type
    _C._set_default_tensor_type(t)
  File "/home/administrator/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 161, in _lazy_init
    _check_driver()
  File "/home/administrator/anaconda3/lib/python3.7/site-packages/torch/cuda/__init__.py", line 75, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
I don't have a GPU on my PC. How can I run eval.py without CUDA? Thanks.
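For context, the call named at eval.py line 990 is what triggers the assertion. A minimal sketch of a CPU-safe guard around it (an illustrative edit only, not the official fix; the forks linked below handle CPU support more thoroughly):

import torch

# Hypothetical guard (not part of the original eval.py): only request CUDA
# default tensors when a CUDA build of PyTorch and a GPU are actually available.
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')
else:
    torch.set_default_tensor_type('torch.FloatTensor')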
Oof
While I was implementing this, I considered writing it in a way that would also support running it on the CPU. However, pretty early on I decided that in order to squeeze as much speed out of the model as possible, I would optimize 100% for GPU usage and ignore running it on the CPU. Adding CPU support now would require a lot of changes.
I'll keep this issue open in case I do get around to adding CPU support, but for now it's not a high priority.
@jhsiao21 I added some code to run eval.py on the CPU. I believe it works correctly. If you have time, please try it: https://github.com/ar90n/yolact/tree/add-evaluation-without-cuda-support
@jhsiao21 I've made some modifications so that eval.py runs on the CPU too. You can try it out: https://github.com/HariharanJ9779/yolact
@ar90n @HariharanJ9779 What FPS did you get on the CPU, and which CPU did you use?
Thanks!
@sdimantsd Running evaluation on videos is very slow. Intel CPU works fine.
@bond9779 thank you, it worked on my laptop (a ThinkPad running Ubuntu).
It is indeed pretty slow; you can expect roughly 1-2 FPS. The CPU I have is:
$ cat /proc/cpuinfo | grep "model name"
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
model name : Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
@jerpint @bond9779 Thx!
Hey @jerpint, could you send me the code that @bond9779 made for the CPU? The link is not working; could you upload it to Google Drive or something and send me the link?
Thanks a lot
Hey @sree3333 or @jerpint, have you received any code, or can you provide any code made for the CPU?
Thanks!
@sch1eicher @sree3333 It seems that the original repo I took the fork from was removed, so I forked my own and added the code there. Here is the branch on my fork where I got it to work:
https://github.com/jerpint/yolact/tree/yolact-cpu
To use it:
git clone https://github.com/jerpint/yolact/
git checkout yolact-cpu
python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --video_multiframe=4
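Note: git checkout has to be run from inside the cloned repository, so in practice (assuming the default clone directory name yolact) the sequence would be:
git clone https://github.com/jerpint/yolact/
cd yolact
git checkout yolact-cpu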
Works like a charm. Thanks a lot!
Which torch version should I use? I installed multiple versions of torch and torchvision, e.g.
!pip install torch==1.3.0 torchvision==0.4.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
but I still get this message:
  File "C:\Users\Mohammed\AppData\Roaming\Python\Python37\site-packages\torch\cuda\__init__.py", line 192, in _lazy_init
    _check_driver()
  File "C:\Users\Mohammed\AppData\Roaming\Python\Python37\site-packages\torch\cuda\__init__.py", line 95, in _check_driver
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
when running this command:
!python ./yolact/eval.py --trained_model=./yolact_base_54_800000.pth --cuda=False --config=yolact_base_config --score_threshold=0.15 --top_k=15 --images=test_images:output_images
@elhmadany did you try the solution I posted above? I haven't modified it since, so it should still work. I don't think the pytorch versions will change much at this point.
@jerpint, thank you for your response. Yes, I cloned your repo and ran it in a Jupyter notebook, but I still get the same problem regarding CUDA, with this message (after I removed --cuda=False from the command):
Traceback (most recent call last):
File "./yolact/eval.py", line 1078, in
In addition, I checked the CUDA status with:
import torch
print(torch.version.cuda)
print(torch.cuda.device_count())
print(torch.cuda.is_available())
which printed:
None
0
False
@elhmadany what are you running inside a notebook exactly? The fork will only allow you to run the command I posted, anything else in that repo probably won't work out of the box on cpu.
As for the CUDA statuses you get, they are what you should expect when running on the CPU.
Yes, my objective is to run YOLACT on the CPU only, without CUDA, but I still don't know where the issue is. In the notebook I just installed the required packages (cython, opencv-python, pillow) and torch==1.2.0+cpu torchvision==0.4.0+cpu. I think my problem is with torch. Should I reinstall it with another version, or what do you suggest? I really appreciate your help.
Now it works well for me on images, after commenting out and modifying the parts of eval.py and augmentations.py where CUDA is used.
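For anyone making similar edits, a minimal illustrative pattern (not the exact changes described above) is to replace hard-coded .cuda() calls with a device chosen at runtime:

import torch

# Pick the device once; this falls back to the CPU when no CUDA build or GPU is present.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Instead of tensor.cuda(), move tensors (and modules) with .to(device).
# The 550x550 shape here is just a dummy input matching YOLACT's default image size.
dummy = torch.zeros(1, 3, 550, 550).to(device)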
I am getting an error while loading the weights: Error(s) in loading state_dict for Yolact. I followed the steps mentioned above. Any fix for this?
@sch1eicher @sree3333 @arjungoel7 When I use it:
git clone https://github.com/jerpint/yolact/
git checkout yolact-cpu
python eval.py --trained_model=weights/yolact_plus_resnet50_pascal_441_190000.pth --score_threshold=0.01 --top_k=5 --images="./test_single":"./genout1"
I am getting an error:
[sherry@xiaoi2018-754 yolact-yolact-cpu]$ CUDA_VISIBLE_DEVICES=-1 sh run_test.sh
Config not specified. Parsed yolact_plus_resnet50_pascal_config from the file name.
Loading model...Traceback (most recent call last):
File "eval.py", line 1031, in
@shakey-cuimiao Yes I get the same error
@arjungoel7 @jerpint @ar90n @elhmadany When I run
python eval.py --trained_model=weights/weights/yolact_base_54_800000.pth --score_threshold=0.01 --top_k=5 --images="./test_single":"./genout1"
it is OK. But when I run YOLACT++, I get an error: Error(s) in loading state_dict for Yolact, Unexpected key(s) in state_dict: "maskiou_net.maskiou_net.0.weight",
I haven't used this repo in a while. Most likely the models have been updated while the branch I created a while back has remained stale. Try merging master of yolact back into the branch I created, that might be your best bet.
@ar90n I'm trying to run YOLACT++ on the CPU using your code. I made changes to the load_weights function in yolact.py:
def load_weights(self, path, map_location=None):
    """ Loads weights from a compressed save file. """
    # state_dict = torch.load(path)
    state_dict = torch.load(path, map_location=map_location)

    # For backward compatability, remove these (the new variable is called layers)
    for key in list(state_dict.keys()):
        if key.startswith('backbone.layer') and not key.startswith('backbone.layers'):
            del state_dict[key]

        # Also for backward compatibility with v1.0 weights, do this check
        if key.startswith('fpn.downsample_layers.'):
            if cfg.fpn is not None and int(key.split('.')[2]) >= cfg.fpn.num_downsample:
                del state_dict[key]

    self = nn.DataParallel(self)
    self.module.load_state_dict(state_dict)
And this was the error. It works fine for YOLACT, but does not work with YOLACT++ (ResNet-101).
Loading model...Traceback (most recent call last):
File "eval.py", line 1110, in <module>
net.load_weights(args.trained_model, map_location=map_location)
File "/content/yolact_gpu/yolact.py", line 495, in load_weights
self.module.load_state_dict(state_dict)
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 830, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for Yolact:
Unexpected key(s) in state_dict: "backbone.layers.1.0.conv2.bias", "backbone.layers.1.0.conv2.conv_offset_mask.weight", "backbone.layers.1.0.conv2.conv_offset_mask.bias", "backbone.layers.1.3.conv2.bias", "backbone.layers.1.3.conv2.conv_offset_mask.weight", "backbone.layers.1.3.conv2.conv_offset_mask.bias", "backbone.layers.2.0.conv2.bias", "backbone.layers.2.0.conv2.conv_offset_mask.weight", "backbone.layers.2.0.conv2.conv_offset_mask.bias", "backbone.layers.2.3.conv2.bias", "backbone.layers.2.3.conv2.conv_offset_mask.weight", "backbone.layers.2.3.conv2.conv_offset_mask.bias", "backbone.layers.2.6.conv2.bias", "backbone.layers.2.6.conv2.conv_offset_mask.weight", "backbone.layers.2.6.conv2.conv_offset_mask.bias", "backbone.layers.2.9.conv2.bias", "backbone.layers.2.9.conv2.conv_offset_mask.weight", "backbone.layers.2.9.conv2.conv_offset_mask.bias", "backbone.layers.2.12.conv2.bias", "backbone.layers.2.12.conv2.conv_offset_mask.weight", "backbone.layers.2.12.conv2.conv_offset_mask.bias", "backbone.layers.2.15.conv2.bias", "backbone.layers.2.15.conv2.conv_offset_mask.weight", "backbone.layers.2.15.conv2.conv_offset_mask.bias", "backbone.layers.2.18.conv2.bias", "backbone.layers.2.18.conv2.conv_offset_mask.weight", "backbone.layers.2.18.conv2.conv_offset_mask.bias", "backbone.layers.2.21.conv2.bias", "backbone.layers.2.21.conv2.conv_offset_mask.weight", "backbone.layers.2.21.conv2.conv_offset_mask.bias", "backbone.layers.3.0.conv2.bias", "backbone.layers.3.0.conv2.conv_offset_mask.weight", "backbone.layers.3.0.conv2.conv_offset_mask.bias".
Any reference or leads would help. Thank you.
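The unexpected conv_offset_mask keys come from the deformable-convolution (DCNv2) layers that YOLACT++ adds to the backbone, so the mismatch likely means the model was built from a config, or without the DCNv2 extension, that does not create those layers. A hypothetical debugging sketch (compare_checkpoint_keys, the net argument, and the checkpoint path are all placeholders, not part of the repo) to see exactly which keys differ before calling load_state_dict:

import torch

def compare_checkpoint_keys(net, checkpoint_path):
    """Print which keys differ between a constructed Yolact model and a checkpoint.

    'net' and 'checkpoint_path' are placeholders: pass the model built from the
    YOLACT++ config and the corresponding .pth file.
    """
    state_dict = torch.load(checkpoint_path, map_location='cpu')
    model_keys = set(net.state_dict().keys())
    ckpt_keys = set(state_dict.keys())
    print('unexpected (in checkpoint, not in model):', sorted(ckpt_keys - model_keys)[:10])
    print('missing (in model, not in checkpoint):', sorted(model_keys - ckpt_keys)[:10])

Checking that the matching yolact_plus_* config is the one parsed (or passed via --config) before the model is constructed is one thing worth verifying first.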
@jerpint Hello. I changed the eval.py code of the YOLACT repo based on your eval code, but I can't run inference on the CPU as before. I always get "RuntimeError: CUDA error: out of memory". Please help me. Many thanks.
Reference: https://github.com/SpaceView/yolact_cpu
I have just tested it on Windows, and it works fine. Hope it can help a little bit. I found that most of the other CPU forks didn't remove the parallel part, and torch's parallel wrapper will automatically select CUDA if there is one. The main changes are:
net = net.cpu()
# net = CustomDataParallel(net)
transform = FastBaseTransform()
# transform = torch.nn.DataParallel(FastBaseTransform())
@SpaceView Thanks a lot for sharing this link :)