
RuntimeError: Detectron2 is not compiled with GPU support!

Open hxue3 opened this issue 2 years ago • 6 comments

I am trying to run the demo command and saw this error:

[04/05 14:46:45 detectron2]: Arguments: Namespace(confidence_threshold=0.5, config_file='configs/GTR_TAO_DR2101.yaml', cpu=False, input=None, opts=['MODEL.WEIGHTS', 'models/GTR_TAO_DR2101.pth'], output='output/demo_yfcc.mp4', video_input='docs/yfcc_v_acef1cb6d38c2beab6e69e266e234f.mp4', webcam=None)
WARNING [04/05 14:46:45 d2.config.compat]: Config 'configs/GTR_TAO_DR2101.yaml' has no VERSION. Assuming it to be compatible with latest v2.

[04/05 14:47:30 fvcore.common.checkpoint]: [Checkpointer] Loading from models/GTR_TAO_DR2101.pth ...
WARNING [04/05 14:47:37 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
roi_heads.box_predictor.0.freq_weight
roi_heads.box_predictor.1.freq_weight
roi_heads.box_predictor.2.freq_weight
WARNING [04/05 14:47:37 fvcore.common.checkpoint]: The checkpoint state_dict contains keys that are not used by the model:
roi_heads.pos_emb.weight
Could not find encoder for codec id 27: Encoder not found
[ERROR:0] global /io/opencv/modules/videoio/src/cap.cpp (392) open VIDEOIO(CV_IMAGES): raised OpenCV exception:

OpenCV(4.1.2) /io/opencv/modules/videoio/src/cap_images.cpp:253: error: (-5:Bad argument) CAP_IMAGES: can't find starting number (in the name of file): /tmp/video_format_test3gfay7is/test_file.mkv in function 'icvExtractPattern'

Traceback (most recent call last):
  File "demo.py", line 161, in <module>
    for vis_frame in demo.run_on_video(video):
  File "/home/jupyter/GTR/gtr/predictor.py", line 146, in run_on_video
    outputs = self.video_predictor(frames)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/jupyter/GTR/gtr/predictor.py", line 102, in __call__
    predictions = self.model(inputs)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/jupyter/GTR/gtr/modeling/meta_arch/gtr_rcnn.py", line 61, in forward
    return self.sliding_inference(batched_inputs)
  File "/home/jupyter/GTR/gtr/modeling/meta_arch/gtr_rcnn.py", line 81, in sliding_inference
    instances_wo_id = self.inference(
  File "/home/jupyter/GTR/gtr/modeling/meta_arch/custom_rcnn.py", line 107, in inference
    features = self.backbone(images.tensor)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/jupyter/detectron2/detectron2/modeling/backbone/fpn.py", line 126, in forward
    bottom_up_features = self.bottom_up(x)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/jupyter/GTR/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py", line 630, in forward
    x = stage(x)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/jupyter/GTR/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/res2net.py", line 457, in forward
    sp = self.convs[i](sp, offset, mask)
  File "/opt/conda/envs/gtr/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/jupyter/detectron2/detectron2/layers/deform_conv.py", line 474, in forward
    x = modulated_deform_conv(
  File "/home/jupyter/detectron2/detectron2/layers/deform_conv.py", line 221, in forward
    _C.modulated_deform_conv_forward(
RuntimeError: Detectron2 is not compiled with GPU support!

hxue3 avatar Apr 05 '22 14:04 hxue3

Hi,

Our model needs a GPU for inference. Can you check here and make sure your detectron2 is installed with GPU support?
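
For reference, a quick check along these lines (a minimal sketch, not an official script; exact versions and paths will differ per environment) shows whether PyTorch and detectron2 were built with CUDA and can see the GPU:

# Sanity check (sketch): is PyTorch a CUDA build, is a GPU visible,
# and which detectron2 build is actually being imported?
import torch
import detectron2

print("torch:", torch.__version__)
print("torch built with CUDA:", torch.version.cuda)   # None means a CPU-only build
print("GPU visible:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)

# detectron2 also ships a full environment report:
#   python -m detectron2.utils.collect_env
# If it reports a CPU-only detectron2 build, rebuild/reinstall detectron2
# in an environment where CUDA is available before running the demo.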

xingyizhou avatar Apr 05 '22 15:04 xingyizhou

Thanks. I have solved the problem. However, I noticed that when I tried a larger video file for the demo, the process always gets killed. Is there a limit on how big a video file can be handled by GTR?

hxue3 avatar Apr 05 '22 16:04 hxue3

Hi, I have the same problem as you, how did you solve it?

Yamahhh avatar Apr 09 '22 12:04 Yamahhh

Hi @hxue3 and @kele-git ,

The OOM issue for long videos is because we decode the entire compressed video into 3D tensors during preprocessing. There should be a (simple) way to decode only the frames in a sliding window. I'll hopefully look into this in the following week(s), but feel free to play with it yourself.
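
For illustration, a sliding-window decode could look roughly like the sketch below. This is not GTR's actual preprocessing; it uses OpenCV's VideoCapture, and the window/stride sizes and the video_predictor call are placeholders:

# Sketch only (not GTR's code): decode a long video in sliding windows with
# OpenCV instead of loading every frame into memory at once.
import cv2

def sliding_window_frames(path, window=32, stride=32):
    """Yield lists of up to `window` BGR frames, advancing by `stride` frames."""
    cap = cv2.VideoCapture(path)
    buf = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buf.append(frame)
        if len(buf) == window:
            yield buf
            buf = buf[stride:]   # keeps the overlap when stride < window
    if buf:
        yield buf                # trailing frames that did not fill a full window
    cap.release()

# Hypothetical usage: run inference chunk by chunk instead of on all frames.
# for frames in sliding_window_frames("input.mp4"):
#     outputs = video_predictor(frames)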

Best, Xingyi

xingyizhou avatar Apr 09 '22 16:04 xingyizhou

> Hi, I have the same problem as you, how did you solve it?

Did you manage to solve it?

eliahuhorwitz avatar Apr 19 '22 15:04 eliahuhorwitz

Is the detection model combined with the tracking model? I didn't find the output of the test. I look forward to your answer.

xin9726 avatar May 13 '22 13:05 xin9726