
I can't find the video result

Open BibibNanana opened this issue 2 years ago • 12 comments

When I finish running my code, I can't find my result. How can I save the video output?

This is my command: python demo.py --config-file configs/sparse_inst_r50_dcn_giam_aug.yaml --video-input fissure.mp4 --output results --opt MODEL.WEIGHTS model_0144999.pth INPUT.MIN_SIZE_TEST 512

But when the run finishes, nothing is saved here.

BibibNanana avatar Sep 28 '22 02:09 BibibNanana

Has the video input been evaluated? If so, you asked for your results to be saved in the results directory, but it looks like that directory doesn't exist. Just create it, or change your --output argument to an existing directory.
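A minimal sketch of that fix, assuming the value passed to --output is "results" (os.makedirs with exist_ok=True is idempotent, so it is safe to run before every demo invocation):

```python
import os

output_dir = "results"  # the value passed to --output
os.makedirs(output_dir, exist_ok=True)  # create it if missing, do nothing if it exists
print(os.path.isdir(output_dir))  # → True
```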

leandro-svg avatar Sep 28 '22 05:09 leandro-svg

Has the video input been evaluated? If so, you asked for your results to be saved in the results directory, but it looks like that directory doesn't exist. Just create it, or change your --output argument to an existing directory.

I created the results directory, but I still can't find my video result.

BibibNanana avatar Sep 29 '22 02:09 BibibNanana

Has the video input been evaluated? If so, you asked for your results to be saved in the results directory, but it looks like that directory doesn't exist. Just create it, or change your --output argument to an existing directory.

Please help me, thanks.

BibibNanana avatar Oct 02 '22 09:10 BibibNanana

Dear @BibibNanana, looking at your logs, it seems you have two errors, and one of them is about your own trained model: --> Could not find encoder for codec id 27: Encoder not found. What I would suggest is to use the demo script with a pretrained model provided by the author, such as sparse_inst_r50_giam_aug_2b7d68.pth. If an error still occurs with these weights, then it is linked to this OpenCV exception: --> CAP_IMAGES: can't find starting number

If there aren't any errors, then it is most likely that your PyTorch model wasn't trained correctly.

If you want, you could share your model such that I could try on my side.

Best of luck!

leandro-svg avatar Oct 03 '22 06:10 leandro-svg

Additionally, your video format may not be the right one to use. See line 154: fourcc=cv2.VideoWriter_fourcc(*"x264"); you could try changing it to "mp4v".

leandro-svg avatar Oct 03 '22 07:10 leandro-svg

Additionally, your video format may not be the right one to use. See line 154: fourcc=cv2.VideoWriter_fourcc(*"x264"); you could try changing it to "mp4v".

I used sparse_inst_r101_giam_7b62ea.pth, but I still get some errors.

BibibNanana avatar Oct 03 '22 09:10 BibibNanana

Dear @BibibNanana, looking at your logs, it seems you have two errors, and one of them is about your own trained model: --> Could not find encoder for codec id 27: Encoder not found. What I would suggest is to use the demo script with a pretrained model provided by the author, such as sparse_inst_r50_giam_aug_2b7d68.pth. If an error still occurs with these weights, then it is linked to this OpenCV exception: --> CAP_IMAGES: can't find starting number

If there aren't any errors, then it is most likely that your PyTorch model wasn't trained correctly.

If you want, you could share your model such that I could try on my side.

Best of luck!

This is my model. Link: https://pan.baidu.com/s/1b0CYi8uRZBqq4m-kg7Mzgg?pwd=1234 Extraction code: 1234

BibibNanana avatar Oct 03 '22 09:10 BibibNanana

It is linked to your VideoWriter_fourcc. You could change the x264 format to "mp4v" or "h264": fourcc = cv2.VideoWriter_fourcc(*'h264') # or fourcc = cv2.VideoWriter_fourcc(*'x264') # or fourcc = cv2.VideoWriter_fourcc(*'mp4v')
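For context, a FourCC is just four character codes packed into one little-endian integer. This pure-Python sketch (the helper name pack_fourcc is hypothetical) mirrors what cv2.VideoWriter_fourcc(*'mp4v') computes, which can help when checking which codec a file or writer was actually given:

```python
def pack_fourcc(code: str) -> int:
    """Pack a four-character codec code into the integer OpenCV expects,
    equivalent to cv2.VideoWriter_fourcc(*code)."""
    assert len(code) == 4
    value = 0
    for i, ch in enumerate(code):
        value |= (ord(ch) & 0xFF) << (8 * i)  # byte i holds character i
    return value

print(hex(pack_fourcc("mp4v")))  # → 0x7634706d
```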

If this still doesn't work, I suggest reinstalling OpenCV with apt rather than pip if you have used pip in the past. OpenCV installed through pip may cause problems with these codec formats.

leandro-svg avatar Oct 03 '22 09:10 leandro-svg

It is linked to your VideoWriter_fourcc. You could change the x264 format to "mp4v" or "h264": fourcc = cv2.VideoWriter_fourcc(*'h264') # or fourcc = cv2.VideoWriter_fourcc(*'x264') # or fourcc = cv2.VideoWriter_fourcc(*'mp4v')

If this still doesn't work, I suggest reinstalling OpenCV with apt rather than pip if you have used pip in the past. OpenCV installed through pip may cause problems with these codec formats.

I will try it, thanks. I have another problem: I trained on my own dataset, but why is the AP so low?

BibibNanana avatar Oct 05 '22 00:10 BibibNanana

It is linked to your VideoWriter_fourcc. You could change the x264 format to "mp4v" or "h264": fourcc = cv2.VideoWriter_fourcc(*'h264') # or fourcc = cv2.VideoWriter_fourcc(*'x264') # or fourcc = cv2.VideoWriter_fourcc(*'mp4v')

If this still doesn't work, I suggest reinstalling OpenCV with apt rather than pip if you have used pip in the past. OpenCV installed through pip may cause problems with these codec formats.

I found my video. I really appreciate your help.

BibibNanana avatar Oct 05 '22 01:10 BibibNanana

Why does this error occur when testing a video with my own trained weights?

Traceback (most recent call last):
  File "demo.py", line 164, in <module>
    for vis_frame in tqdm.tqdm(demo.run_on_video(video, args.confidence_threshold), total=num_frames):
  File "/root/anaconda3/envs/cyy/lib/python3.8/site-packages/tqdm-4.65.0-py3.8.egg/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/remote-home/cs_cst_yy/cyy/SparseInst-main/sparseinst/d2_predictor.py", line 136, in run_on_video
    yield process_predictions(frame, self.predictor(frame))
  File "/remote-home/cs_cst_yy/cyy/SparseInst-main/sparseinst/d2_predictor.py", line 103, in process_predictions
    vis_frame = video_visualizer.draw_instance_predictions(
  File "/remote-home/cs_cst_yy/cyy/detectron2-0.3/detectron2/utils/video_visualizer.py", line 84, in draw_instance_predictions
    detected = [
  File "/remote-home/cs_cst_yy/cyy/detectron2-0.3/detectron2/utils/video_visualizer.py", line 85, in <listcomp>
    _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=None, ttl=8)
TypeError: 'NoneType' object is not subscriptable
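The traceback shows classes[i] subscripting None: the predictions handed to draw_instance_predictions carry no pred_classes (and possibly no pred_boxes, since SparseInst is box-free). A hedged sketch of the kind of guard one could add in process_predictions before visualizing; the Instances class below is a minimal stand-in for illustration only, and the helper name fields_for_visualizer is hypothetical:

```python
class Instances:
    """Minimal stand-in for detectron2's Instances, for illustration only."""
    def __init__(self, **fields):
        self._fields = fields

    def has(self, name):
        return name in self._fields

    def get(self, name):
        return self._fields[name]

def fields_for_visualizer(instances):
    # VideoVisualizer indexes classes[i] and boxes[i]; hand it empty lists
    # instead of None so nothing subscripts a NoneType.
    classes = instances.get("pred_classes") if instances.has("pred_classes") else []
    boxes = instances.get("pred_boxes") if instances.has("pred_boxes") else []
    return classes, boxes

# Predictions with only masks (as a box-free model may emit) no longer crash:
classes, boxes = fields_for_visualizer(Instances(pred_masks=["m0", "m1"]))
print(classes, boxes)  # → [] []
```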

116022017144 avatar Mar 29 '23 09:03 116022017144

Why does this error occur when testing a video with my own trained weights?

Traceback (most recent call last):
  File "demo.py", line 164, in <module>
    for vis_frame in tqdm.tqdm(demo.run_on_video(video, args.confidence_threshold), total=num_frames):
  File "/root/anaconda3/envs/cyy/lib/python3.8/site-packages/tqdm-4.65.0-py3.8.egg/tqdm/std.py", line 1178, in __iter__
    for obj in iterable:
  File "/remote-home/cs_cst_yy/cyy/SparseInst-main/sparseinst/d2_predictor.py", line 136, in run_on_video
    yield process_predictions(frame, self.predictor(frame))
  File "/remote-home/cs_cst_yy/cyy/SparseInst-main/sparseinst/d2_predictor.py", line 103, in process_predictions
    vis_frame = video_visualizer.draw_instance_predictions(
  File "/remote-home/cs_cst_yy/cyy/detectron2-0.3/detectron2/utils/video_visualizer.py", line 84, in draw_instance_predictions
    detected = [
  File "/remote-home/cs_cst_yy/cyy/detectron2-0.3/detectron2/utils/video_visualizer.py", line 85, in <listcomp>
    _DetectedInstance(classes[i], boxes[i], mask_rle=None, color=None, ttl=8)
TypeError: 'NoneType' object is not subscriptable

Did you solve it? I have the same error. Thanks.

g110056012 avatar Apr 25 '23 07:04 g110056012