VNext
Could you please provide a demo.py for us to display visualization results like Detectron2? The one in Detectron2 is image-level only and cannot be used directly.
https://github.com/lalalafloat/VNext/blob/main/projects/IDOL/demo_idol.py
python projects/IDOL/demo_idol.py --config-file projects/IDOL/configs/coco_pretrain/r50_coco_sequence.yaml --input input.png --output result.png
ModuleNotFoundError: No module named 'MultiScaleDeformableAttention'
Can you successfully run the demo visualization?
I tried the above command. However, the '--confidence-threshold' flag has no effect, and the output image does not match the visualization described here: https://github.com/lalalafloat/VNext#visualization-results-on-ovis-valid-set
I tried to run video inference visualization, but it failed.
I tried to run python projects/IDOL/demo_idol.py, but it is not finding the checkpoint. Do you have any idea which folder I should put the pre-trained model in?
Thanks
You can put the checkpoint in any folder, but you need to specify its path in the weights entry of the configuration file. For example: https://github.com/lalalafloat/VNext/blob/d965e0f7af3ecdfdcf74ba14e0ad5443c06b68cc/projects/IDOL/configs/coco_pretrain/r50_coco_sequence.yaml#L13
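For instance, assuming the standard Detectron2-style config layout this repo uses, the checkpoint path would be set like this (the file path below is a placeholder, not an actual filename from the repo):

```yaml
# r50_coco_sequence.yaml (excerpt) -- replace the path with your own checkpoint
MODEL:
  WEIGHTS: "/path/to/your/checkpoint.pth"
```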
Sorry for replying so late. Go to the folder "projects/IDOL/idol/models/ops" and run the following command: bash make.sh. If it compiles successfully, the error will be fixed.
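As a quick check that the compiled extension is actually visible to Python (a minimal sketch; the module name is taken from the error message above):

```python
# Check whether the MultiScaleDeformableAttention extension built by
# `bash make.sh` in projects/IDOL/idol/models/ops is importable.
import importlib.util

spec = importlib.util.find_spec("MultiScaleDeformableAttention")
if spec is None:
    print("MultiScaleDeformableAttention not found; "
          "re-run `bash make.sh` in projects/IDOL/idol/models/ops")
else:
    print("MultiScaleDeformableAttention found at:", spec.origin)
```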
Yes, I can visualize detection results successfully. What problems or errors did you encounter?
Please continue in this issue:
https://github.com/wjf5203/VNext/issues/11#issuecomment-1230384604
@lalalafloat thanks for your suggestions, but I don't understand why I get this kind of output. Thanks for your help.
@CarlHuangNuc Hello. Have you solved the issue of running this code on video?
@assia855 have you figured out the problem? I'm experiencing a similar issue

I've updated the demo_idol.py from lalalafloat to visualize videos. I set is_multi_cls to False so that the instance IDs match the pred_scores. My forked repo is here: https://github.com/reno77/VNext . The command to run inference on videos is: python projects/IDOL/demo_idol.py --config-file projects/IDOL/configs/ovis_swin.yaml --video-input input.mp4 --output output1.mp4
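For readers wondering what the is_multi_cls switch changes: in DETR-style inference, multi-class mode can emit one detection per class above the threshold for a single query, while single-class mode keeps only the top-scoring class, so each instance ID lines up with exactly one score. A minimal sketch of that selection logic (function and variable names are hypothetical, not the actual VNext code):

```python
# Hypothetical sketch of multi-class vs. single-class selection per query.
def select_predictions(scores, threshold=0.3, multi_cls=True):
    """scores: one list of per-class confidences per query.
    Returns a list of (query_id, class_id, score) detections."""
    results = []
    for qid, cls_scores in enumerate(scores):
        if multi_cls:
            # One detection per class whose score clears the threshold.
            for cid, s in enumerate(cls_scores):
                if s >= threshold:
                    results.append((qid, cid, s))
        else:
            # Keep only the single best class, so IDs align 1:1 with scores.
            cid = max(range(len(cls_scores)), key=lambda c: cls_scores[c])
            if cls_scores[cid] >= threshold:
                results.append((qid, cid, cls_scores[cid]))
    return results
```

With multi_cls=True a single query can appear several times in the output, which is why the IDs no longer pair up one-to-one with pred_scores.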