[Bug] How to use video or video frames to draw heat maps in version 1.x?
Branch
main branch (1.x version, such as v1.0.0, or dev-1.x branch)
Prerequisite
- [X] I have searched Issues and Discussions but cannot get the expected help.
- [X] I have read the documentation but cannot get the expected help.
- [X] The bug has not been fixed in the latest version.
Environment
I have all the environments required for MMAction2 1.x installed.
Describe the bug
MMAction2 0.x shipped a heat-map visualization script, 'demo/demo_gradcam.py', but it seems to be missing from the demo directory after the update to MMAction2 1.x. There is another file, 'gradcam_utils.py', under the utils directory, but it is not as convenient for drawing heat maps. Is there a solution, or could you tell me how to generate Grad-CAM heat maps in MMAction2 1.x? Thank you very much for your answer!
Reproduces the problem - code sample
No response
Reproduces the problem - command or script
No response
Reproduces the problem - error message
We used this demo command before:

```shell
python demo/demo_gradcam.py configs/recognition/i3d/i3d_r50_video_inference_32x2x1_100e_kinetics400_rgb.py \
    checkpoints/i3d_r50_video_32x2x1_100e_kinetics400_rgb_20200826-e31c6f52.pth demo/demo.mp4 \
    --target-layer-name backbone/layer4/1/relu --fps 10 \
    --out-filename demo/demo_gradcam.gif
```
Additional information
I hope you can help me answer my question, thank you very much!
Question:
I found the visualization code in 'tools/visualizations/vis_cam.py', which solved the Grad-CAM part. But when I run 'demo/demo.py', why can't I pass video frames as input?
The command I used was:

```shell
python demo/demo.py configs/recognition/user/tpn-slowonly_imagenet-pretrained-r50_8xb8-8x8x1-150e_kinetics400-rgb.py \
    work_dirs/New_Data/frames_224*224/dataPlus_time449/tpn-slowonly-User-batchsize12-SGD-lr0.01-200e-ST/epoch_84.pth \
    demo/label.txt demo/J3CFLHOC_diffu/J3CFLHOC_frame \
    --font-scale 10 --font-color red --out-filename demo/J3CFLHOC.gif
```
The error is:

```
Traceback (most recent call last):
  File "demo/demo.py", line 158, in <module>
    main()
  File "demo/demo.py", line 120, in main
    pred_result = inference_recognizer(model, args.video)
  File "/data/LK/Video_Processing/mmaction2-main/mmaction/apis/inference.py", line 92, in inference_recognizer
    data = test_pipeline(data)
  File "/home/LK/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 59, in __call__
    data = t(data)
  File "/home/LK/anaconda3/envs/openmmlab/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
    return self.transform(results)
  File "/data/LK/Video_Processing/mmaction2-main/mmaction/datasets/transforms/loading.py", line 679, in transform
    total_frames = results['total_frames']
KeyError: 'total_frames'
```
I use extracted video frames as input to run the demo, but this script seems to differ from the 0.x version, which could take a frame directory as input. How should this problem be solved?
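The KeyError above suggests that 'demo/demo.py' builds a video-style input dict (just a filename), while a frame-sampling transform like 'SampleFrames' needs a 'total_frames' key. A minimal sketch of a workaround, assuming your installed `inference_recognizer` still accepts a pre-built results dict (please verify against your 'mmaction/apis/inference.py'); the helper name `build_rawframe_input` and the default filename template and start index are my own placeholders, not MMAction2 API:

```python
# Sketch (not official MMAction2 API): build the results dict a
# rawframe-style test pipeline expects, including the 'total_frames'
# key whose absence raises the KeyError in the traceback above.
import os


def build_rawframe_input(frame_dir, filename_tmpl='img_{:05}.jpg',
                         start_index=1, modality='RGB'):
    """Count the frames in `frame_dir` and assemble the input dict.

    `filename_tmpl` and `start_index` must match how your frames were
    extracted -- the defaults here are assumptions, check your data.
    """
    total_frames = len([f for f in os.listdir(frame_dir)
                        if f.lower().endswith(('.jpg', '.jpeg', '.png'))])
    return dict(
        frame_dir=frame_dir,
        total_frames=total_frames,  # the key the pipeline complains about
        label=-1,
        start_index=start_index,
        filename_tmpl=filename_tmpl,
        modality=modality)


# Usage sketch: with the config's test_pipeline switched to rawframe
# transforms, something like the following may work (untested here):
#   from mmaction.apis import init_recognizer, inference_recognizer
#   model = init_recognizer(config, checkpoint, device='cuda:0')
#   result = inference_recognizer(
#       model, build_rawframe_input('demo/J3CFLHOC_diffu/J3CFLHOC_frame'))
```

If your version of `inference_recognizer` only accepts a string path, you would instead have to build this dict and feed it through the test pipeline manually.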
Did you solve this issue, @KunLiam? I encountered the same problem and don't know how to deal with it.
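For anyone else hitting the same `KeyError: 'total_frames'`: it usually means the data dict fed to the pipeline was built for a video file while the config's `test_pipeline` samples raw frames (or vice versa). A hedged sketch of a rawframe-style `test_pipeline` config fragment, using transform names from the MMAction2 1.x codebase; the `clip_len`, `frame_interval`, and crop values below are placeholders, so check them against your own config:

```python
# Config fragment (sketch): rawframe-oriented test pipeline for
# MMAction2 1.x. 'RawFrameDecode' reads images from a frame directory
# and requires 'frame_dir' and 'total_frames' in the input dict.
test_pipeline = [
    dict(type='SampleFrames', clip_len=8, frame_interval=8,
         num_clips=1, test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='FormatShape', input_format='NCTHW'),
    dict(type='PackActionInputs')
]
```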