hf2vad
Reproduce results on ShanghaiTech
Hi, Thank you for your contribution and for providing the code!
Unfortunately I am not able to reproduce the results with the model pretrained on the ShanghaiTech dataset. I have performed all the pre-processing steps as you have clearly explained here and used the same pre-trained cascade RCNN and Flownet2 weights.
Could you provide some information about how you extracted frames from the original videos in the training dataset? I have done it in the following way: for each video I created a folder named after the base name of the file and extracted the frames into it with ffmpeg. For example, for the video `01_001.avi` I created a folder named `01_001` and extracted the frames with the command `ffmpeg -r 1 -i 01_001.avi -r 1 -start_number 0 "01_001/%03d.jpg"`. As a result, the training folder is organized in the same way as the testing folder.
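In case it is useful to others, a minimal shell loop that applies the same command to every training video might look like the sketch below (this is just my own batch wrapper, assuming the .avi files sit in the current directory and ffmpeg is on the PATH):

```bash
# Sketch: extract frames from every training video into a folder named
# after the video's base name, using the same options as the command above.
# Assumes the .avi files are in the current working directory.
for f in *.avi; do
  name="${f%.avi}"          # e.g. 01_001.avi -> 01_001
  mkdir -p "$name"
  ffmpeg -r 1 -i "$f" -r 1 -start_number 0 "$name/%03d.jpg"
done
```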
Many thanks in advance.
If I remember correctly, the frames are provided by the ShanghaiTech dataset itself?
I have downloaded the dataset from the link as suggested. The testing directory contains frames, while the training one contains videos in .avi format. Is the source correct, or are you talking about a different one? If so, could you kindly provide the link that you used? Thank you.
Hi, thank you for your contribution and for providing the code! I have run into the same issue: the testing directory contains frames, while the training one only contains the .avi videos. Could you kindly help me deal with this problem? Thank you.
@benedettaliberatori, @Meenn
Oh, yes, the training directory only contains .avi videos. I extracted the frames with ffmpeg, using a command like `ffmpeg -i <video_name> -qscale:v 1 -qmin 1 <video_name>/%04d.jpg`. The resulting frames take about 25 GB of disk space for me, which may not be very convenient to share.
Please note that some videos have more than a thousand frames, so I think `%04d` is more reasonable.
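For reference, a batch version of that command over all training videos could look like this rough sketch (the directory layout is just an assumption, adapt it to wherever you keep the .avi files):

```bash
# Sketch: extract frames for every training video with the options above,
# writing them into <video_name>/%04d.jpg. Paths are assumptions.
for f in *.avi; do
  name="${f%.avi}"
  mkdir -p "$name"
  ffmpeg -i "$f" -qscale:v 1 -qmin 1 "$name/%04d.jpg"
done
```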
Thanks for your reply, I will try it!
@benedettaliberatori @Meenn
Don't forget to check out the recent commit before preprocessing the dataset.