two-stream-fusion-for-action-recognition-in-videos
I fixed some bugs and started training. How can I test it? And how can I train a model on my own data? Thank you!
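A minimal sketch of how one might evaluate a saved checkpoint, assuming a standard PyTorch setup; the checkpoint path, model construction, and data loader shown in the comments are hypothetical and not taken from this repository:

```python
import torch

def evaluate(model, val_loader, device='cpu'):
    """Run a trained model over a validation loader and report top-1 accuracy."""
    model.to(device).eval()
    correct, total = 0, 0
    with torch.no_grad():
        for frames, labels in val_loader:          # frames: (N, C, H, W) tensors
            outputs = model(frames.to(device))
            preds = outputs.argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.size(0)
    return correct / total

# Hypothetical usage -- the checkpoint path and model class depend on how
# this repository actually saves its weights:
# model.load_state_dict(torch.load('model_best.pth.tar')['state_dict'])
# print('Top-1 accuracy:', evaluate(model, val_loader))
```

Testing on your own data generally comes down to building a loader over your own frames and reusing the same loop.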
First, thanks for your code! I ran into the problem when running average_fusion.py; the "videoname" variable is defined in split_train_test_video.py, line 54. Thanks again.
Hi, thanks for providing such great code. Could you share the environment settings for this code? Thanks in advance.
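In the meantime, a quick way to report or compare environments is to print the versions of the libraries this kind of two-stream PyTorch code usually depends on; the package list below is an assumption, not taken from the repository:

```python
# Print the versions of packages a two-stream PyTorch project typically needs,
# so environments can be compared. Adjust the list to match this repository.
import importlib

for pkg in ('torch', 'torchvision', 'numpy', 'cv2', 'PIL', 'sklearn'):
    try:
        mod = importlib.import_module(pkg)
        print(pkg, getattr(mod, '__version__', 'version attribute not found'))
    except ImportError:
        print(pkg, 'not installed')
```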
There are many bugs in this code; I wonder whether you have released the wrong version. I would be very grateful if you could check it.
Is there any way to test my video on the trained model?
When I run conv_fusion.py, I run into problems:

==> (Training video, Validation video): (11, 0)
Eligible videos for training: 10 videos
Eligible videos for validation: 0 videos
Epoch 0/19...
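The "Eligible videos for validation: 0 videos" line suggests the train/test split assigned every clip to the training set, so the validation loop has nothing to iterate over and training fails later. A small sanity check one could run before training, assuming the split is a pair of dicts mapping video name to label (a guess based on the log line, not the repository's actual API):

```python
# Sanity-check a train/test split before launching training. The dict format
# is an assumption inferred from the "(Training video, Validation video)" log.
def check_split(train_videos, test_videos, min_val=1):
    print(f'training videos:   {len(train_videos)}')
    print(f'validation videos: {len(test_videos)}')
    if len(test_videos) < min_val:
        raise ValueError('Validation set is empty -- check that the test-split '
                         'list file matches the video names on disk.')

# Example with toy data:
check_split({'v_Jump_g01_c01': 0, 'v_Jump_g01_c02': 0}, {'v_Jump_g02_c01': 0})
```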