pytorch-vsumm-reinforce
Results worse than the original implementation.
It looks like this implementation yields worse results than both the paper and the Theano implementation. What could be the reason, and are there any ideas on how to fix this?
Thank you!
I also get only a 35.54% average F-score with 5-fold cross-validation on SumMe!
Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the numbers as expected. But the funny thing is that all the scores generated by the model are close to 0.5. If you change a single line to replace the model's prediction array with a numpy array of the same shape filled with 0.5, it gives the same results!
Wow. Any idea, @KaiyangZhou?
Sorry for not responding promptly; too busy with deadlines.
Typically, learning with RL is quite tricky, and it is hard to judge how well the agent has learned unless we have a deterministic metric to exactly measure each action it takes. To improve performance, you might want to try more epochs (e.g. 200). Since the learning process in video summarization is essentially a combinatorial optimization problem, where the agent tries different combinations to see which one is rewarded the most, it is natural to increase the number of training epochs.
I also found that the range of scores is not diverse. However, what I found is that the score curve is actually more important: from it we can see which parts are scored higher and which parts are relatively less important (there is a score plot in the AAAI'18 paper). The scale wouldn't matter too much as long as the important frames are scored higher. It is normal that random scores can produce reasonably good results, because the shot summaries are obtained by a post-processing algorithm, i.e. knapsack. If you feed a score vector of all 0.5, the first few shots will be selected by the knapsack, which could bring some (perhaps good) results (please refer to the evaluation code, which explains more clearly how the performance is measured).
The PyTorch implementation doesn't strictly follow the 5-fold setting, that is, different folds overlap, so please use this code for further research rather than for reproducing the paper's results.
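For readers unfamiliar with the knapsack step mentioned above, here is a minimal sketch of how a 0/1 knapsack turns per-shot scores into a summary. This is a toy illustration, not the repo's actual evaluation code: the shot scores, lengths, and the choice of shot value (mean score × length) are made-up assumptions.

```python
import numpy as np

def knapsack_select(values, weights, capacity):
    """0/1 knapsack via dynamic programming: pick the subset of shots
    whose total length stays within `capacity` while maximizing value."""
    n = len(values)
    # dp[i][c] = best achievable value using the first i shots with budget c
    dp = np.zeros((n + 1, capacity + 1))
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if weights[i - 1] <= c:
                dp[i][c] = max(dp[i][c],
                               dp[i - 1][c - weights[i - 1]] + values[i - 1])
    # backtrack to recover which shots were selected
    picks, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            picks.append(i - 1)
            c -= weights[i - 1]
    return sorted(picks)

# Toy example: 5 shots with per-shot mean importance scores and frame counts.
# Note a flat 0.5 score vector still produces a valid (non-empty) summary.
scores = [0.5, 0.5, 0.9, 0.5, 0.5]
lengths = [30, 40, 50, 60, 20]
budget = int(0.15 * sum(lengths))   # summaries capped at ~15% of video length
values = [s * l for s, l in zip(scores, lengths)]
print(knapsack_select(values, lengths, budget))  # → [0]
```

With a constant score vector, the knapsack has no information to discriminate between shots, so the selection is driven entirely by shot lengths and the budget, which is why a 0.5 vector can still score well under this evaluation.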
For the TVSum dataset, the 0.5-vector baseline actually outperforms the RL model. I strongly believe this baseline should have been included in the paper.
Does the pretrained Theano model also give only ~0.5 scores, or just the PyTorch model you trained yourself?
pytorch implementation doesn't strictly follow the 5-fold setting,
Can you elaborate?
Yes, using their pretrained Theano model only.
https://github.com/KaiyangZhou/vsumm-reinforce/blob/master/vsum_test.py After line 80, replace probs (the values predicted by the model) by adding probs = np.zeros_like(probs).astype(float) + 0.5
So this actually means that neither the Theano nor the PyTorch model is able to reproduce the results of the paper or train correctly?
The Theano model correctly reproduces the numbers shown in the paper. But not using the RL model at all gives the same results as well.
Does it also reproduce the numbers when trained from scratch with Theano?
I tried the pre-trained model
When using the PyTorch implementation without folds (training and testing on the same videos) I can get 40.1% on SumMe!
If I understand correctly, wouldn't this also mean that the knapsack algorithm will favor short shots? If the score curve looks good but the scores are still clustered around 0.5, won't a short shot with an average close to 0.5 always be preferred over a long shot with an average greater than 0.5?
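Whether this bias exists depends on how the shot value is defined in the evaluation (a length-weighted value removes it). The concern above can be made concrete with a brute-force check under the assumption that each shot's knapsack value is its unweighted mean score; all shot numbers here are made up.

```python
from itertools import combinations

# Hypothetical shots as (mean_score, length_in_frames).
# Knapsack value = mean score (unweighted), weight = length, budget = 60 frames.
shots = [(0.50, 20), (0.50, 20), (0.50, 20), (0.55, 60)]
budget = 60

# Brute-force over all subsets that fit the budget, maximizing total value.
best = max(
    (combo
     for r in range(len(shots) + 1)
     for combo in combinations(range(len(shots)), r)
     if sum(shots[i][1] for i in combo) <= budget),
    key=lambda combo: sum(shots[i][0] for i in combo),
)
print(best)  # → (0, 1, 2): three short 0.5-shots (value 1.5) beat one 0.55 long shot
```

Under this value definition the three mediocre short shots dominate the single better long shot, which is exactly the short-shot bias described above; weighting each shot's value by its length would make the long shot competitive again.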
It's a bit strange that a naive algorithm assigning the same weight to every frame achieves close to state-of-the-art results... Does anyone have an explanation for this?
And here is the answer (from CVPR 2019): Rethinking the Evaluation of Video Summaries
@KaiyangZhou I find this model learns nothing. I evaluated a randomly initialized model and got 41.7, but only 41.2 after 200 epochs of training.
Could you please release the code, especially the part on how to extract features and get the change points?
@divamgupta Is the code below the reason for it learning nothing? https://github.com/KaiyangZhou/pytorch-vsumm-reinforce/blob/fdd03be93f090278424af789c120531e49aefa40/rewards.py#L15-L16
detach...
Have you tried without detach? Does your evaluation score go up during training? The dataset is too small: 20 videos for training and 5 for testing. Essentially, the scores on the 5 test videos vary a lot.
The actions come from here:
https://github.com/KaiyangZhou/pytorch-vsumm-reinforce/blob/fdd03be93f090278424af789c120531e49aefa40/main.py#L125
Although the Bernoulli sample has a grad_fn in PyTorch, its gradient is zero.
https://github.com/pytorch/pytorch/blob/6dfecc7e01842bb7e5024794fdce94480de7bb3a/tools/autograd/derivatives.yaml#L181
So even if you remove detach, it does not help...
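The zero-gradient behavior, and why REINFORCE-style training uses log_prob instead of backpropagating through the sample, can be checked directly. This is a toy sketch: the probability tensors and the scalar reward are made up, not taken from the repo.

```python
import torch

# 1) Backprop through a Bernoulli sample: the output has a grad_fn,
#    but its derivative w.r.t. the probabilities is defined as zero.
p = torch.tensor([0.3, 0.7], requires_grad=True)
a = torch.bernoulli(p)
a.sum().backward()
print(p.grad)   # tensor([0., 0.]) -- no learning signal flows this way

# 2) The policy-gradient (REINFORCE) estimator sidesteps this: the gradient
#    flows through log_prob of the already-sampled action, scaled by a reward.
p2 = torch.tensor([0.3, 0.7], requires_grad=True)
dist = torch.distributions.Bernoulli(probs=p2)
action = dist.sample()          # sampled with no gradient attached
reward = 1.0                    # placeholder scalar reward for illustration
loss = -(dist.log_prob(action) * reward).sum()
loss.backward()
print(p2.grad)  # nonzero: this path does carry a learning signal
```

So the zero gradient through the sample itself is expected and harmless; what matters for learning is whether the log_prob path (and the reward feeding it) is wired up correctly.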