Results worse than the original implementation.

Open • divamgupta opened this issue 6 years ago • 21 comments

It looks like this implementation yields worse results than both the paper and the Theano implementation. What could be the reason, and are there any ideas on how to fix this?

Thank You!

divamgupta avatar Jul 24 '18 10:07 divamgupta

I also get only a 35.54% average F-score for 5-fold cross-validation on SumMe!

mctigger avatar Sep 07 '18 14:09 mctigger

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

divamgupta avatar Sep 08 '18 15:09 divamgupta

Wow! Any ideas, @KaiyangZhou?

mctigger avatar Sep 09 '18 13:09 mctigger

Sorry for not responding promptly; I've been too busy with deadlines.

Typically, learning with RL is quite tricky, and it is hard to judge how well the agent has learned unless we have a deterministic metric that exactly measures each action it takes. To improve performance, you might want to try more epochs (e.g., 200). Since the learning process in video summarization is essentially a combinatorial optimization problem, where the agent tries different combinations to see which one is rewarded the most, it is natural to increase the number of training epochs.

I also found that the range of scores is not diverse. However, what I found is that the score curve is actually more important: from it we can see which parts are scored higher and which parts are relatively less important (there is a score plot in the AAAI'18 paper). The scale doesn't matter too much as long as the important frames are scored higher. It is normal that random scores can produce reasonably good results, because the shot summaries are obtained by a post-processing algorithm, i.e. knapsack. If you feed in a score vector of all 0.5s, the first few shots will be selected by the knapsack, which can produce some (perhaps good) results as expected (please refer to the evaluation code, which explains more clearly how performance is measured).
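A minimal sketch of that post-processing step, for reference: average the frame scores within each shot, then run a 0/1 knapsack over the shots under a 15% length budget. The function names below are illustrative, not the repo's actual API.

```python
import numpy as np

def knapsack_select(values, weights, capacity):
    """0/1 knapsack via dynamic programming: maximize total shot value
    subject to total shot length <= capacity."""
    n = len(values)
    dp = np.zeros((n + 1, capacity + 1))
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i, c] = dp[i - 1, c]
            if weights[i - 1] <= c:
                dp[i, c] = max(dp[i, c],
                               dp[i - 1, c - weights[i - 1]] + values[i - 1])
    # Backtrack to recover which shots were selected.
    picks, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i, c] != dp[i - 1, c]:
            picks.append(i - 1)
            c -= weights[i - 1]
    return sorted(picks)

def summarize(frame_scores, change_points, proportion=0.15):
    """Average frame scores per shot, then pick shots by knapsack."""
    shot_lens = [end - start + 1 for start, end in change_points]
    shot_scores = [float(frame_scores[s:e + 1].mean()) for s, e in change_points]
    budget = int(proportion * sum(shot_lens))
    return knapsack_select(shot_scores, shot_lens, budget)
```

With a constant score vector, every shot has the same value, so the selection is driven purely by shot lengths, which is why the 0.5 baseline can still produce a plausible-looking summary.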

The PyTorch implementation doesn't strictly follow the 5-fold setting; that is, different folds overlap. So please use this code for further research rather than for reproducing the paper results.

KaiyangZhou avatar Sep 09 '18 19:09 KaiyangZhou

For the TVSum dataset, the results of the 0.5-vector baseline are better than those of the RL model. I strongly believe they should have been included in the paper.

divamgupta avatar Sep 12 '18 05:09 divamgupta

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

Does the pretrained Theano model also give only ~0.5 scores, or just the PyTorch model you trained yourself?

mctigger avatar Sep 12 '18 11:09 mctigger

The PyTorch implementation doesn't strictly follow the 5-fold setting,

Can you elaborate?

mctigger avatar Sep 12 '18 11:09 mctigger

Yes, using their pretrained Theano model only.

In https://github.com/KaiyangZhou/vsumm-reinforce/blob/master/vsum_test.py, after line 80, replace probs (the values predicted by the model) by adding probs = np.zeros_like(probs).astype(float) + 0.5
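Formatted as a code block, the hypothetical one-line patch (probs is the per-frame score array already defined in vsum_test.py):

```python
# Replace the model's per-frame predictions with a constant 0.5,
# leaving the knapsack post-processing to build the summary on its own.
probs = np.zeros_like(probs).astype(float) + 0.5
```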

divamgupta avatar Sep 12 '18 12:09 divamgupta

Yes, using their pretrained Theano model only.

In https://github.com/KaiyangZhou/vsumm-reinforce/blob/master/vsum_test.py, after line 80, replace probs (the values predicted by the model) by adding probs = np.zeros_like(probs).astype(float) + 0.5

So this actually means that neither the Theano model nor the PyTorch model is able to reproduce the results of the paper or train correctly?

mctigger avatar Sep 12 '18 12:09 mctigger

The Theano model reproduces the numbers shown in the paper correctly. But not using the RL model gives the same results as well.

divamgupta avatar Sep 12 '18 12:09 divamgupta

The Theano model reproduces the numbers shown in the paper correctly. But not using the RL model gives the same results as well.

Does it also reproduce the numbers when trained from scratch with Theano?

mctigger avatar Sep 12 '18 13:09 mctigger

I only tried the pre-trained model.

divamgupta avatar Sep 12 '18 13:09 divamgupta

When using the PyTorch implementation without folds (i.e., training and testing on the same videos), I can get 40.1% on SumMe!

mctigger avatar Sep 12 '18 14:09 mctigger

Sorry for not responding promptly; I've been too busy with deadlines.

Typically, learning with RL is quite tricky, and it is hard to judge how well the agent has learned unless we have a deterministic metric that exactly measures each action it takes. To improve performance, you might want to try more epochs (e.g., 200). Since the learning process in video summarization is essentially a combinatorial optimization problem, where the agent tries different combinations to see which one is rewarded the most, it is natural to increase the number of training epochs.

I also found that the range of scores is not diverse. However, what I found is that the score curve is actually more important: from it we can see which parts are scored higher and which parts are relatively less important (there is a score plot in the AAAI'18 paper). The scale doesn't matter too much as long as the important frames are scored higher. It is normal that random scores can produce reasonably good results, because the shot summaries are obtained by a post-processing algorithm, i.e. knapsack. If you feed in a score vector of all 0.5s, the first few shots will be selected by the knapsack, which can produce some (perhaps good) results as expected (please refer to the evaluation code, which explains more clearly how performance is measured).

The PyTorch implementation doesn't strictly follow the 5-fold setting; that is, different folds overlap. So please use this code for further research rather than for reproducing the paper results.

If I understand correctly, wouldn't this also mean that the knapsack algorithm will favor short shots? Even if the score curve looks good, as long as the scores all stay very close to 0.5, a short shot with an average close to 0.5 will always be preferred over a long shot with an average greater than 0.5, won't it?
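A toy example, reusing the illustrative knapsack_select sketch from earlier in this thread, makes the bias concrete:

```python
# Two short shots with flat ~0.5 scores contribute a total value of 1.0,
# while one long shot with a higher average (0.6) contributes only 0.6
# for the same length budget, so the knapsack prefers the short shots.
values = [0.5, 0.5, 0.6]   # mean score per shot
weights = [5, 5, 10]       # shot length in frames
print(knapsack_select(values, weights, capacity=10))  # -> [0, 1]
```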

mctigger avatar Sep 19 '18 12:09 mctigger

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

It's a bit strange that a naive algorithm that uses the same weight for all frames achieves close to state-of-the-art results... Does anyone have an explanation for this?

YairShemer avatar Oct 02 '18 06:10 YairShemer

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

It's a bit strange that a naive algorithm that uses the same weight for all frames achieves close to state-of-the-art results... Does anyone have an explanation for this?

And here is the answer (from CVPR 2019): "Rethinking the Evaluation of Video Summaries".

YairShemer avatar Apr 28 '19 09:04 YairShemer

@KaiyangZhou I find this model learns nothing. I evaluated a randomly initialized model and got 41.7, but only got 41.2 after 200 epochs of training.

zouying-sjtu avatar Jun 25 '19 06:06 zouying-sjtu

Could you please release the code, especially the part for extracting features and getting change points?
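In the meantime, here is a rough sketch of the feature-extraction step as the paper describes it: sample frames at ~2 fps and take 1024-d GoogLeNet pool5 features per frame. The torchvision usage below is an assumption on my part, and KTS change-point detection is a separate step not shown here.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# GoogLeNet with the classifier removed, exposing the 1024-d pool5 features.
googlenet = models.googlenet(pretrained=True)
googlenet.fc = torch.nn.Identity()
googlenet.eval()

transform = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(frames):
    """frames: list of HxWx3 uint8 numpy arrays sampled at ~2 fps."""
    batch = torch.stack([transform(f) for f in frames])
    return googlenet(batch)  # shape: (num_frames, 1024)
```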

zouying-sjtu avatar Jun 25 '19 06:06 zouying-sjtu

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

@divamgupta Is the code below the reason for learning nothing? https://github.com/KaiyangZhou/pytorch-vsumm-reinforce/blob/fdd03be93f090278424af789c120531e49aefa40/rewards.py#L15-L16

detach...

pcshih avatar Aug 10 '19 05:08 pcshih

Actually, the model is not learning anything. I ran the pre-trained model from the Theano implementation and it gives the expected numbers. But the funny thing is that all the scores the model generates are close to 0.5. If you change a single line to replace the model's prediction numpy array with a numpy array (of the same shape) filled with 0.5, you get the same results!

@divamgupta Is the code below the reason for learning nothing?

https://github.com/KaiyangZhou/pytorch-vsumm-reinforce/blob/fdd03be93f090278424af789c120531e49aefa40/rewards.py#L15-L16

detach...

Have you tried it without detach? Does your evaluation score go up during training? The dataset is too small: 20 videos for training, 5 for testing. Essentially, the scores on the 5 test videos vary a lot.

zouying-sjtu avatar Aug 14 '19 09:08 zouying-sjtu

The actions come from the line below:

https://github.com/KaiyangZhou/pytorch-vsumm-reinforce/blob/fdd03be93f090278424af789c120531e49aefa40/main.py#L125

Although Bernoulli samples have a grad_fn in PyTorch, their gradient is zero:

https://github.com/pytorch/pytorch/blob/6dfecc7e01842bb7e5024794fdce94480de7bb3a/tools/autograd/derivatives.yaml#L181

So even if you remove the detach, it does not help...
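For what it's worth, this is expected for REINFORCE rather than a bug: the gradient is never supposed to flow through the sampled actions or the reward. Both act as constants, and the learning signal comes entirely from the log-probability term. A minimal self-contained sketch (toy variables, not the repo's actual code):

```python
import torch
from torch.distributions import Bernoulli

torch.manual_seed(0)

# Toy stand-in for the policy network's per-frame selection probabilities.
logits = torch.zeros(10, requires_grad=True)
probs = torch.sigmoid(logits)

dist = Bernoulli(probs)
actions = dist.sample()           # non-differentiable sample; its grad is zero by design

# Stand-in reward computed from the detached sample: a constant w.r.t. the
# policy parameters, exactly as detaching seq/actions in rewards.py makes it.
reward = actions.detach().mean()

# REINFORCE: the gradient flows only through log_prob(actions), not the reward.
loss = -dist.log_prob(actions).sum() * reward
loss.backward()
print(logits.grad)                # gradient arrives via the log-prob term alone
```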

pcshih avatar Aug 15 '19 03:08 pcshih