CSMGAN
Something wrong with reproducing this code
Hello, I noticed that your pretrained models have suffixes such as model_4274 on TACoS and model_6852 on ActivityNet. Does that mean you finished training at step 4274 and 6852 respectively? I am now retraining from scratch without loading any pretrained model, but it never reaches that point, and performance keeps getting worse after epoch 100+. My main.py hyper-parameters are all your defaults from the README.md. Did you see gradual improvement when training with the same hyper-parameters? Looking forward to your reply, thank you.
And has anyone successfully reproduced the code?
Hi, ActivityNet only needs 20 epochs and TACoS needs 60 epochs.
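As a side note on the checkpoint naming question: a suffix like model_4274 frequently encodes a global step or iteration counter rather than an epoch number, which would be consistent with training for only 20–60 epochs. Below is a minimal PyTorch sketch of saving and restoring a checkpoint under that convention; the model, filename, and dictionary keys are hypothetical illustrations, not the actual CSMGAN code.

```python
import torch
import torch.nn as nn

# Stand-in model; the real project would use its CSMGAN network here.
model = nn.Linear(8, 2)

# Save a checkpoint whose filename encodes a global iteration counter
# (e.g. batches seen so far), not the epoch index.
iteration = 4274
ckpt_path = f"model_{iteration}"
torch.save({"iteration": iteration, "state_dict": model.state_dict()}, ckpt_path)

# Resume: restore the counter and weights before evaluating or
# continuing training from this point.
ckpt = torch.load(ckpt_path)
model.load_state_dict(ckpt["state_dict"])
print(ckpt["iteration"])  # 4274
```

If the released checkpoints follow a scheme like this, reaching "step 4274" from scratch depends on the dataset size and batch size, not on training for 4274 epochs.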
Hi, I used the default configuration from the README for training, but I could not reach the accuracy reported in the paper. Have you managed to reproduce it since?
I'm sorry, I gave up on this model after a lot of trying. That said, many papers from 2021 propose much better approaches.
Sorry for getting back to you so late. Thank you for your reply and hope your research goes well.