self-critical.pytorch

Issue about the performance

Open coldmanck opened this issue 6 years ago • 4 comments

Hi @ruotianluo

Thank you for your fantastic work. I followed your implementation to train the captioning model (TopDown) with the top-36 features I downloaded from the authors. Below is the result: [screenshot: results table, 2018-01-31] (the second row is taken from your README and the third row is my result). Do you have any idea why the performance still cannot reach the reported numbers?

coldmanck avatar Jan 31 '18 12:01 coldmanck

Did you match the hidden size? Are you using the same batch size? Use 20-100 instead of 36.

ruotianluo avatar Jan 31 '18 14:01 ruotianluo
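The "20-100 instead of 36" advice refers to the adaptive bottom-up features, where each image carries a variable number of 2048-d region vectors, so a batch has to be padded to a common box count with a mask. A minimal sketch of that padding step, in plain Python (illustrative only, not the repo's actual dataloader; `pad_features` is a hypothetical name):

```python
# Sketch: pad variable-length region features (e.g. 10-100 boxes per image,
# each a 2048-d vector) to the max box count in the batch, plus a 0/1 mask.
# Names and shapes are illustrative, not taken from self-critical.pytorch.

def pad_features(batch, feat_dim=2048):
    """batch: list of per-image feature lists, each [n_boxes][feat_dim]."""
    max_boxes = max(len(img) for img in batch)
    padded, masks = [], []
    for img in batch:
        n = len(img)
        padded.append(img + [[0.0] * feat_dim] * (max_boxes - n))
        masks.append([1.0] * n + [0.0] * (max_boxes - n))
    return padded, masks

# Two images with different box counts (tiny feat_dim for readability):
imgs = [[[1.0] * 4] * 3, [[2.0] * 4] * 5]
feats, masks = pad_features(imgs, feat_dim=4)
```

The mask is what lets the attention module ignore the zero-padded boxes, so images with 10 and 100 regions can share a batch.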

Yes, I found I did not follow the same hyperparameters (hidden size, batch size, learning rate decay). I will fix them and try again. Thanks very much! :)

coldmanck avatar Jan 31 '18 15:01 coldmanck
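Of the mismatched hyperparameters, learning rate decay is typically a step schedule: after a start epoch, multiply the rate by a fixed factor every few epochs. A generic sketch (the parameter names and default values here are illustrative assumptions, not the repo's exact option names):

```python
def decayed_lr(base_lr, epoch, decay_start=0, decay_every=3, decay_rate=0.8):
    """Step learning-rate decay: after `decay_start`, multiply `base_lr`
    by `decay_rate` once every `decay_every` epochs."""
    if epoch < decay_start:
        return base_lr
    n_decays = (epoch - decay_start) // decay_every
    return base_lr * (decay_rate ** n_decays)

# e.g. base 5e-4: unchanged for epochs 0-2, then 4e-4 for epochs 3-5, ...
```

Using a different decay schedule (or none) than the one the reported numbers were trained with is a common reason for a CIDEr gap of a point or two.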

> Yes I found I did not follow the same hyper parameters (hidden size, batch_size, learning rate decay). I will fix them and try again. Thanks very much! :)

Hello, what was the performance of your new run? I encountered the same problem.

loriqing avatar Apr 11 '19 02:04 loriqing

@coldmanck may I ask if "w/o bottom-up features" means using the 14×14×2048 features extracted from ResNet-101?

jamiechoi1995 avatar Jun 06 '19 09:06 jamiechoi1995
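For reference on the numbers in the question above: a 14×14×2048 ResNet-101 conv feature map flattens to 196 spatial vectors of dimension 2048, versus 36 fixed (or 10-100 adaptive) bottom-up region vectors. A quick check of the counts:

```python
# Grid features from ResNet-101's last conv block: a 14x14 spatial map
# with 2048 channels, flattened to 196 attention candidates of dim 2048.
grid_h, grid_w, channels = 14, 14, 2048
n_grid_vectors = grid_h * grid_w          # 196 spatial locations
total_values = n_grid_vectors * channels  # 401408 floats per image

# Bottom-up alternative: a fixed set of 36 detected regions, each 2048-d.
n_fixed_regions = 36
```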