InitialBug
@hengruo The reason for the low score may be a problem in your optim scheduler. I printed the optimizer's learning rate, but it doesn't seem to increase...
@hengruo I have tried many different hyperparameters, but I think the root problem is the optim function. You can simply try Adam with a fixed learning rate of 0.001. I think...
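A quick way to check the symptom described above is to print the schedule itself: during warm-up the learning rate should climb toward the base rate, and a flat curve points at a misconfigured scheduler. A minimal pure-Python sketch (this log-shaped warm-up form and the `warmup_lr` helper are assumptions for illustration, not necessarily what this repo uses):

```python
import math

def warmup_lr(step, base_lr=0.001, warmup_steps=1000):
    """Hypothetical warm-up schedule: log-shaped increase from ~0 to
    base_lr over the first warmup_steps, then constant base_lr."""
    if step < warmup_steps:
        return base_lr * math.log(step + 1) / math.log(warmup_steps)
    return base_lr

# The rate should strictly increase during warm-up; if the printed values
# stay flat (as reported above), the scheduler is likely the problem.
rates = [warmup_lr(s) for s in (1, 100, 500, 1000)]
assert all(a < b for a, b in zip(rates, rates[1:]))
print(rates[-1])  # prints: 0.001
```

Printing the rate once per step like this, before debugging the model itself, separates "scheduler bug" from "model bug" cheaply.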
@hengruo Any good news? I tested many different hyperparameters, but the best F1 score is only 64.3 so far. I wonder if the model has some problem?
@BangLiu The best F1 score is around 66, but when I keep training, the model overfits. I have re-implemented part of model.py, but most of the modules are...
I have fixed the errors. You can give it a try now.
Try -dynamic_dict or add the -tgt argument; it decides whether you want to expose the target vocabulary.
I'm sorry that I can't solve this problem, because I didn't use any techniques other than the naive attention mechanism in my paper. I think you can check...
You can use the index file to get the templates (summaries of the corresponding training articles); each line contains 30 indices for one sample. The score is the ROUGE-1...
The score.json looks like this:

```json
[{"art_idx": "0",
  "scores": [0.25, 0.1333333333, 0.25, 0.25, 0.1111111111, 0.125, 0.1428571429, 0.2666666667, 0.2857142857, 0.2666666667, 0.2666666667, 0.2, 0.25, 0.125, 0.2666666667, 0.2666666667, 0.2666666667, 0.2, 0.1538461538, 0.375, 0.625, 0.25, 0.5, 0.2666666667, 0.125, 0.5333333333, 0.2666666667, 0.2666666667, 0.25, 0.125],
  "tp_idx": [280563, 468740, 2977802, 2978740, 1305283, 810428, 143628, 3305902, 96755, 227145, 227356, 228893, 2668569, 2669230, 2669605, 2579854, 2579826, 86884, 54116, 88311, 186211, 342885, 414963, 558914, 1305361, 897042, 2608945, 2832328, 554728, 98514]}]
```

Actually I didn't use sample.index.json in my code; it is left over from a previous version, which I forgot to delete.
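Given that each score.json entry pairs 30 template indices ("tp_idx") with their ROUGE-1 scores ("scores") for one article, a typical use is to pick the best-scoring template per article. A minimal sketch (the field names follow the snippet above; the inline sample is truncated to 3 candidates for brevity, and loading from an actual file path is up to you):

```python
import json

# Truncated stand-in for one score.json entry (real entries have 30 items).
sample = json.loads(
    '[{"art_idx": "0",'
    ' "scores": [0.25, 0.625, 0.5],'
    ' "tp_idx": [280563, 186211, 414963]}]'
)

for entry in sample:
    # Find the candidate template with the highest ROUGE-1 score,
    # then report its index into the training set.
    best = max(range(len(entry["scores"])), key=lambda i: entry["scores"][i])
    print(entry["art_idx"], entry["tp_idx"][best], entry["scores"][best])
    # prints: 0 186211 0.625
```

The same loop works unchanged on the full file via `json.load(open(...))`, since each element carries its own "art_idx".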