Universal-Transformer-Pytorch
Unable to reproduce results (tested on Tasks 1 & 2)
Hi,
I ran the experiments on the 10K setting, but my results are way worse than the reported ones.
I didn't change any of the default parameters except for setting the `tenK` param in `main.py` (line 64) to `True`. Then I ran `python main.py --act --verbose --cuda`.
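For reference, the change amounts to flipping a single default. A minimal sketch of what I did, assuming `tenK` is assigned directly around line 64 of `main.py` (the actual surrounding code may differ, e.g. it could be an argparse default instead):

```python
# main.py, around line 64 -- the only modification I made.
# Sketch only; the real code around this line may look different.
tenK = True  # use the 10K-example bAbI setting instead of the 1K one
```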
There are no errors, and the results from 10 runs are:

Task 1 (Noam: False, ACT: True): Max 0.492, Mean 0.4235, Std 0.0808
Task 2 (Noam: False, ACT: True): Max 0.323, Mean 0.2688, Std 0.0448
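In case the aggregation matters: Max/Mean/Std are taken over the final results of the 10 runs, along the lines of the sketch below (placeholder values, not my actual numbers; I'm also assuming a sample standard deviation here):

```python
import statistics

# Final results from the 10 runs -- placeholder values only,
# not the numbers reported above.
results = [0.42, 0.39, 0.49, 0.41, 0.45, 0.38, 0.44, 0.40, 0.43, 0.42]

print("Max:", max(results))
print("Mean:", statistics.mean(results))
print("Std:", statistics.stdev(results))  # sample std; use pstdev for population
```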
I have not tried the other tasks (though at least Task 3 seems to behave the same), since something seems to be going wrong in general. The results are identical in a non-CUDA setup and worse with ACT disabled.
I'm running with the following versions:

- python 3.6.8
- pytorch 0.4.0 (also tried 0.4.1 and 1.0.0)
- torchtext 0.3.1
- argparse 1.4.0
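In case it helps with comparing environments, the versions above can be confirmed from Python itself (`torch` and `torchtext` both expose `__version__`):

```python
# Quick environment check.
import torch
import torchtext

print("torch:", torch.__version__)
print("torchtext:", torchtext.__version__)
print("CUDA available:", torch.cuda.is_available())
```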
Thanks for your help!
Hi,
Yeah, I tried again and I cannot reproduce the same results either. I think I used different hyper-parameters. I did not sync what I had on my server, so I need to re-run some hyper-parameter settings.
Sorry for the inconvenience; I will work on this in the coming days.
Andrea
Hi, has this been resolved? I have not been able to reproduce the posted results either.