multilingual-modeling
Inconsistent Evaluation Results
I am getting different results depending on whether I run training and evaluation together or separately.
Rerunning evaluation after training (by removing --do_train) gives me a better result than running training and evaluation in a single run.
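For reference, a minimal sketch of the two invocations being compared. The script name, model, and output directory are placeholders, not the exact command from this repository; it assumes a HuggingFace-style training script that accepts --do_train and --do_eval:

```bash
# Run A: training and evaluation in a single invocation
# (run_training.py, <model>, and <output_dir> are placeholders)
python run_training.py \
  --model_name_or_path <model> \
  --output_dir <output_dir> \
  --do_train \
  --do_eval

# Run B: evaluation rerun afterwards, identical except that --do_train is removed
python run_training.py \
  --model_name_or_path <model> \
  --output_dir <output_dir> \
  --do_eval
```

Run B reports better metrics than the evaluation produced at the end of Run A, even though nothing else in the command changes.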