
computation of validation accuracy in training vs. split accuracy in translation

rfernand2 opened this issue 2 years ago · 1 comment

When I debug a training run (train.py), it appears that the validation accuracy is being calculated by comparing all predicted words in the batch to all target words (independent of sentence/sample boundaries).
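For reference, a minimal sketch of what such token-level accuracy looks like (an illustration of the described behavior, not the actual OpenNMT-py code; the padding index and tensor shapes are assumptions):

```python
import torch

PAD_IDX = 1  # assumed padding index; in OpenNMT-py this would come from the target vocab

def token_accuracy(logits: torch.Tensor, target: torch.Tensor) -> float:
    """Token-level accuracy over a whole batch, ignoring sentence boundaries.

    logits: (num_tokens, vocab_size) scores for every target position in the batch
    target: (num_tokens,) gold token ids
    """
    pred = logits.argmax(dim=-1)
    non_pad = target.ne(PAD_IDX)
    n_correct = pred.eq(target).masked_select(non_pad).sum().item()
    n_words = int(non_pad.sum().item())
    return 100.0 * n_correct / max(n_words, 1)
```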

When I debug a translation run (translate.py), it appears that the accuracy is calculated by counting exact sentence matches and dividing by the total number of sentences.
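And a sketch of the sentence-level "exact match" metric described here (a hypothetical helper, not an OpenNMT-py function):

```python
from typing import List

def exact_match_accuracy(hypotheses: List[str], references: List[str]) -> float:
    """A hypothesis counts as correct only if the whole sentence equals the reference."""
    assert len(hypotheses) == len(references)
    matches = sum(h.strip() == r.strip() for h, r in zip(hypotheses, references))
    return 100.0 * matches / max(len(references), 1)
```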

I would expect both of these to use the same method, defaulting to "exact match" (as done in translation). Can you verify whether my observations are accurate and, if so, change training to use "exact match" as well?

rfernand2 · Mar 29 '22

I don't remember such an "exact match" accuracy in translation. Which part of the code are you referring to? In any case, such an exact match can't easily be computed during validation, since no "true" translation is done at that point. #1641
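To illustrate the point about validation: a rough sketch under assumed model/decoder interfaces (not actual OpenNMT-py APIs). Validation scores the gold target under teacher forcing, so token-level accuracy falls out of the logits directly, whereas exact match would require fully decoding every validation sentence, which is what translate.py does.

```python
import torch

def validation_token_accuracy(model, src, tgt_in, tgt_out, pad_idx=1):
    # Teacher forcing: the gold prefix (tgt_in) is fed to the decoder, so we get
    # one distribution per gold position and can score tokens directly.
    logits = model(src, tgt_in)            # assumed interface: (num_tokens, vocab)
    pred = logits.argmax(dim=-1)
    mask = tgt_out.ne(pad_idx)
    return 100.0 * pred.eq(tgt_out)[mask].float().mean().item()

# Exact match would instead need a free-running decode of each source sentence,
# e.g. (hypothetical API):
#   hyps = beam_search_decode(model, src)
#   exact = [h == ref for h, ref in zip(hyps, refs)]
# i.e. an actual translation pass, which is what translate.py performs.
```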

francoishernandez · Apr 06 '22