codrep-2019
participant %1: Egor Bogomolov, JetBrains Research
Warm welcome to @egor-bogomolov and colleagues!!
It's time to report the very first results! As explained in #5, I'm struggling with ±1 character mistakes, so I report two results (a sketch of the tolerant evaluation follows the list):
- MAP = 0.342 – evaluation as it is
- MAP = 0.435 – evaluation that counts a prediction off by ±1 character as a correct answer
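For concreteness, here is a minimal sketch of how such a ±1-tolerant, rank-based evaluation could be computed. The function names and the input format (one ranked list of predicted offsets per file plus a single ground-truth offset) are my own assumptions, not the official codrep-2019 evaluation code:

```python
# Hypothetical sketch, not the official codrep-2019 evaluator.
# Assumes each file's predictions are a ranked list of character offsets.

def reciprocal_rank(predicted_offsets, true_offset, tolerance=0):
    """Return 1/rank of the first prediction within `tolerance` characters
    of the ground truth, or 0.0 if nothing matches."""
    for rank, offset in enumerate(predicted_offsets, start=1):
        if abs(offset - true_offset) <= tolerance:
            return 1.0 / rank
    return 0.0

def mean_reciprocal_rank(predictions, truths, tolerance=0):
    """Average the per-file reciprocal ranks over the whole dataset."""
    scores = [reciprocal_rank(p, t, tolerance) for p, t in zip(predictions, truths)]
    return sum(scores) / len(scores)

# Strict vs. ±1-tolerant scoring of the same predictions:
preds = [[10, 42, 7], [3, 99]]
truth = [43, 99]
print(mean_reciprocal_rank(preds, truth, tolerance=0))  # 0.25 = (0 + 1/2) / 2
print(mean_reciprocal_rank(preds, truth, tolerance=1))  # 0.50 = (1/2 + 1/2) / 2
```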
NB: I believe that the metric is called mean reciprocal rank (MRR), not mean average precision (MAP).
Update:
- ±1 MAP = 0.654
Impressive result!
> NB: I believe that the metric is called mean reciprocal rank (MRR), not mean average precision (MAP).
Thanks, I did not know that. It seems that MRR is just a special case of MAP where there is only one ground-truth item per query (see the formulas below).
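A short worked comparison using the standard textbook definitions (the notation here is mine, not taken from the competition docs): over queries $i = 1,\dots,|Q|$, with $\mathrm{rank}_i$ the position of the first relevant item for query $i$,

```latex
% With a single relevant item per query, AP_i collapses to 1/rank_i,
% so MAP and MRR coincide.
\[
  \mathrm{MRR} = \frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{\mathrm{rank}_i},
  \qquad
  \mathrm{AP}_i = \frac{1}{\mathrm{rank}_i}
  \;\Rightarrow\;
  \mathrm{MAP} = \frac{1}{|Q|}\sum_{i=1}^{|Q|}\mathrm{AP}_i = \mathrm{MRR}.
\]
```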
After the dataset fix, I've recomputed the score. The best so far is:
- MRR = 0.7539