
participant: Egor Bogomolov, JetBrains Research

monperrus opened this issue 5 years ago • 4 comments

Warm welcome to @egor-bogomolov and colleagues!!

monperrus avatar Apr 26 '19 05:04 monperrus

It's time to report the very first results! As explained in #5, I'm struggling with ±1 character mistakes. That's why I report two results:

  • MAP = 0.342 – evaluation as is
  • MAP = 0.435 – evaluation that counts an off-by-one prediction as a correct answer

NB: I believe that the metric is called mean reciprocal rank (MRR), not mean average precision.

egor-bogomolov avatar Apr 28 '19 22:04 egor-bogomolov
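The two numbers above differ only in whether a prediction one character away from the true position counts as a hit. A minimal sketch of such a tolerance-aware reciprocal-rank evaluation (this is not the official codrep evaluation script; the function name and signature are illustrative):

```python
def mrr(predictions, truths, tolerance=0):
    """Mean reciprocal rank over a set of tasks.

    predictions: list of ranked position lists (best guess first)
    truths: list of true error positions, one per task
    tolerance: a prediction within `tolerance` characters of the
               truth counts as correct (0 = strict, 1 = ±1 mode)
    """
    total = 0.0
    for ranked, truth in zip(predictions, truths):
        for rank, pos in enumerate(ranked, start=1):
            if abs(pos - truth) <= tolerance:
                total += 1.0 / rank  # credit the first acceptable hit
                break
    return total / len(predictions)
```

With `tolerance=0` only exact positions score; with `tolerance=1` the off-by-one predictions discussed above are credited, which is why the second number is higher.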

Update:

  • ±1 MAP = 0.654

egor-bogomolov avatar Apr 28 '19 23:04 egor-bogomolov

Impressive result!

> NB: I believe that the metric is called mean reciprocal rank (MRR), not mean average precision.

Thanks, I did not know that. It seems that MRR is just a special case of MAP, where we only have one ground-truth item.

chenzimin avatar Apr 30 '19 09:04 chenzimin
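The point that MRR is MAP with a single ground-truth item can be checked directly: average precision sums precision at each relevant hit, so with exactly one relevant item it collapses to the reciprocal of that item's rank. A small sketch (illustrative helper, not part of the codrep tooling):

```python
def average_precision(ranked, relevant):
    """Average precision of one ranked list against a set of relevant items."""
    hits = 0
    score = 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            score += hits / rank  # precision at this recall point
    return score / len(relevant) if relevant else 0.0
```

When `relevant` contains a single item at rank r, the loop contributes exactly 1/r, i.e. the reciprocal rank, so averaging over tasks gives MRR.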

After the dataset fix, I've recomputed the score. The best so far is:

  • MRR = 0.7539

egor-bogomolov avatar May 09 '19 08:05 egor-bogomolov