Participant #12: Team COINSE, KAIST
This thread was created for discussions with Team COINSE (Gabin An, Shin Yoo) from KAIST, South Korea. Welcome!
Hi, you can find our program for the intermediate ranking here. I've just invited @chenzimin and @monperrus as collaborators :)
Hi
To ensure fairness, I will use the latest commit before 2018-07-04 23:59 CEST for the intermediate ranking.
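For reference, a minimal sketch of one way to pick that commit (this is an illustration, not part of the official tooling; it assumes the repository's default branch is named master and Python 3.7+):

```python
import subprocess

# Hypothetical helper: find the last commit made before the intermediate-ranking
# deadline (2018-07-04 23:59 CEST, i.e. UTC+2). Assumes the branch is "master".
deadline = "2018-07-04 23:59 +0200"
commit = subprocess.run(
    ["git", "rev-list", "-n", "1", "--before", deadline, "master"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print("Commit used for the intermediate ranking:", commit)
```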
Hi, I have a question. Is the score you've posted the Average Line Error?
Total files: 17132
Average line error: 0.11328093683827162 (the lower, the better)
Recall@1: 0.8841349521363531 (the higher, the better)
This is the result of our program on Dataset 4, and our posted score is 0.0884xx.
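For reference, a rough sketch of how the two reported metrics could be computed. The per-file error is assumed here to be tanh of the absolute distance between the predicted and correct line numbers; the competition's evaluate.py is the authoritative definition, so treat this only as an illustration:

```python
import math

def average_line_error(predictions, ground_truth):
    # Mean per-file error over all files (lower is better).
    # Assumption: per-file error = tanh(|predicted line - correct line|).
    errors = [math.tanh(abs(predictions[f] - ground_truth[f])) for f in ground_truth]
    return sum(errors) / len(errors)

def recall_at_1(predictions, ground_truth):
    # Fraction of files whose predicted line is exactly correct (higher is better).
    hits = sum(1 for f in ground_truth if predictions[f] == ground_truth[f])
    return hits / len(ground_truth)
```

Both functions take dicts mapping each file to a single predicted or correct line number.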
Oh, sorry. That was the wrong program :( So there's no problem after all.
I just ran my real program on Dataset 4, and here is the output:
Total files: 17132
Average line error: 0.08635175672664033 (the lower, the better)
Recall@1: 0.9124445482138688 (the higher, the better)
The loss value is still slightly different from the posted score.
The program does not involve any randomness.
I used the evaluate.py file from the latest commit.
Could you check it? :)
Hi,
I ran your program again and got the same result as before:
Total files: 17132
Average line error: 0.0884776175201 (the lower, the better)
Recall@1: 0.910226476769 (the higher, the better)
Could you check the md5 checksum of predictor.py, just to be sure that we have the same version? My version (pulled from your repo) is:
43a98cc7afc3c3c494ee670ec0b17eb4 predictor.py
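For reference, a small sketch of how either side can compute that checksum locally (equivalent to running `md5sum predictor.py`; the script name and usage below are illustrative):

```python
import hashlib
import sys

def md5_of(path):
    # Stream the file in chunks so large files are hashed without loading them fully.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    # Usage: python checksum.py predictor.py
    print(md5_of(sys.argv[1]), sys.argv[1])
```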
Oh, I slightly modified the file after submitting, but I hadn't noticed it... :( Sorry, and thank you for checking again!