METEOR comms error?
Hi,
You read the score back from the METEOR jar on this line: https://github.com/tylin/coco-caption/blob/master/pycocoevalcap/meteor/meteor.py#L68
but I believe METEOR reports the score twice (and it should be read twice): it first gives back the score for all calls to SCORE, and then it gives the average score over all calls to SCORE. This function doesn't seem to be used anywhere in the implementation right now, but if anyone wanted to use it to evaluate a single sentence against a few references (e.g. me :) ), then the call
`score = float(self.meteor_p.stdout.readline().strip())`
should be repeated once more right after the first one (they both give the same result). In any case, just a quick note; this code doesn't seem to be used.
Also, I'm curious why compute_score doesn't use _score, simply iterating over all keys in res or gts and averaging the results. It seems the code would be about 30 lines shorter. Is it maybe slower that way?
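For reference, a minimal sketch of the fix described above (assuming the METEOR subprocess has already been launched as `self.meteor_p` in `-stdio` mode, as in the repo's `Meteor` class); the duplicated `readline()` is the whole point:

```python
def _score(self, hypothesis_str, reference_list):
    # "SCORE ||| ref 1 ||| ... ||| ref n ||| hypothesis" asks the jar for the
    # sufficient statistics of this single segment.
    hypothesis_str = hypothesis_str.replace('|||', '').replace('  ', ' ')
    score_line = ' ||| '.join(('SCORE', ' ||| '.join(reference_list), hypothesis_str))
    self.meteor_p.stdin.write('{}\n'.format(score_line))
    stats = self.meteor_p.stdout.readline().strip()

    # "EVAL ||| stats" asks the jar to turn those statistics into a score.
    self.meteor_p.stdin.write('EVAL ||| {}\n'.format(stats))

    # The jar answers with two lines: the segment score and then the aggregate
    # score over all SCORE calls so far, so both lines have to be read.
    score = float(self.meteor_p.stdout.readline().strip())
    score = float(self.meteor_p.stdout.readline().strip())
    return score
```

And the simpler `compute_score` alluded to in the last question would be roughly this (hypothetical, not the repo's batched implementation):

```python
def compute_score_simple(self, gts, res):
    # Score each image individually and average; presumably slower, since it
    # makes one SCORE/EVAL round-trip per image instead of one batched EVAL call.
    imgIds = gts.keys()
    scores = [self._score(res[i][0], gts[i]) for i in imgIds]
    return sum(scores) / len(scores), scores
```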
Thanks for pointing this out! Will check/fix that.
-Xinlei
I'm having an issue in this part of the code (meteor.py, lines 41 to 43):
`for i in range(0, len(imgIds)):
    scores.append(float(self.meteor_p.stdout.readline().strip()))
score = float(self.meteor_p.stdout.readline().strip())`
It raises 'ValueError: could not convert string to float: '. I traced everything back and all the values look correct; even when I `print('{}\n'.format(eval_line))` on the previous line it prints what seems to be the correct value, but when it reaches the `scores.append` or `score =` lines it throws that error.
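A quick, hypothetical way to see what the jar is actually sending back before the float conversion blows up (names taken from the snippet above; not part of the repo):

```python
# Drop-in replacement for the failing loop: log the raw reply from the METEOR
# jar before trying to parse it, so an empty or non-numeric line is visible.
for i in range(0, len(imgIds)):
    raw = self.meteor_p.stdout.readline()
    print('METEOR reply %d: %r' % (i, raw))  # %r makes empty strings obvious
    scores.append(float(raw.strip()))
raw = self.meteor_p.stdout.readline()
print('METEOR final reply: %r' % raw)
score = float(raw.strip())
```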
Hmm, it seems the output is not a number, so it cannot be parsed as a float. What input do you give it?
I will do a major update today/tomorrow to fix all the lingering issues.
-Xinlei
I'm trying to run arctic-capgen-vid, this algorithm on GitHub: https://github.com/yaoli/arctic-capgen-vid; it uses youtube2text_iccv15.
Fixed that. And just to answer your question: yes, we also modified the Java code a bit to make it faster, so it is slightly different from the original METEOR jar file.
-Xinlei
Hmm, so is the subprocess potentially killed after computing the METEOR score once? Did it output anything? Does it happen the first time you call it, or only on later calls?
-Xinlei
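One hypothetical way to check that (not part of the repo) is to poll the subprocess right before the failing read and dump its stderr if it has already exited:

```python
# If the jar died (e.g. the JVM ran out of memory), poll() returns its exit
# code and stdout will only yield empty strings from then on.
rc = self.meteor_p.poll()
if rc is not None:
    err = self.meteor_p.stderr.read() if self.meteor_p.stderr else ''
    print('METEOR jar exited with code %s, stderr:\n%s' % (rc, err))
else:
    print('METEOR jar is still running')
```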
Yes, the program gives this error the first time it is called. Here is what the terminal shows me:
loading youtube2text googlenet features
uneven minibath chunking, overall 20, last one 11
uneven minibath chunking, overall 20, last one 8
init COCO-EVAL scorer
tokenization...
PTBTokenizer tokenized 38623 tokens at 69821,56 tokens per second.
PTBTokenizer tokenized 614 tokens at 8616,53 tokens per second.
setting up scorers...
computing Bleu score...
{'reflen': 522, 'guess': [515, 415, 315, 215], 'testlen': 515, 'correct':
[402, 227, 119, 55]}
ratio: 0.986590038312
Bleu_1: 0.770
Bleu_2: 0.645
Bleu_3: 0.537
Bleu_4: 0.445
computing METEOR score...
Traceback (most recent call last):
  File "metrics.py", line 202, in <module>
    test_cocoeval()
  File "metrics.py", line 198, in test_cocoeval
    valid_score, test_score = score_with_cocoeval(samples_valid, samples_test, engine)
  File "metrics.py", line 91, in score_with_cocoeval
    valid_score = scorer.score(gts_valid, samples_valid, engine.valid_ids)
  File "/home/piim/Documentos/Deeplearningextraction/arctic-capgen-vid-master/cocoeval.py", line 43, in score
    score, scores = scorer.compute_score(gts, res)
  File "/usr/lib/pymodules/python2.7/pycocoevalcap/meteor/meteor.py", line 42, in compute_score
    scores.append(float(self.meteor_p.stdout.readline().strip()))
ValueError: could not convert string to float:
EVAL ||| 3.0 4.0 1.0 2.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 ||| 5.0 6.0 2.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 5.0 5.0 ||| 4.0 6.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 5.0 3.0 2.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 9.0 3.0 5.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 2.0 5.0 5.0 ||| 6.0 7.0 3.0 2.0 2.0 2.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 2.0 5.0 5.0 ||| 5.0 5.0 2.0 2.0 3.0 3.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 6.0 6.0 3.0 4.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 1.0 1.0 2.0 1.0 4.0 4.0 ||| 4.0 4.0 2.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 7.0 5.0 4.0 3.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 6.0 5.0 3.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 4.0 4.0 2.0 2.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 4.0 3.0 2.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 6.0 5.0 3.0 2.0 3.0 3.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 6.0 5.0 3.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 3.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 6.0 6.0 ||| 3.0 8.0 1.0 4.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 4.0 3.0 2.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 4.0 4.0 2.0 2.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 0.0 0.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 3.0 3.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 1.0 6.0 6.0 ||| 7.0 5.0 4.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 2.0 4.0 4.0 ||| 5.0 6.0 2.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 4.0 3.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 3.0 5.0 1.0 3.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 1.0 2.0 3.0 ||| 7.0 6.0 4.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 0.0 2.0 5.0 4.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 7.0 3.0 4.0 1.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 2.0 2.0 ||| 5.0 4.0 2.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 2.0 2.0 2.0 ||| 7.0 8.0 4.0 4.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 1.0 3.0 
4.0 ||| 6.0 6.0 3.0 3.0 3.0 3.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.0 6.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 5.0 6.0 2.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 0.0 0.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 3.0 3.0 ||| 4.0 4.0 2.0 2.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 2.0 2.0 ||| 4.0 3.0 2.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 0.0 1.0 4.0 3.0 ||| 6.0 4.0 3.0 2.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 5.0 4.0 2.0 1.0 2.0 2.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 4.0 3.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 5.0 3.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 4.0 7.0 2.0 4.0 2.0 2.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 9.0 3.0 5.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 6.0 6.0 3.0 3.0 3.0 3.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.0 6.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 4.0 3.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 6.0 5.0 3.0 2.0 3.0 3.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 3.0 4.0 4.0 ||| 3.0 7.0 2.0 4.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 2.0 1.0 1.0 3.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 6.0 4.0 3.0 2.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 3.0 3.0 ||| 4.0 5.0 2.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 8.0 3.0 4.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 6.0 5.0 3.0 2.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 2.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 6.0 6.0 ||| 6.0 8.0 3.0 4.0 3.0 3.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 6.0 6.0 ||| 4.0 3.0 2.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 ||| 6.0 5.0 3.0 2.0 2.0 2.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 7.0 7.0 4.0 3.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 6.0 6.0 3.0 3.0 3.0 3.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 5.0 7.0 2.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 4.0 4.0 ||| 4.0 4.0 2.0 2.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 3.0 3.0 ||| 4.0 6.0 2.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 5.0 4.0 2.0 2.0 1.0 1.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0 3.0 
3.0 ||| 3.0 6.0 1.0 3.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 3.0 6.0 2.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 2.0 ||| 6.0 6.0 3.0 2.0 1.0 1.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 2.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 3.0 3.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.0 6.0 ||| 3.0 6.0 1.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 2.0 ||| 6.0 8.0 3.0 4.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 6.0 6.0 ||| 6.0 5.0 3.0 2.0 3.0 3.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 6.0 7.0 3.0 4.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 5.0 5.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 1.0 6.0 6.0 ||| 4.0 4.0 2.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 3.0 4.0 1.0 2.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 3.0 6.0 1.0 3.0 2.0 2.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 6.0 5.0 3.0 2.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 3.0 3.0 ||| 4.0 5.0 2.0 2.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 2.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 6.0 6.0 3.0 3.0 3.0 3.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 6.0 6.0 ||| 3.0 4.0 2.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 1.0 1.0 3.0 3.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 6.0 4.0 3.0 2.0 0.0 0.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 2.0 2.0 2.0 ||| 6.0 5.0 3.0 2.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 3.0 3.0 ||| 5.0 8.0 2.0 4.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 1.0 0.0 2.0 4.0 4.0 ||| 6.0 7.0 3.0 3.0 1.0 1.0 2.0 2.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 4.0 4.0 ||| 3.0 3.0 2.0 1.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 2.0 2.0 ||| 6.0 6.0 3.0 3.0 2.0 2.0 3.0 3.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 5.0 5.0 ||| 3.0 4.0 1.0 2.0 1.0 1.0 1.0 1.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 1.0 1.0 0.0 1.0 1.0 3.0 4.0
Also, I'd like to point out that the university computer I'm using is not very powerful; could this affect the results?
---------- Forwarded message ----------
From: Li Yao [email protected]
Date: 2016-03-31 17:00 GMT-03:00
Subject: Re: Error on meteor.py
To: Guilherme Marchezini [email protected]
I just ran 'python metrics.py' for you; here is what I have:
loading youtube2text googlenet features
uneven minibath chunking, overall 20, last one 11
uneven minibath chunking, overall 20, last one 8
init COCO-EVAL scorer
tokenization...
PTBTokenizer tokenized 38623 tokens at 180240.72 tokens per second.
PTBTokenizer tokenized 614 tokens at 12807.37 tokens per second.
setting up scorers...
computing Bleu score...
{'reflen': 522, 'guess': [515, 415, 315, 215], 'testlen': 515, 'correct': [402, 227, 119, 55]}
ratio: 0.986590038312
Bleu_1: 0.770
Bleu_2: 0.645
Bleu_3: 0.537
Bleu_4: 0.445
computing METEOR score...
METEOR: 0.299
computing Rouge score...
ROUGE_L: 0.625
computing CIDEr score...
CIDEr: 0.713
CIDEr: 0.713 Bleu_4: 0.445 Bleu_3: 0.537 Bleu_2: 0.645 Bleu_1: 0.770 ROUGE_L: 0.625 METEOR: 0.299
tokenization...
PTBTokenizer tokenized 248650 tokens at 746907.23 tokens per second.
PTBTokenizer tokenized 4122 tokens at 77436.02 tokens per second.
setting up scorers...
computing Bleu score...
{'reflen': 3496, 'guess': [3453, 2783, 2113, 1443], 'testlen': 3453, 'correct': [2629, 1446, 720, 280]}
ratio: 0.987700228833
Bleu_1: 0.752
Bleu_2: 0.621
Bleu_3: 0.506
Bleu_4: 0.397
computing METEOR score...
METEOR: 0.289
computing Rouge score...
ROUGE_L: 0.622
computing CIDEr score...
CIDEr: 0.575
CIDEr: 0.575 Bleu_4: 0.397 Bleu_3: 0.506 Bleu_2: 0.621 Bleu_1: 0.752 ROUGE_L: 0.622 METEOR: 0.289
{'CIDEr': 0.71333128077975017, 'Bleu_4': 0.44461721461682724, 'Bleu_3': 0.5370004285027623, 'Bleu_2': 0.6446073608518809, 'Bleu_1': 0.7700444449712462, 'ROUGE_L': 0.62502827978890485, 'METEOR': 0.29888300164852194}
{'CIDEr': 0.57527134381857159, 'Bleu_4': 0.39717801922974694, 'Bleu_3': 0.506390778364836, 'Bleu_2': 0.621178340686868, 'Bleu_1': 0.7519444615107921, 'ROUGE_L': 0.62158691569682389, 'METEOR': 0.28890374293039817}
I also printed with `fil.write('{}\n'.format(eval_line))` and the result was exactly the same. I also printed `fil.write(self.meteor_p.stdout.readline().strip())` and tried
`for i in range(0, len(imgIds)):
    fil.write(self.meteor_p.stdout.readline().strip())
    print(self.meteor_p.stdout.readline().strip())
    #scores.append(float(self.meteor_p.stdout.readline().strip()))`
and the output was an empty file with 1 line, and the print gives me 100 empty lines. len(imgIds) is 100.
Hmm, could you check the input to the java file? One line means the input is empty...
-Xinlei
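A hypothetical way to do that check, assuming the `_stat`/`compute_score` split in meteor.py: log every line written to the jar's stdin and every line read back, then compare the counts against `len(imgIds)`:

```python
def _stat(self, hypothesis_str, reference_list):
    # Same body as the original _stat, plus logging of the jar's input/output.
    hypothesis_str = hypothesis_str.replace('|||', '').replace('  ', ' ')
    score_line = ' ||| '.join(('SCORE', ' ||| '.join(reference_list), hypothesis_str))
    with open('meteor_io_debug.txt', 'a') as dbg:
        dbg.write('IN : {}\n'.format(score_line))
    self.meteor_p.stdin.write('{}\n'.format(score_line))
    reply = self.meteor_p.stdout.readline().strip()
    with open('meteor_io_debug.txt', 'a') as dbg:
        dbg.write('OUT: {}\n'.format(reply))
    return reply
```

If the IN lines look fine but the OUT lines are empty, the jar itself is not answering, which points at the Java side rather than the Python side.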
Yes, maybe. How much memory does your computer have?
-Xinlei
Well, I think I have 4 GB. I'll download it and test on a better computer that I was granted access to.
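If memory is the culprit, one hypothetical knob is the JVM heap size that meteor.py requests when it builds the jar command (the version I have seen passes '-Xmx2G'); asking for a smaller heap may help on a 4 GB machine:

```python
# Hypothetical edit where meteor_cmd is built in meteor.py: request a smaller heap.
self.meteor_cmd = ['java', '-jar', '-Xmx1G', METEOR_JAR,
                   '-', '-', '-stdio', '-l', 'en', '-norm']
```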
How/where exactly can I check that?
@endernewton Hi, when I run this: `self.meteor_p.stdin.write('{}\n'.format(eval_line))` I just get this error: `IOError: [Errno 22] Invalid argument`. Can you help me resolve this problem?
@zhangzhizz You can try reinstalling Java 8; maybe that's the problem.
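An IOError when writing to the jar's stdin usually means the subprocess never started or has already exited, which is why checking the Java installation is a sensible first step. A hypothetical pre-flight check (not part of the repo) before constructing the Meteor scorer:

```python
import subprocess

# Make sure a working Java (the advice above suggests Java 8) is on PATH
# before the METEOR jar is launched.
try:
    out = subprocess.check_output(['java', '-version'], stderr=subprocess.STDOUT)
    print(out.decode('utf-8', 'replace'))
except OSError:
    print('java not found on PATH -- install or repair Java before running METEOR')
```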