NER-Evaluation
An implementation of full named-entity evaluation metrics based on SemEval'13 Task 9: evaluation is done not at the tag/token level, but over all the tokens that make up a named entity.
Hi, I found an issue when counting `spurious`. In lines 317-322 of `ner_eval.py`, if a spurious entity is found, `+1` is added for all entity types. Should it be...
Fixed the spurious count to match the note in line 310: if the spurious detection label is in the label set, add +1 to the spurious count for that label,...
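The fix described above can be sketched as follows. The helper name and data layout here are assumptions for illustration, not the actual `ner_eval.py` code: the point is that a spurious prediction increments only its own label's counter, and only when that label is in the label set.

```python
# Hypothetical sketch (names assumed, not taken from ner_eval.py):
# per-label spurious counting, as described in the note in line 310.
def count_spurious(pred_label, label_set, evaluation_by_type):
    """Increment the spurious count only for the predicted label,
    and only when that label belongs to the evaluation label set."""
    if pred_label in label_set:
        evaluation_by_type[pred_label]["spurious"] += 1

counts = {"PER": {"spurious": 0}, "LOC": {"spurious": 0}}
count_spurious("PER", {"PER", "LOC"}, counts)
# only the PER counter is incremented; LOC stays at 0
```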
The `break` statements you have included in the loops for Scenario V and Scenario VI induce errors when counting overlapping entities.
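A minimal sketch of the problem being reported, with an assumed loop structure (the actual loops in `ner_eval.py` are more involved): a predicted span may overlap several true spans, so breaking out of the inner loop after the first match undercounts the remaining overlaps.

```python
# Two adjacent true entity spans and one predicted span covering both.
true_spans = [range(0, 3), range(3, 6)]
pred_span = range(0, 6)

overlapped = []
for t in true_spans:
    if set(t) & set(pred_span):
        overlapped.append(t)
        # a `break` here would record only the first overlapping span

# without the break, both overlapping true spans are counted
```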
Hi, I assumed `find_overlap` is supposed to find whether two ranges have any portion in common; am I wrong? In your function, if the input is true_range = range(1, 2) and...
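Assuming `find_overlap` amounts to a set intersection of the two ranges (a guess based on this report, not the actual implementation), the issue would be Python's half-open `range` semantics: two entities that touch only at a shared boundary offset yield an empty intersection unless the end offset is made inclusive.

```python
def find_overlap(true_range, pred_range):
    """Offsets common to both ranges; an empty set means no overlap.
    (Sketch of the assumed behaviour, not the ner_eval.py code.)"""
    return set(true_range) & set(pred_range)

# With half-open ranges, range(1, 2) and range(2, 3) share no element,
# so no overlap is reported even though both contain offset boundaries
# that an inclusive reading would treat as touching at offset 2.
no_overlap = find_overlap(range(1, 2), range(2, 3))

# Building the ranges with an inclusive end (end + 1) does report
# the shared offset 2 as an overlap.
with_overlap = find_overlap(range(1, 2 + 1), range(2, 3 + 1))
```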
Hi, I think there are a few scenarios in which you are not updating `true_which_overlapped_with_pred` properly and your `for` loops break too early. For example, in line 267 of the `ner_eval.py` file, you are appending...
`pred_range = range(pred.start_offset, pred.end_offset)`, so if pred.start_offset = 1 and pred.end_offset = 1, the range is empty, but it should contain one offset. To fix: `pred_range = range(pred.start_offset, pred.end_offset + 1)`.
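The off-by-one above can be demonstrated in isolation. The `Entity` class here is a stand-in for whatever entity object `ner_eval.py` uses; only the `start_offset`/`end_offset` attributes from the report are assumed.

```python
# Stand-in for the entity objects used in ner_eval.py (assumed shape).
class Entity:
    def __init__(self, start_offset, end_offset):
        self.start_offset = start_offset
        self.end_offset = end_offset

pred = Entity(1, 1)  # a single-token entity at offset 1

# Half-open range: empty for a one-token entity, so the entity
# effectively covers no offsets at all.
empty = range(pred.start_offset, pred.end_offset)

# Treating end_offset as inclusive (+1) covers the single token.
fixed = range(pred.start_offset, pred.end_offset + 1)
```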