WaNePr
To evaluate performance, should one count only the final answer MID, or require both the topic entity MID and the inferential chain to be correct? Which did you use?
I assume that finding the 'answer MID' is equivalent to finding the 'topic entity MID + inferential chain'. Is that correct?
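For concreteness, here is a minimal sketch of the two evaluation schemes I have in mind. The field names (`answer_mid`, `topic_entity_mid`, `inferential_chain`) are my own hypothetical choices, not taken from your code:

```python
def correct_by_answer(pred, gold):
    # Scheme 1: a prediction counts as correct if the final answer MID matches.
    return pred["answer_mid"] == gold["answer_mid"]

def correct_by_derivation(pred, gold):
    # Scheme 2: both the topic entity MID and the full inferential chain
    # must match the gold annotation.
    return (pred["topic_entity_mid"] == gold["topic_entity_mid"]
            and pred["inferential_chain"] == gold["inferential_chain"])
```

The two schemes can disagree: a prediction may reach the right answer MID through a different topic entity or chain, which is exactly why I am asking whether you treat them as equivalent.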
Another question: mediator nodes are not nodes that "do not have a name or alias associated with it", as described in your paper, right? They actually have a...
Is it common across the whole dataset that we cannot uniquely determine the answer to a question by querying the Freebase subset with the corresponding topic entity MID and...
> Thank you for this project, it's really great. I spent the past few days reading the BertForTokenClassification source code and found a possible cause.
> In 0.4.0, the forward pass of BertForTokenClassification looks like this:
> [screenshot of the 0.4.0 forward pass]
> In 0.6.0, it looks like this:
> [screenshot of the 0.6.0 forward pass]
> As you can see, in 0.6.0 the loss is masked, but in 0.4.0 it is not. I tried explicitly calling BertForTokenClassification's forward pass under 0.4.0: if the loss is masked there, the results become much worse. I am still investigating the exact reason.

Have you figured out why masking the loss makes the results so much worse?
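To make the difference between the two versions explicit, here is a minimal sketch of the two loss computations being compared. This is my own reconstruction of the pattern, not the library's exact code; the attention-mask-based filtering mirrors the 0.6.0 behavior described above:

```python
import torch
import torch.nn as nn

def token_classification_loss(logits, labels, attention_mask=None):
    """Cross-entropy loss over token labels.

    logits: (batch, seq_len, num_labels)
    labels: (batch, seq_len)
    attention_mask: (batch, seq_len) with 1 for real tokens, 0 for padding.
    """
    loss_fct = nn.CrossEntropyLoss()
    num_labels = logits.size(-1)
    if attention_mask is not None:
        # 0.6.0-style: keep only positions where attention_mask == 1,
        # so padding tokens contribute nothing to the loss.
        active = attention_mask.view(-1) == 1
        active_logits = logits.view(-1, num_labels)[active]
        active_labels = labels.view(-1)[active]
        return loss_fct(active_logits, active_labels)
    # 0.4.0-style: average the loss over every position, padding included.
    return loss_fct(logits.view(-1, num_labels), labels.view(-1))
```

When the model happens to fit the padding positions (e.g. they all carry one dummy label), the unmasked 0.4.0-style loss is diluted by those easy positions, so the two variants can differ substantially, which may be related to the effect you observed.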