Lei Liu
> Hi Kevin, how did you resolve this error? I have exactly the same problem.

Me too.
Not yet.
> @fatly
> Hi! ✋
> In my case, I saw the same error when the computed values of the annotation's center_w and center_h were incorrect.
> After that, when I recalculated them correctly, ...
> If your evaluation metric is the Spearman correlation, it depends only on the relative order of the predictions, not on their range; for example, changing every prediction from y to y**3 + 10 leaves the Spearman correlation unchanged.
>
> If your metric is the Pearson correlation, it depends only on the linear relationship between predictions and targets, not on the range; that is, applying the same linear transform (e.g. 3y + 1) to every prediction leaves the Pearson correlation unchanged.
>
> If you really need a metric in [1, 5], that is easy too: for example, (3 + 2 * cos) is guaranteed to lie in 1~5.

Thanks for the explanation. If the task is classification, how should the quality of the vectors be evaluated? For example, on datasets where the relation between two sentences is entailment, neutral, or contradiction; I see STS also has labels like this.
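The two invariances quoted above can be checked numerically. A minimal pure-Python sketch (the y_true/y_pred values are made up, and this simple rank-based Spearman assumes no tied predictions):

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation, computed from the definition."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman = Pearson computed on the ranks (no ties in this toy data)."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=vs.__getitem__)
        r = [0] * len(vs)
        for pos, i in enumerate(order):
            r[i] = pos
        return r
    return pearson(ranks(xs), ranks(ys))

y_true = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up targets
y_pred = [0.3, 1.1, 0.9, 2.5, 3.0]   # made-up predictions

# The monotone transform y**3 + 10 preserves the rank order, so Spearman is unchanged.
assert math.isclose(spearman(y_true, y_pred),
                    spearman(y_true, [y ** 3 + 10 for y in y_pred]))

# The affine transform 3y + 1 preserves the linear relationship, so Pearson is unchanged.
assert math.isclose(pearson(y_true, y_pred),
                    pearson(y_true, [3 * y + 1 for y in y_pred]))
```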
How do you compute accuracy (acc) from two vectors?
> Oh, you mean evaluating unsupervised sentence vectors directly on NLI data? NLI data is not strictly similarity data, so this is hard to evaluate well.
>
> A simple approach is to map entailment / neutral / contradiction to the scores 1 / 0 / -1 and then compute Spearman; this may work to some extent. A more reliable approach is to use the sentence vectors as input features, train a 3-class classifier, and then compare accuracy.

Got it, thanks! So for unsupervised methods, you still need a classifier to learn from the vectors before you can get an accuracy. The only dimensionality-reduction method for BERT vectors I have seen so far is whitening; are there other methods for reducing the dimension of BERT vectors? I have not found any related literature.
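The first suggestion (map labels to scores, then compute Spearman) can be sketched as follows. The labels and cosine-similarity values below are hypothetical, and the tie-aware average-rank code is a from-scratch stand-in for scipy.stats.spearmanr (average ranks are needed because the scores take only three distinct values):

```python
import math

# Hypothetical mapping from NLI labels to similarity-like scores.
LABEL_TO_SCORE = {"entailment": 1, "neutral": 0, "contradiction": -1}

def average_ranks(vs):
    """Ranks with tied values replaced by their average rank."""
    order = sorted(range(len(vs)), key=vs.__getitem__)
    ranks = [0.0] * len(vs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
            j += 1
        avg = (i + j) / 2.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman = Pearson on the (tie-aware) ranks."""
    return pearson(average_ranks(xs), average_ranks(ys))

labels = ["entailment", "contradiction", "neutral", "entailment", "contradiction"]
sims   = [0.82, 0.15, 0.48, 0.91, 0.05]   # hypothetical cosine similarities
scores = [LABEL_TO_SCORE[l] for l in labels]
rho = spearman(scores, sims)   # higher rho = similarities track the labels better
```

The more reliable alternative mentioned above, training a 3-class classifier on the sentence vectors and comparing accuracy, needs labeled training data and is not sketched here.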
I used val to test Part A, but the MAE is very high:

0 1039.234375
1 3270.0751953125
2 4630.0849609375
3 7451.38427734375
4 13182.578125
5 14520.990844726562
6 20020.381469726562
7 34471.97229003906
8 ...
Thank you @WangyiNTU. Have you trained the model? How many epochs did you use to train it on Part A and Part B?
I trained the model, but the loss is NaN. Did you have the same problem?

Epoch: [0][300/1200] Time 0.415 (0.414) Data 0.018 (0.020) Loss nan (nan)
@WangyiNTU Thank you for your help. I trained from scratch and the loss was NaN; I will try to solve this problem.
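One way to narrow down a NaN loss is to detect the first non-finite batch and stop there, so the offending inputs can be inspected before the running averages turn to nan. A framework-agnostic sketch (the NanGuard class and its parameters are hypothetical, not part of this repo):

```python
import math

class NanGuard:
    """Stop (or tolerate a few) batches whose loss is non-finite, so the
    offending inputs can be inspected before the averages become nan."""

    def __init__(self, patience=0):
        self.patience = patience      # how many non-finite batches to tolerate
        self.bad_batches = 0

    def check(self, loss_value, step):
        """Return True if training should continue after this batch."""
        if math.isnan(loss_value) or math.isinf(loss_value):
            self.bad_batches += 1
            print(f"step {step}: non-finite loss {loss_value!r}")
            return self.bad_batches <= self.patience
        return True
```

In a PyTorch loop this would be called as `guard.check(loss.item(), i)`, logging the batch's file names when it fires. Typical causes of a NaN loss in this kind of training are corrupted ground-truth density maps or annotations, a learning rate that is too high, or a division by zero inside the loss.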