
Chinese multi-label text classification based on pytorch_bert

Issues (4)

Is this experiment implementing multi-label text classification with a sigmoid function? If so, is the threshold 0.6 in dev and test but 0.5 in predict? Why is that? And is the only difference from standard BERT text classification that a sigmoid is used instead of softmax?
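For reference, here is a minimal sketch (not this repo's exact code; num_labels and the tensors are illustrative) of the sigmoid-plus-threshold setup the question describes, contrasted with the softmax setup used for single-label classification:

```python
import torch
import torch.nn as nn

num_labels = 10                                   # illustrative label count
logits = torch.randn(2, num_labels)               # stand-in for BERT classifier outputs
targets = torch.randint(0, 2, (2, num_labels)).float()

# Multi-label training uses BCEWithLogitsLoss (an independent sigmoid + BCE
# per label), instead of the CrossEntropyLoss (softmax over labels) used for
# single-label classification.
loss = nn.BCEWithLogitsLoss()(logits, targets)

# At prediction time, every label whose sigmoid probability clears the
# threshold is emitted; 0.5 matches the predict-time threshold mentioned above.
probs = torch.sigmoid(logits)
preds = (probs > 0.5).int()
```

Because each label gets its own independent probability, a sample can receive zero, one, or several labels, which is exactly what softmax (which forces the probabilities to sum to one) cannot express.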

Running the multi-label classification experiment with the pretrained models bert-base-case, chinese-bert-wwm-ext, chinese-roberta-wwm-ext, and chinese-roberta-wwm-ext-large all works fine, but with roberta-xlarge-wwm-chinese-cluecorpussmall training keeps reporting accuracy:0.0000 micro_f1:0.0000 macro_f1:0.0000. Why does this happen? Any help appreciated.
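One way such all-zero metrics can arise (a hedged illustration, not a confirmed diagnosis of this particular issue): if no sigmoid output ever clears the evaluation threshold, every predicted label vector is empty, and exact-match accuracy, micro-F1, and macro-F1 all collapse to zero. A quick check along these lines:

```python
import torch

# Stand-in for the raw logits the classifier produces on a dev batch.
logits = torch.randn(4, 10)
probs = torch.sigmoid(logits)
preds = (probs > 0.6).int()          # 0.6 is the dev/test threshold discussed above

# If this prints all zeros on real data, no label is ever predicted and the
# reported metrics will read 0.0000 regardless of the model's quality.
print("positives per sample:", preds.sum(dim=1).tolist())
```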

Could you upload best.pt when you get a chance? Thanks!

Some weights of the model checkpoint at ../model_hub/bert-base-chinese/ were not used when initializing BertModel: ['cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.weight', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you...
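This warning is normal transformers behavior when a checkpoint saved with the pretraining heads is loaded into plain BertModel, which has no such heads; the cls.predictions.* (masked-LM) and cls.seq_relationship.* (next-sentence) weights are simply dropped. A minimal reproduction, assuming the local path from the log exists:

```python
from transformers import BertModel

# Loading a full pretraining checkpoint into BertModel discards the MLM/NSP
# head weights, which is exactly the list the warning prints.
model = BertModel.from_pretrained("../model_hub/bert-base-chinese/")
```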