SoftMaskedBert-PyTorch

🙈 An unofficial implementation of SoftMaskedBert based on huggingface/transformers.

9 SoftMaskedBert-PyTorch issues

The code at lines 187-188 of models.py saves the model even when the loss has not improved. Should it be removed?
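For reference, a minimal sketch of save-on-improvement logic, using hypothetical names rather than the repository's actual models.py code:

```python
import torch

# Hypothetical sketch (not the repository's actual code): save a checkpoint
# only when the validation loss improves, which is what the issue suggests
# the save at models.py lines 187-188 should be doing.
best_loss = float("inf")

def maybe_save(model, val_loss, path="checkpoint/best.pt"):
    """Save the model state only if val_loss beats the best loss seen so far."""
    global best_loss
    if val_loss < best_loss:
        best_loss = val_loss
        torch.save(model.state_dict(), path)
```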

(gitabtion) F:\0code\gitabtion>python main.py --mode preproc
Namespace(accumulate_grad_batches=16, batch_size=16, bert_checkpoint='bert-base-chinese', device=device(type='cpu'), epochs=10, gpu_index=0, hard_device='cpu', load_checkpoint=False, loss_weight=0.8, lr=0.0001, mode='preproc', model_save_path='checkpoint', warmup_epochs=8)
preprocessing...
Traceback (most recent call last):
  File "main.py", line 99, in...

Some weights of the model checkpoint at bert-base-chinese were not used when initializing BertForMaskedLM: ['cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForMaskedLM from the checkpoint of a...
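This warning is expected when a full BERT checkpoint is loaded into a masked-LM-only head: the next-sentence-prediction weights (cls.seq_relationship.*) simply have nowhere to go. A minimal reproduction, assuming only that transformers and the bert-base-chinese checkpoint are available:

```python
from transformers import BertForMaskedLM

# Loading bert-base-chinese into an MLM-only class drops the NSP head weights
# ('cls.seq_relationship.weight' / 'cls.seq_relationship.bias'), which is
# exactly what the warning above reports; it is expected and harmless here.
model = BertForMaskedLM.from_pretrained("bert-base-chinese")
```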

The detection result is precision=0.033734939759036145, recall=0.7636363636363637 and F1=0.06461538461538462
The correction result is precision=0.0, recall=0.0 and F1=0
Sentence Level: acc:0.000000, precision:0.000000, recall:0.000000, f1:0.000000
Traceback (most recent call last):
  File "main.py", line...
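As a side note, the detection F1 in this log is consistent with the standard harmonic mean of the logged precision and recall; a quick check:

```python
# Quick sanity check of the detection F1 reported in the log above.
precision = 0.033734939759036145
recall = 0.7636363636363637
f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ~0.0646, matching the logged F1=0.06461538461538462
```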

bug

Is there prediction (inference) code for SoftMaskedBert?

enhancement

AttributeError: Can't pickle local object 'get_corrector_loader.<locals>._collate_fn'
Running training throws this error: AttributeError: Can't pickle local object 'get_corrector_loader.<locals>._collate_fn'. Could anyone help?
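This error usually appears on Windows when a DataLoader with num_workers > 0 tries to send a collate function that was defined inside another function to its worker processes; multiprocessing cannot pickle local objects. A hedged sketch of the common workaround, with illustrative names rather than the repository's actual code, is to define the collate function at module level (or set num_workers=0):

```python
from torch.utils.data import DataLoader

# Defined at module level so multiprocessing workers can pickle it.
def collate_fn(batch):
    # Illustrative only: the real _collate_fn in the repo does tokenization etc.
    return batch

def get_corrector_loader(dataset, batch_size=16, num_workers=4):
    # Passing a module-level function (or using num_workers=0) avoids the
    # "Can't pickle local object ... _collate_fn" error on Windows.
    return DataLoader(dataset, batch_size=batch_size,
                      num_workers=num_workers, collate_fn=collate_fn)
```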

As the title says: the original paper reports sentence-level detection and correction scores on SIGHAN15 of 73.5 and 66.4, and it describes training on the SIGHAN13-15 training sets plus the authors' own corpus of 5 million news titles. Here the sentence-level score jumps to 79.4 while fine-tuning only on SIGHAN data; that gap seems far too large. Could your evaluation metric simply differ from theirs? My guess is that you evaluate on the full SIGHAN15 test set, counting the error-free negative samples as well, whereas later CSC papers generally evaluate only on the positive (error-containing) samples.
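To make the suspected difference concrete, here is a minimal sketch, under assumed data structures rather than this repository's actual evaluation code, of sentence-level detection metrics computed over the full test set versus over only the error-containing sentences:

```python
# Illustrative sketch (not the repository's metric code): sentence-level
# detection metrics computed two ways, to show how including error-free
# sentences in the test set changes the numbers.
def detection_metrics(samples):
    """samples: list of (src, tgt, pred) sentence tuples."""
    tp = fp = fn = 0
    for src, tgt, pred in samples:
        has_error = src != tgt   # gold: sentence contains an error
        flagged = pred != src    # model changed something
        if has_error and flagged:
            tp += 1
        elif not has_error and flagged:
            fp += 1
        elif has_error and not flagged:
            fn += 1
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Protocol A: full test set, error-free (negative) sentences included.
# metrics_all = detection_metrics(all_samples)
# Protocol B: only positive sentences, as most later CSC papers do.
# metrics_pos = detection_metrics([s for s in all_samples if s[0] != s[1]])
```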