VAC_CSLR
Inconsistent results when reproducing the experiments
Hello, we downloaded your code and retrained your proposed VAC for 50 epochs (without BN); the best result was only a 35.1% word error rate. In addition, we adjusted the weights in the code and ran experiments with the baseline algorithm (without BN), and found that these results also differ considerably from those in the paper. Could this be a code version mismatch, or is our training time too short?
@wljcode Thanks for your attention to our work. It seems you used batch size = 1, which may affect the robustness of the model. Also, it appears the learning rate did not decay during training. Perhaps there is a bug in checkpoint loading; I will check this later.
Relevant logs are uploaded for comparison.
@wljcode Have you successfully reimplemented the experimental results? I checked the relevant logs and found that you adopted load_weights to continue training rather than load_checkpoints; the former only loads the model weights, while the latter loads all training-related parameters. You are expected to use load_checkpoints to continue training.
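The distinction above can be sketched in generic PyTorch. This is an illustrative example, not the VAC codebase's own load_weights/load_checkpoints implementation: resuming from only the model weights silently discards the optimizer and LR-scheduler state, so the learning-rate decay restarts from scratch.

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=40)

# A full checkpoint keeps everything needed to resume training.
checkpoint = {
    "epoch": 50,
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "scheduler_state_dict": scheduler.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")
ckpt = torch.load("checkpoint.pt")

# load_weights-style resume: only the parameters come back; the optimizer
# moments and the LR schedule restart, so the LR may not decay as logged.
model.load_state_dict(ckpt["model_state_dict"])

# load_checkpoints-style resume: all training state is restored.
model.load_state_dict(ckpt["model_state_dict"])
optimizer.load_state_dict(ckpt["optimizer_state_dict"])
scheduler.load_state_dict(ckpt["scheduler_state_dict"])
start_epoch = ckpt["epoch"] + 1
```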
Thank you for your reply. Due to GPU memory limitations, we have not continued the reproduction work recently. We will complete it later when the equipment is ready!
Hi, @ycmin95, thanks for your great work. I tried to reproduce it recently; the final result is 0.4% worse than yours. Here is my training log: log.txt. After the 70th epoch, the performance stops improving the way yours does. Besides, I find "label_smoothing = 0.1" in your log, but not in the released code. Could you provide some advice?
Hi, @sunke123, thanks for your attention to our work. We will explain this performance gap in our next update, perhaps in two weeks, which achieves better performance (about 20% WER) with fewer training epochs. You can conduct further experiments on this codebase; the update won't change the network structure or the training process.
The label_smoothing parameter was adopted in our early experiments on iterative training and I forgot to delete it; I will correct this in the next update.
@ycmin95 Cooooool! Thanks for your reply. Looking forward to that~
Hi, @sunke123, the code has been updated~
Hi, I downloaded the code and retrained it, but after several epochs the DEV WER is still 100%. I set the batch size to 1 and the lr to 0.000010. Could you give me some advice? Thanks.
@herochen7372 You can first check whether the evaluation script runs as expected with the provided pretrained model, and then check whether the loss decreases as the iterations progress.
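The second check above can be automated with a small helper. This is a generic sanity-check sketch, not a utility from the repo; the function name and window size are illustrative.

```python
def loss_is_decreasing(losses, window=5):
    """Compare the mean of the first and last `window` loss values
    to decide whether training loss shows a downward trend."""
    if len(losses) < 2 * window:
        return False  # not enough history to judge a trend
    head = sum(losses[:window]) / window
    tail = sum(losses[-window:]) / window
    return tail < head

# Example: a run whose loss drifts downward vs. one stuck at a plateau
# (a stuck loss often explains a DEV WER frozen at 100%).
decreasing = [5.0, 4.1, 3.3, 2.8, 2.5, 2.1, 1.9, 1.7, 1.6, 1.5]
stuck = [5.0] * 10
print(loss_is_decreasing(decreasing))  # True
print(loss_is_decreasing(stuck))       # False
```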
@ycmin95 Thanks for your reply.