XingWu_UCAS

Results: 8 comments by XingWu_UCAS

I couldn't quite follow this. Could you point me to the relevant section of the original lecture notes?

To get the speedup, you need to avoid computing the normalization denominator during generation.
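One concrete case where that denominator can be skipped outright is greedy (argmax) decoding: softmax is monotonic in the logits, so the argmax over raw logits already picks the same token as the argmax over the normalized distribution. A minimal numpy sketch of this point (illustrative only, not code from the discussion above):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - z.max())
    return e / e.sum()  # e.sum() is the normalization denominator

# Toy logits over a 4-token vocabulary
logits = np.array([1.2, 3.5, 0.7, 2.9])

# Greedy decoding: argmax over raw logits picks the same token as
# argmax over the normalized probabilities, so the denominator
# never needs to be computed at generation time.
assert np.argmax(logits) == np.argmax(softmax(logits))
print(np.argmax(logits))  # prints 1
```

For sampling-based decoding the denominator does matter, which is why techniques such as self-normalization or noise-contrastive estimation train the model so that the denominator is approximately constant.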

Here I'd recommend a talk Prof. Feng Yang gave last year on Jiangmen (将门), which covers exactly this: training improvements and decoding speedup for NMT. Reposting the recording and slides: #将门技术社群线上分享第176期# Feng Yang, associate researcher at ICT, CAS — Training Improvements and Decoding Speedup for Neural Machine Translation. Baidu Netdisk >> https://pan.baidu.com/s/1py_RxX0RaF9AcA_L-fccWQ (extraction code: gyym); Bilibili >> https://www.bilibili.com/video/av74001189/

@qizhex could you please help me? Thank you.

> I fixed my problem by modifying the tokenizer. The tokenizer UDA used is not consistent with BERT pretrain model for Chinese.
> Before:
>
> ```
> def tokenize_to_wordpiece(self,...
> ```

Multi30k.splits has been updated, but your version is old. Replace it with: `def splits(cls, exts, fields, root='.data', train='train', validation='val', test='test2016', **kwargs): """Create dataset objects for splits of the Multi30k dataset....`

Could you please share the dataset with me? Thank you.

In my distillation experiments I didn't use data augmentation, and adding the hard label didn't bring any improvement either...
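For context on how the hard label usually enters the objective: the standard formulation mixes a temperature-softened soft-target loss with the ordinary cross-entropy on the ground-truth label. The sketch below is a generic numpy version of that mixed loss; `alpha` and the temperature `T` are illustrative defaults, not the settings from my experiments:

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax, shifted for numerical stability
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def distill_loss(student_logits, teacher_logits, hard_label, T=4.0, alpha=0.5):
    """Mix of hard-label CE and soft-target CE (illustrative defaults)."""
    # soft-target term: cross-entropy between teacher and student
    # distributions at temperature T, scaled by T^2 so its gradient
    # magnitude stays comparable to the hard term
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft = -np.sum(p_t * np.log(p_s)) * T * T
    # hard-label term: standard cross-entropy against the ground truth
    hard = -np.log(softmax(student_logits)[hard_label])
    return alpha * hard + (1 - alpha) * soft
```

Whether the hard term helps seems to depend on how noisy the labels are and how confident the teacher already is; with `alpha=0` the loss reduces to pure soft-target distillation.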