Lijiachen1018
I found the same issue. A simple example:

```python
> t = torch.Tensor([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 0, 1, 2]],
                    [[2, 2, 3, 4], [5, 6, 7, 8], [9, 0, 1, 2]]])
> t
tensor([[[1., 2., 3., 4.],
         [5., 6., 7., 8.],
         [9., 0., 1., 2.]],

        [[2., 2., 3.,...
```
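For reproducibility, the truncated snippet above can be run as a self-contained script; the tensor values are taken from the visible output, while the surrounding issue context is not shown here:

```python
import torch

# Rebuild the tensor from the comment above; its shape is 2 x 3 x 4.
t = torch.Tensor([[[1, 2, 3, 4], [5, 6, 7, 8], [9, 0, 1, 2]],
                  [[2, 2, 3, 4], [5, 6, 7, 8], [9, 0, 1, 2]]])
print(t.shape)  # torch.Size([2, 3, 4])
```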
A while ago I also saw a write-up on Xiaomi's XiaoAI speech error correction -> [WeChat article](https://mp.weixin.qq.com/s/JyXN9eukS-5XKvcJORTobg)
> Thanks for sharing. Shouldn't pinyin come first and strokes after?

They were indeed reversed; I've uploaded a new version 😂
> Do you have the files under the fine-tuned/ directory? Could you share them?

[Google Drive](https://drive.google.com/file/d/1aI0BwgjYitm7ClH7gieu8m3Lxk3QYKQA/view?usp=sharing)
> Where can the pretrained weights for the PyTorch version of NEZHA be downloaded? I haven't found a link so far.

Download link: [Google Drive](https://drive.google.com/file/d/1pmULarQ3UmsbctgPbYuL2tlG6H9n-0Kv/view?usp=sharing). It was converted from the TensorFlow checkpoint of [NEZHA-base-WWM](https://github.com/huawei-noah/Pretrained-Language-Model/tree/master/NEZHA-TensorFlow#4-nezha-model-download). As a sanity check, I fine-tuned it on the official sentiment classification task with [run_sequence_classifier.py](https://github.com/huawei-noah/Pretrained-Language-Model/blob/master/NEZHA-PyTorch/run_sequence_classifier.py), and it appears to work correctly.
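As a quick sanity check after downloading, a converted checkpoint is a plain PyTorch state dict (a name-to-tensor mapping) and can be inspected directly. A minimal sketch; the file name `pytorch_model.bin` and the dummy tensor below are placeholders standing in for the real archive from the Drive link:

```python
import torch

# Placeholder stand-in for the real converted checkpoint; in practice,
# point `path` at the file unpacked from the Google Drive archive.
path = "pytorch_model.bin"
dummy = {"embeddings.word_embeddings.weight": torch.zeros(8, 4)}
torch.save(dummy, path)

# A converted checkpoint loads as a plain name -> tensor mapping.
state_dict = torch.load(path, map_location="cpu")
for name, tensor in state_dict.items():
    print(name, tuple(tensor.shape))
```

Listing the parameter names and shapes this way is a cheap way to confirm the TF-to-PyTorch conversion produced the layers you expect before running fine-tuning.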
I've modified the GraphAttentionLayer in [layers.py](https://github.com/Diego999/pyGAT/blob/3aa66135c7c326b6a06a58dea53ae62c03da58a3/layers.py#L7) and added comments to show how the tensor dimensions change. Please let me know if there are any mistakes.

```python
class GraphAttentionLayer(nn.Module):
    """...
```
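For comparison, here is an independent, minimal single-head sketch of such a layer with shape comments at each step. The names `W` and `a` follow pyGAT's conventions, but this dense pairwise-concatenation formulation is only one common way to compute the attention scores, not necessarily the repo's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAttentionLayer(nn.Module):
    """Single-head graph attention layer (sketch).

    N = number of nodes, F_in = in_features, F_out = out_features.
    """
    def __init__(self, in_features, out_features, alpha=0.2):
        super().__init__()
        self.W = nn.Parameter(torch.empty(in_features, out_features))
        self.a = nn.Parameter(torch.empty(2 * out_features, 1))
        nn.init.xavier_uniform_(self.W)
        nn.init.xavier_uniform_(self.a)
        self.leakyrelu = nn.LeakyReLU(alpha)

    def forward(self, h, adj):
        # h: (N, F_in) -> Wh: (N, F_out)
        Wh = h @ self.W
        N = Wh.size(0)
        # All pairwise concatenations [Wh_i || Wh_j]: (N, N, 2*F_out)
        Wh_i = Wh.unsqueeze(1).expand(N, N, -1)
        Wh_j = Wh.unsqueeze(0).expand(N, N, -1)
        a_input = torch.cat([Wh_i, Wh_j], dim=-1)
        # Raw attention scores e: (N, N, 2*F_out) @ (2*F_out, 1) -> (N, N)
        e = self.leakyrelu((a_input @ self.a).squeeze(-1))
        # Mask non-edges, then normalize each row over the node's neighbors
        e = e.masked_fill(adj == 0, float('-inf'))
        attention = F.softmax(e, dim=-1)          # (N, N)
        # Weighted sum of neighbor features: (N, N) @ (N, F_out) -> (N, F_out)
        return attention @ Wh

torch.manual_seed(0)
layer = GraphAttentionLayer(4, 8)
h = torch.randn(3, 4)          # 3 nodes, 4 input features
adj = torch.ones(3, 3)         # fully connected toy graph
out = layer(h, adj)
print(out.shape)               # torch.Size([3, 8])
```

Note that for an isolated node (an all-zero row in `adj`), the `-inf` mask makes the softmax produce NaNs, so in practice self-loops are usually added to `adj` first.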
Same question here: pkuseg's POS tag set differs from the Stanford CoreNLP tag set I want to use, so I'd like to retrain the model.