train.py
```
from __future__ import division
from __future__ import print_function
from sklearn import metrics
import time
import sys
import os
import torch
import torch.nn as nn
import numpy as np
...
```
build_graph.py
```
"""
@file: build the graph
"""
import random
import numpy as np
import pickle as pkl
import scipy.sparse as sp
from math import log
from nltk.corpus import wordnet as wn
...
```
Hoping someone with time can help take a look; many thanks!
The training data format is:
```
id,content,label
```
The test data format is:
```
id,content
```
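To make the two formats concrete, here is a minimal sketch of parsing them with the standard library (the sample rows are hypothetical, not from the real dataset):

```python
import csv
import io

# Hypothetical training data in the id,content,label format.
train_csv = "id,content,label\n1,some text,0\n2,more text,1\n"
# Hypothetical test data in the id,content format (no label column).
test_csv = "id,content\n3,unseen text\n"

train_rows = list(csv.DictReader(io.StringIO(train_csv)))
test_rows = list(csv.DictReader(io.StringIO(test_csv)))

print(train_rows[0]["label"])    # '0'
print(test_rows[0]["content"])   # 'unseen text'
```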
I tried converting ernie3.0 to a Torch version and skipped the task type embedding, because in the ErnieModel source `use_task_id` defaults to `False`, so in the common case ignoring the task type embedding should not matter much.

```
# torch
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained('./ernie-3.0-base-zh-torch')
model = BertModel.from_pretrained('./ernie-3.0-base-zh-torch')
input_ids = torch.tensor([tokenizer.encode(text="你好", add_special_tokens=True)])
with torch.no_grad():
    pooled_output = ...
```
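If the task type embedding ever did need to be kept, one way to fold it back in is to add a learned embedding for a constant task-type id on top of the word embeddings. This is only a hedged sketch of the idea, not the ErnieModel source; the vocabulary size and number of task types here are hypothetical:

```python
import torch
import torch.nn as nn

hidden_size = 768       # assumed hidden size for an ERNIE-3.0-base-style model
vocab_size = 100        # hypothetical, for illustration only
num_task_types = 3      # hypothetical number of task-type ids

# ERNIE-style input embedding: word embedding plus an optional
# task-type embedding indexed by a per-token task id.
word_emb = nn.Embedding(vocab_size, hidden_size)
task_emb = nn.Embedding(num_task_types, hidden_size)

input_ids = torch.tensor([[1, 2, 3]])
task_type_ids = torch.zeros_like(input_ids)  # task id 0 for plain text

# Summing the two embeddings is how the extra signal would enter the model.
embeddings = word_emb(input_ids) + task_emb(task_type_ids)
print(embeddings.shape)  # torch.Size([1, 3, 768])
```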
> I made a mistake: dropout should not randomly drop units during evaluation. The earlier code was slightly wrong; it needs `model.eval()` to mark the evaluation phase. With that, the difference in results is very small.

torch pooled output:
```
[[ 9.85730708e-01 -7.40298808e-01  3.95261258e-01 -7.59342790e-01
   8.96910310e-01  8.82966697e-01 -6.58721209e-01 -4.71505731e-01
  -9.71126974e-01 -9.74366426e-01 -1.87828429e-02  4.24025029e-01
  -5.76551020e-01 -7.90736675e-01 -9.77571666e-01  8.17567468e-01
   6.43071532e-01 -4.70006205e-02  3.44053745e-01  8.76602650e-01
  -3.87427926e-01  1.63349375e-01 -5.80719292e-01
...
```
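For reference, a minimal sketch (a toy model, not the ERNIE code) showing why `model.eval()` fixes this: dropout is stochastic in training mode but becomes a no-op in evaluation mode, so eval-mode outputs are deterministic:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

# In training mode, dropout randomly zeroes activations, so two forward
# passes over the same input generally differ.
model.train()
a, b = model(x), model(x)

# In eval mode, dropout passes activations through unchanged, so repeated
# forward passes match exactly.
model.eval()
c, d = model(x), model(x)
print(torch.equal(c, d))  # True
```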
Added the task type embedding as well.

torch pooled output (+ task type embedding):
```
[[ 0.988166   -0.8854939   0.25455064 -0.58845514  0.93798053  0.8004532
  -0.89700645  0.09135557 -0.9623787  -0.9367434  -0.5948328   0.21737
  -0.85140526 -0.8041696  -0.97480065  0.68086064  0.2209907   0.26748967
  -0.1568218
...
```
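When comparing two pooled outputs like these, a quick way to quantify the gap is the maximum element-wise absolute difference. A sketch using only the first few values quoted above (real runs would compare the full vectors):

```python
import numpy as np

# First three values of each pooled output, copied from the two runs above.
with_task = np.array([0.988166, -0.8854939, 0.25455064])
without_task = np.array([0.98573071, -0.74029881, 0.39526126])

# Largest per-element shift introduced by the task-type embedding.
gap = np.max(np.abs(with_task - without_task))
print(gap)                                              # ~0.145
print(np.allclose(with_task, without_task, atol=1e-2))  # False
```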
> @yysirs 👍👍👍 Feel free to tidy this up and submit an MR, thanks!

Sure 😁