Vimos Tan
Same issue with version `1.0.181` on `Windows 22H2 22621.232`.
I am also getting this error. I am using:

```
In [1]: import theano
Using cuDNN version 7103 on context None
Mapped name None to device cuda: GeForce...
```
Is the memory increasing in your case? Mine runs out of memory in the middle of training.

```
[20180625-174613] Epoch 0 74.2%, loss_p1: 3.338, loss_p2: 2.325
p1 acc: 9.000% (6077/65000),...
```
@wlhgtc Thanks for the advice. If I keep the default value for length, I have to reduce the batch size to 10, which still requires `7709MiB` of memory.
This can't be loaded with the ALBERT classes from `transformers`, can it? Even when substituting `BertTokenizer`, it raises the following error:

```
RuntimeError: Error(s) in loading state_dict for AlbertForClozeExtra:
size mismatch for bert.embeddings.position_embeddings.weight: copying a param with shape torch.Size([512, 2048]) from checkpoint, the shape in current model is torch.Size([512, 128])....
```
This may be a duplicate issue; see https://github.com/brightmart/albert_zh/issues/17
It's the same with the original `pytorch-transformers`:

```
bert_config = MODEL_CLASSES[args.model_type][0].from_json_file(args.bert_config_file)
tokenizer = MODEL_CLASSES[args.model_type][2](vocab_file=args.vocab_file, do_lower_case=args.do_lower_case)
model = MODEL_CLASSES[args.model_type][1].from_pretrained(args.model_name_or_path,
                                                          from_tf=bool('.ckpt' in args.model_name_or_path),
                                                          config=bert_config)
```

However, for `RobertaTokenizer`, you cannot use it to...
Similar issue with directly pickling the doc.

```
In [1]: import spacy
In [2]: import neuralcoref
In [3]: nlp = spacy.load('en_core_web_sm')
In [4]: neuralcoref.add_to_pipe(nlp)
Out[4]:
In [5]: d = nlp("NeuralCoref...
```
I'm using it in Node.
What I mean is [1]. Of course, it also depends on the project's goals; this is just for reference. If the goal is to give the correct pinyin, with Standard Chinese as the standard, [1] is more reasonable; if the goal is to give the actual pronunciation, with everyday usage as the standard, applying tone sandhi is more reasonable, but that adds extra rules.
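To illustrate the distinction between canonical pinyin and sandhi-adjusted pronunciation, here is a minimal sketch of the best-known rule, third-tone sandhi (two adjacent third tones: the first is pronounced as a second tone, e.g. ni3 hao3 → ni2 hao3). The function name and the tone-number notation are illustrative assumptions, not part of any library discussed in the thread.

```python
def apply_third_tone_sandhi(syllables):
    """Sketch of third-tone sandhi on tone-numbered pinyin syllables.

    When two third-tone syllables are adjacent, the first one is
    pronounced with a second tone (e.g. ni3 hao3 -> ni2 hao3).
    """
    result = list(syllables)
    for i in range(len(result) - 1):
        if result[i].endswith("3") and result[i + 1].endswith("3"):
            result[i] = result[i][:-1] + "2"
    return result

# "你好": the dictionary pinyin is ni3 hao3, but the everyday
# pronunciation is ni2 hao3.
print(apply_third_tone_sandhi(["ni3", "hao3"]))  # ['ni2', 'hao3']
```

A pinyin-oriented project would emit `ni3 hao3` unchanged, while a pronunciation-oriented one would apply rules like the above, which is exactly the extra complexity the comment refers to.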