GPT2-Chinese
What on earth is the pretrained model's name supposed to be?
ile "d:\Bert_pre\GPT_2\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\GPT2-Chinese-old_gpt_2_chinese_before_2021_4_22\tokenizations\tokenization_bert.py", line 131, in init
"model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)".format(vocab_file))
ValueError: Can't find a vocabulary file at path 'cache/vocab_small.txt'. To load the vocabulary from a Google pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)
What does this error mean?
This looks like a simple error: the line that instantiates the tokenizer (the BertTokenizer constructor) is the one that raised it, because it can't find the vocabulary file.
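For context, that ValueError comes from a plain file-existence guard at the top of BertTokenizer.__init__ in tokenizations/tokenization_bert.py. A minimal sketch of the check (paraphrased, not the verbatim source):

```python
import os

def check_vocab(vocab_file):
    # Roughly what BertTokenizer.__init__ does before loading the vocab:
    # if the path is not an existing file, it refuses to construct the
    # tokenizer and raises the ValueError seen in the traceback above.
    if not os.path.isfile(vocab_file):
        raise ValueError(
            "Can't find a vocabulary file at path '{}'. To load the vocabulary from a Google "
            "pretrained model use tokenizer = BertTokenizer.from_pretrained(PRETRAINED_MODEL_NAME)"
            .format(vocab_file))

check_vocab("cache/vocab_small.txt")  # raises unless this file exists
```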
That said, it doesn't look like it should error at all; puzzle over it a bit and you can probably find the cause yourself, haha ^-^
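In case it helps: this repo builds the tokenizer from a local vocab file instead of downloading one, so the usual cause is simply that cache/vocab_small.txt does not exist relative to the directory you launch training from. A quick way to verify, sketched under the assumption that the repo's tokenizations package is importable and that BertTokenizer(vocab_file=...) matches how train.py constructs it:

```python
import os

from tokenizations import tokenization_bert

vocab_path = "cache/vocab_small.txt"  # the path from the traceback
if not os.path.isfile(vocab_path):
    # Either put a vocab file at this path, or pass --tokenizer_path
    # pointing at wherever your vocab file actually lives.
    raise SystemExit("vocab file missing: " + os.path.abspath(vocab_path))

tokenizer = tokenization_bert.BertTokenizer(vocab_file=vocab_path)
print("loaded vocab with", len(tokenizer.vocab), "tokens")
```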