
"generate.py" 报错

yangboz opened this issue on Feb 19, 2021 · 0 comments

Environment information:

```
     active environment : tf_gpu
    active env location : /home/server/anaconda3/envs/tf_gpu
            shell level : 2
       user config file : /home/server/.condarc
 populated config files : /home/server/.condarc
          conda version : 4.9.2
    conda-build version : 3.17.8
         python version : 3.7.3.final.0
       virtual packages : __cuda=11.2=0
                          __glibc=2.27=0
                          __unix=0=0
                          __archspec=1=x86_64
       base environment : /home/server/anaconda3  (writable)
           channel URLs : https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/linux-64
                          https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/noarch
                          https://mirrors.ustc.edu.cn/anaconda/pkgs/free/linux-64
                          https://mirrors.ustc.edu.cn/anaconda/pkgs/free/noarch
                          https://repo.anaconda.com/pkgs/main/linux-64
                          https://repo.anaconda.com/pkgs/main/noarch
                          https://repo.anaconda.com/pkgs/r/linux-64
                          https://repo.anaconda.com/pkgs/r/noarch
          package cache : /home/server/anaconda3/pkgs
                          /home/server/.conda/pkgs
       envs directories : /home/server/anaconda3/envs
                          /home/server/.conda/envs
               platform : linux-64
             user-agent : conda/4.9.2 requests/2.21.0 CPython/3.7.3 Linux/5.4.0-65-generic ubuntu/18.04.5 glibc/2.27
                UID:GID : 1000:1000
             netrc file : None
           offline mode : False
```

The command run was:

```
python generate.py \
  --device 0 \
  --length=50 --nsamples=4 --prefix=xxx --fast_pattern \
  --tokenizer_path prose/vocab.txt \
  --model_path prose/ \
  --topp 1 \
  --temperature 1.0 \
  --batch_size 1
```

The error output is:

```
2021-02-19 14:46:42.559905: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x564863feabc0 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2021-02-19 14:46:42.559923: I tensorflow/compiler/xla/service/service.cc:176]   StreamExecutor device (0): GeForce GTX 960M, Compute Capability 5.0
args: Namespace(batch_size=1, device='0', fast_pattern=True, length=50, model_config='config/model_config_small.json', model_path='prose/', no_wordpiece=False, nsamples=4, prefix='xxx', repetition_penalty=1.0, save_samples=False, save_samples_path='.', segment=False, temperature=1.0, tokenizer_path='prose/vocab.txt', topk=8, topp=1.0)
  0%|          | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "generate.py", line 222, in <module>
    main()
  File "generate.py", line 182, in main
    out = generate(
  File "generate.py", line 117, in generate
    return fast_sample_sequence(model, context, length, temperature=temperature, top_k=top_k, top_p=top_p,
  File "generate.py", line 107, in fast_sample_sequence
    next_token = torch.multinomial(torch.softmax(filtered_logits, dim=-1), num_samples=1)
RuntimeError: probability tensor contains either inf, nan or element < 0
```
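The RuntimeError means the softmax over `filtered_logits` produced NaN, Inf, or negative values, which usually happens when the logits already contain NaN (for example from a mismatched model/config or a corrupted checkpoint) or when every position has been filtered to -inf. As a rough debugging workaround, one could sanitize the logits right before sampling. The sketch below is only an illustration, not part of GPT2-Chinese's generate.py: the helper name `safe_multinomial_sample` and the uniform fallback are my own, and it assumes PyTorch ≥ 1.8 for `torch.nan_to_num`.

```python
import torch

def safe_multinomial_sample(filtered_logits: torch.Tensor) -> torch.Tensor:
    """Sample one token id while guarding against NaN/Inf in the logits.

    Illustrative workaround only; `filtered_logits` is assumed to be the
    1-D logits tensor that fast_sample_sequence passes to torch.multinomial.
    """
    # Replace NaN and +/-Inf so softmax yields a valid probability vector.
    logits = torch.nan_to_num(filtered_logits, nan=-1e9, posinf=1e9, neginf=-1e9)
    probs = torch.softmax(logits, dim=-1)
    # If the distribution is still unusable (e.g. everything filtered away),
    # fall back to uniform sampling so multinomial does not raise.
    if not torch.isfinite(probs).all() or probs.sum() <= 0:
        probs = torch.full_like(probs, 1.0 / probs.numel())
    return torch.multinomial(probs, num_samples=1)
```

While debugging, the call at line 107 of generate.py could temporarily be replaced by this helper to see whether generation otherwise proceeds, but the underlying cause (bad logits from the model or an over-aggressive top-k/top-p filter) should still be investigated.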
