PaddleNLP
Running infer_cpu.py raises an error: built-in method enable_padding of Tokenizer object at 0x00000219C9F57130
Please describe your question
Running infer_cpu.py from the uie model directory in PaddleNLP raises an error: built-in method enable_padding of Tokenizer object at 0x00000219C9F57130
G:\soft\Anaconda3\envs\my_paddlenlp\python.exe G:/myself/future_pro/nlp/PaddleNLP/model_zoo/uie/deploy/python/infer_cpu.py
[2022-10-14 22:48:14,265] [ INFO] - We are using <class 'paddlenlp.transformers.ernie.faster_tokenizer.ErnieFasterTokenizer'> to load 'ernie-3.0-base-zh'.
[2022-10-14 22:48:14,267] [ INFO] - Already cached C:\Users\dataexa\.paddlenlp\models\ernie-3.0-base-zh\ernie_3.0_base_zh_vocab.txt
[InferBackend] Creating Engine ...
[Paddle2ONNX] Start to parse PaddlePaddle model...
[Paddle2ONNX] Model file path: G:\myself\future_pro\nlp\PaddleNLP\model_zoo\uie\checkpoint\model_600\static\inference.pdmodel
[Paddle2ONNX] Paramters file path: G:\myself\future_pro\nlp\PaddleNLP\model_zoo\uie\checkpoint\model_600\static\inference.pdiparams
[Paddle2ONNX] Start to parsing Paddle model...
[Paddle2ONNX] Use opset_version = 13 for ONNX export.
[Paddle2ONNX] PaddlePaddle model is exported as ONNX format now.
[InferBackend] Use CPU to inference ...
TypeError: 'pad_token_type_id' is an invalid keyword argument for this function
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "G:/myself/future_pro/nlp/PaddleNLP/model_zoo/uie/deploy/python/infer_cpu.py", line 86, in
In uie_predictor.py, change the tokenizer-loading line below to: self._tokenizer = AutoTokenizer.from_pretrained("ernie-3.0-base-zh")
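The TypeError in the log arises because the faster (C++-backed) tokenizer's built-in enable_padding method does not accept pad_token_type_id as a keyword argument, while the regular Python tokenizer does. A minimal illustration of that failure mode, with a stand-in function rather than the real tokenizer (no PaddleNLP required):

```python
# Stand-in for the faster tokenizer's built-in enable_padding method,
# which declares a fixed set of keyword parameters.
def enable_padding(direction="right", pad_id=0):
    return {"direction": direction, "pad_id": pad_id}

# Passing a keyword the function does not declare raises a TypeError,
# analogous to the 'pad_token_type_id' error in the traceback above.
try:
    enable_padding(pad_token_type_id=0)
except TypeError as exc:
    print(type(exc).__name__)  # prints: TypeError
```

Loading the tokenizer with a plain AutoTokenizer.from_pretrained("ernie-3.0-base-zh") call, as in the fix above, avoids the faster-tokenizer code path and therefore sidesteps this argument mismatch.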
Glad to see that you have found the solution, which does not use the faster tokenizer.
This issue is stale because it has been open for 60 days with no activity.