
onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model) onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from onnx_data/cpm_large_opt.onnx failed:Protobuf parsing failed.

Open lucasjinreal opened this issue 2 years ago • 14 comments

onnxruntime/capi/onnxruntime_inference_collection.py", line 370, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from onnx_data/cpm_large_opt.onnx failed:Protobuf parsing failed.

A folder contains a very big ONNX model, about 7 GB, split into the .onnx file and its external data. ONNX Runtime is not able to load this model back.

lucasjinreal avatar Mar 16 '22 13:03 lucasjinreal

To debug the problem, you may use the Google protobuf Python API to load the model directly. If it fails, then something is wrong with the model. Otherwise, it's a bug in onnxruntime.

Document: https://developers.google.com/protocol-buffers/docs/pythontutorial

Protobuf def file: https://github.com/onnx/onnx/blob/main/onnx/onnx-ml.proto . The data type you will need to use is: ModelProto.

onnx_data/cpm_large_opt.onnx can't be bigger than 2GB, but the folder can.

snnn avatar Mar 16 '22 20:03 snnn

@snnn thanks. The .onnx file is just 174 KB. Is that normal? The model was just exported via PyTorch, and I can visualize it with Netron and read it successfully with onnx.


Actually, what I am doing is using the onnxruntime transformers optimization API to optimize this huge model. It seems to be an onnxruntime problem.

lucasjinreal avatar Mar 17 '22 03:03 lucasjinreal

> is that normal?

It is. The error was thrown from the protobuf API, so my guess is that it should be reproducible without using ONNX or ONNX Runtime.

snnn avatar Mar 17 '22 03:03 snnn

@snnn that would be strange. As I showed above, I can read the .onnx file normally and print its information. It shouldn't be a protobuf problem.

I think the reason may be that onnx just reads the model itself without the huge external data, while ort reads all of it and somehow breaks. Would you help me take a deeper look at models larger than 7 GB in ort?

lucasjinreal avatar Mar 17 '22 07:03 lucasjinreal

For example, sometimes other tools read/write text-format models, but the ONNX spec requires models in binary format.

snnn avatar Mar 17 '22 17:03 snnn

The only places that can produce this error message are in https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/core/graph/model.cc

If you search for "Protobuf parsing failed", there are only 3 places. They all fire because protobuf returned an error.

snnn avatar Mar 17 '22 17:03 snnn

I also hit this problem. Is there a solution? Thanks ~

lileilai avatar Apr 06 '22 08:04 lileilai

Hello, I have the same problem.

Penghmed avatar Sep 19 '22 12:09 Penghmed

The same problem with a big model.

YarrDOpanas avatar Mar 16 '23 12:03 YarrDOpanas

The same problem with the options.ExecutionMode = ExecutionMode.ORT_PARALLEL; config, and a model of 200 MB at most.

vitao-coder avatar Apr 04 '23 19:04 vitao-coder

This error may occur if there's something wrong with the model file inswapper_128.onnx

Try to download it manually from here and put it into stable-diffusion-webui\models\insightface, replacing the existing one

ABZig avatar Nov 20 '23 11:11 ABZig

> This error may occur if there's something wrong with the model file inswapper_128.onnx
>
> Try to download it manually from here and put it into stable-diffusion-webui\models\insightface, replacing the existing one

Thanks man, this fixed my issue

cupofwater1 avatar Dec 17 '23 18:12 cupofwater1

If anyone is here because of a ComfyUI-InstantID plugin error, it may be because you did not download the corresponding models or did not put them in the location the plugin expects.

  1. You only need to put all the models from https://huggingface.co/DIAMONIK7777/antelopev2/tree/main into the specified location: ComfyUI/custom_nodes/ComfyUI-InstantID/models/antelopev2

xuewuzhijin avatar Apr 19 '24 06:04 xuewuzhijin

> @snnn that would be strange. As I showed above, I can read the .onnx file normally and print its information. It shouldn't be a protobuf problem.
>
> I think the reason may be that onnx just reads the model itself without the huge external data, while ort reads all of it and somehow breaks. Would you help me take a deeper look at models larger than 7 GB in ort?

I just tried. A current workaround might be to save the model with save_as_external_data=True, then load it with onnxruntime.InferenceSession(), but you have to make sure the ONNX ModelProto passes the checker.

Lewis-Lu avatar May 17 '24 07:05 Lewis-Lu