transformers.js
Onnx model fails to load: Protobuf parsing failed
System Info
When I use the 3.4.0 release to load an ONNX model from Hugging Face, I get the following error. I do not see the same error with the 3.3.3 release.
```
Load model from <path_suffix>/node_modules/@huggingface/transformers/.cache/Xenova/bge-reranker-base/onnx/model.onnx failed:Protobuf parsing failed.
```
Environment/Platform
- [ ] Website/web-app
- [ ] Browser extension
- [x] Server-side (e.g., Node.js, Deno, Bun)
- [ ] Desktop app (e.g., Electron)
- [ ] Other (e.g., VSCode extension)
Description
The ONNX file is the same, but a change between releases causes the model to fail to load.
Reproduction
```js
import { AutoModelForSequenceClassification, AutoTokenizer } from '@huggingface/transformers';

const RERANKER_NAME = 'Xenova/bge-reranker-base';
const tokenizer = await AutoTokenizer.from_pretrained(RERANKER_NAME);
const model = await AutoModelForSequenceClassification.from_pretrained(RERANKER_NAME);
```
I encountered this issue when running npm with the --watch flag. Try cleaning the .cache folder under `node_modules/@huggingface/transformers` and running npm without the --watch flag.
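If you want to script that cleanup, here is a minimal sketch. It assumes the default cache location under node_modules, which matches the paths in the logs in this thread; adjust it if you have configured `env.cacheDir` differently.

```js
// clean-cache.mjs: delete the transformers.js model cache so the next run re-downloads.
// Assumption: the cache lives at the default location seen in this thread's error logs.
import { rmSync } from 'node:fs';

rmSync('node_modules/@huggingface/transformers/.cache', { recursive: true, force: true });
console.log('Cache cleared; models will be re-downloaded on the next run.');
```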
@CoderHam
Can you confirm the download finished successfully? It may be an incomplete file (check file size?)
I encountered this same problem, and the only way I fixed it was to replace the model.onnx file with the one from sentence-transformers/paraphrase-MiniLM-L6-v2. I'm using the Xenova/paraphrase-MiniLM-L6-v2 model.
I tried using other models like all-MiniLM-L6-v2, but I'm encountering the same issue. Interestingly, I only encounter this issue when running inside a Docker container (node:22-bullseye-slim), both as an AWS ECS task and locally. I used node:18-bullseye-slim before and upgraded, but I still had the same problem.
Here's the error log:
"message": "Error fetching news for chunk starting at index 0: Load model from /app/node_modules/@huggingface/transformers/.cache/Xenova/paraphrase-MiniLM-L6-v2/onnx/model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:165 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.\n"
Same experience with 'jina-reranker-v1-turbo-en'. I can confirm that 3.3.3 works as expected; the same model files that work with 3.3.3 fail under 3.4.x.
Hey folks, same issue here. It's difficult to reproduce though, as I hit it in a project but not in my minimal repro, of course. I am using npm to install packages and bun as the script runner in both.
The ".onnx" file downloaded seems to file as it is a small file (a few hundreds kb instead of 87mb for current version) but with a size varying when I delete the .cache folder in `node_modules/@huggingface/transformers.
I've noticed that this file is cached when I actually run the app, not during install, as explained within the Node.js tutorial.
I've pinpointed the fetch call to the src/utils/hub.js file but I don't get the error. It feels like an eagerly interrupted fetch call, after we got a 200 reponse for instance, but I lack enough logs to debug further.
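To test that theory, here is a rough sketch that compares the bytes actually received against the server's advertised Content-Length. The URL is just an example model from this thread; the real request is built inside src/utils/hub.js.

```js
// check-download.mjs: compare received bytes with the advertised Content-Length.
// Example URL only; substitute whichever model you are debugging.
const url = 'https://huggingface.co/Xenova/all-MiniLM-L6-v2/resolve/main/onnx/model.onnx';

const res = await fetch(url);
const expected = Number(res.headers.get('content-length'));
const received = (await res.arrayBuffer()).byteLength;

console.log(`status=${res.status} expected=${expected} received=${received}`);
if (expected && received !== expected) {
  console.error('Truncated download: the stream ended before the full file arrived.');
}
```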
As a side note: the @chroma-core/default-embed package seems to replace the older chromadb-default-embed, but the newer one doesn't link to GitHub. I had to check the documentation to confirm that it was not a scam package. It would be great to document this transition better.
I ran into this issue too. I found that the file /node_modules/@huggingface/transformers/.cache/Xenova/all-MiniLM-L6-v2/onnx/model.onnx doesn't exist at first. If you've only just started the project, the file hasn't finished downloading; you need to wait for model.onnx to finish downloading before it will work.
Same issue here today. I'm using render.com to host:
```
Load model from /opt/render/project/src/node_modules/@huggingface/transformers/.cache/sentence-transformers/all-MiniLM-L6-v2/onnx/model.onnx failed:Protobuf parsing failed.
```
I don't know whether the download finished during npm install, but it should raise an error at install time so that we don't hit this error at runtime.
> Can you confirm the download finished successfully? It may be an incomplete file (check file size?)
Thanks, my onnx files hadn't fully downloaded (in my case I was manually downloading the model). Fixed that and it's working now :).
Hey folks, would it be possible to escalate this issue to Hugging Face support somehow? It could be a backend issue, and the Hugging Face technical team may know what happens server-side: perhaps in some situations the server returns a 200 too eagerly, with an empty or nearly empty file. That would be a plausible explanation for this issue.
Another similar issue today, after deploying to prod:
```
Load model from /opt/render/project/src/node_modules/@huggingface/transformers/.cache/sentence-transformers/all-MiniLM-L6-v2/onnx/model.onnx failed:/onnxruntime_src/onnxruntime/core/graph/model.cc:165 onnxruntime::Model::Model(onnx::ModelProto&&, const PathString&, const IOnnxRuntimeOpSchemaRegistryList*, const onnxruntime::logging::Logger&, const onnxruntime::ModelOptions&) ModelProto does not have a graph.
```
Thanks all for your reports. Typically, this error suggests that the size of the [name].onnx file is 0, meaning the download was interrupted. Could you please inspect the size of the file to check whether the download was successful?
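If it helps, a quick way to run that check from Node (the path below is the one from the logs in this thread; adjust it for your model and setup):

```js
// check-size.mjs: verify the cached ONNX file exists and is non-empty before loading it.
import { existsSync, statSync } from 'node:fs';

// Path taken from the error logs in this thread; change it to match your model.
const file = 'node_modules/@huggingface/transformers/.cache/Xenova/all-MiniLM-L6-v2/onnx/model.onnx';

if (!existsSync(file)) {
  console.error('Model file is missing: the download never started or the cache was cleaned.');
} else {
  const { size } = statSync(file);
  console.log(`${file}: ${size} bytes`);
  if (size === 0) {
    console.error('Zero-byte file: the download was interrupted. Delete it and retry.');
  }
}
```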
> Thanks all for your reports. Typically, this error suggests that the size of the [name].onnx file is 0, meaning the download was interrupted. Could you please inspect the size of the file to check whether the download was successful?
Yes, the size is zero. You are right. But how can we prevent that in the pipeline? It should really raise an error when we run npm install...
This should only happen at runtime (i.e., whenever the model is loaded for the first time), not at `npm install`. Could this be a limitation of the hosting service? If so, I'd recommend pre-downloading the model to that directory to ensure file integrity.
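One way to do that pre-download is a small warm-up script run at build time, so a failed or truncated download fails the build instead of the first request. A minimal sketch, reusing the model from the original report (swap in your own):

```js
// warmup.mjs: run once after `npm install` (e.g. as a build step) to populate the cache.
// If the download is interrupted, this script throws and the deploy fails early.
import { AutoModelForSequenceClassification, AutoTokenizer } from '@huggingface/transformers';

const MODEL = 'Xenova/bge-reranker-base'; // example model from this thread

await AutoTokenizer.from_pretrained(MODEL);
await AutoModelForSequenceClassification.from_pretrained(MODEL);
console.log('Model downloaded and cached successfully.');
```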
Seems like an issue with the download. I deleted the .cache folder in @huggingface/transformers, downloaded it again, and it works now.