MLServer
[Protobuf] TypeError: Descriptors cannot not be created directly.
We get this error on startup of the inference server with `mlserver==1.0.1`, which completely blocks the server from handling requests:
```
Traceback (most recent call last):
  File "/usr/local/bin/mlserver", line 5, in <module>
    from mlserver.cli import main
  File "/usr/local/lib/python3.8/site-packages/mlserver/__init__.py", line 2, in <module>
    from .server import MLServer
  File "/usr/local/lib/python3.8/site-packages/mlserver/server.py", line 15, in <module>
    from .grpc import GRPCServer
  File "/usr/local/lib/python3.8/site-packages/mlserver/grpc/__init__.py", line 1, in <module>
    from .server import GRPCServer
  File "/usr/local/lib/python3.8/site-packages/mlserver/grpc/server.py", line 8, in <module>
    from .servicers import InferenceServicer, ModelRepositoryServicer
  File "/usr/local/lib/python3.8/site-packages/mlserver/grpc/servicers.py", line 5, in <module>
    from . import dataplane_pb2 as pb
  File "/usr/local/lib/python3.8/site-packages/mlserver/grpc/dataplane_pb2.py", line 25, in <module>
    _SERVERLIVEREQUEST = _descriptor.Descriptor(
  File "/usr/local/lib/python3.8/site-packages/google/protobuf/descriptor.py", line 313, in __new__
    _message.Message._CheckCalledFromGeneratedFile()
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).

More information: https://developers.google.com/protocol-buffers/docs/news/2022-05-06#python-updates
```
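As a sketch of workaround 2 from the message above (this is not MLServer code; where exactly to put it depends on your entrypoint), the pure-Python fallback has to be selected before `google.protobuf` is imported anywhere in the process:

```python
import os

# Workaround 2 from the protobuf error message: force the pure-Python
# implementation. This must run before google.protobuf is first imported,
# so it typically belongs at the very top of the entrypoint script, or in
# the container environment (e.g. an ENV line in a Dockerfile).
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"
```

Note that this trades the C++ parsing backend for a much slower pure-Python one, so it is only a stopgap until the stubs are regenerated or protobuf is downgraded.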
Hey @jakob-ed ,
Is this from a custom Docker image built with `mlserver build .`? From the looks of it, there may be two versions of MLServer clashing with each other.
Also, could you try with `mlserver==1.1.0.dev6` instead? This is our latest nightly build.
Hi, thanks for getting back so quickly! We are running MLServer in a Docker image, but we don't use `mlserver build .`. I don't see how there could be multiple versions of MLServer clashing with each other in our setup: we only have mlserver and one other dependency (which doesn't depend on mlserver itself), and the Dockerfile is very basic.
However, everything works fine with `mlserver==1.1.0.dev6`.
Hey @jakob-ed ,
After having a deeper look into this one, I can confirm that we can replicate it on our end. That is, running the mlserver CLI directly after installing `mlserver==1.0.1` ends in the error above.
The protobuf package is not bounded at the moment, which seems to have caused this issue. Therefore, a temporary workaround is to also pin `protobuf==3.20.1`, as in:

```
pip install mlserver==1.0.1 protobuf==3.20.1
```
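For a container-based setup like the one described above, the same pin can be baked into the image build. A minimal sketch, assuming a pip-based Dockerfile (base image and entrypoint are illustrative):

```dockerfile
FROM python:3.8-slim

# Pin protobuf alongside mlserver so pip cannot resolve a 4.x release,
# which rejects _pb2 stubs generated with protoc < 3.19.0.
RUN pip install mlserver==1.0.1 protobuf==3.20.1

# Assumes the model settings live in the working directory.
ENTRYPOINT ["mlserver", "start", "."]
```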
Given that we should be releasing MLServer 1.1.0 next week (hopefully :crossed_fingers:), and that 1.1.0 fixes this issue, would it still make sense to release a 1.0.2 patch for the above?
Thanks for the tip! We can wait for 1.1.0.
Updating to 1.1.0 didn't solve this problem for us; the error still occurs at inference time.
The solution was to add a version guard `protobuf<4.0.0`.
For anyone looking for a proper solution: https://stackoverflow.com/a/72493690/6194097
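One way to make that guard fail fast at startup, rather than erroring mid-inference, is a small version check. This helper is illustrative, not part of MLServer:

```python
def check_protobuf_pin(installed_version: str, max_major: int = 3) -> None:
    """Raise early if the installed protobuf major version exceeds the pin.

    With protobuf 4.x installed, gRPC stubs generated by protoc < 3.19.0
    fail with the descriptor TypeError shown earlier in this thread.
    """
    major = int(installed_version.split(".")[0])
    if major > max_major:
        raise RuntimeError(
            f"protobuf {installed_version} is installed; "
            f"pin protobuf<{max_major + 1}.0.0 or regenerate the stubs "
            "with protoc >= 3.19.0"
        )

check_protobuf_pin("3.20.1")  # fine: 3.x satisfies protobuf<4.0.0
```

At startup you could feed it the real installed version, e.g. `importlib.metadata.version("protobuf")`, and get a clear error message instead of the opaque descriptor failure.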
Hey everyone,
Following up on this one, we haven't been able to replicate any issues with `mlserver==1.1.0`.
@illeatmyhat would you be able to share the output of `pip freeze`?
Sure. This is after applying `protobuf<4.0.0`, though.
```
absl-py==1.1.0
aiokafka==0.7.2
anyio==3.6.1
asgiref==3.5.2
astunparse==1.6.3
blis==0.7.8
cachetools==5.2.0
catalogue==2.0.7
certifi==2022.6.15
charset-normalizer==2.0.12
click==8.1.3
cycler==0.11.0
cymem==2.0.6
fastapi==0.78.0
filelock==3.7.1
flake8==4.0.1
flatbuffers==1.12
fonttools==4.33.3
gast==0.4.0
google-auth==2.8.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.46.3
gunicorn==20.1.0
h11==0.13.0
h5py==3.7.0
huggingface-hub==0.8.1
idna==3.3
importlib-metadata==4.11.4
Jinja2==3.1.2
joblib==1.1.0
kafka-python==2.0.2
keras==2.9.0
Keras-Preprocessing==1.1.2
kiwisolver==1.4.3
langcodes==3.3.0
lexicalrichness==0.1.9
libclang==14.0.1
Markdown==3.3.7
MarkupSafe==2.1.1
matplotlib==3.5.2
mccabe==0.6.1
mlserver==1.1.0
murmurhash==1.0.7
nltk==3.7
numpy==1.22.4
oauthlib==3.2.0
opt-einsum==3.3.0
packaging==21.3
pandas==1.4.2
pathy==0.6.1
Pillow==9.1.1
preshed==3.0.6
prometheus-client==0.14.1
protobuf==3.20.1
py-grpc-prometheus==0.7.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycodestyle==2.8.0
pydantic==1.8.2
pyflakes==2.4.0
pyparsing==3.0.9
python-dateutil==2.8.2
python-dotenv==0.20.0
pytz==2022.1
PyYAML==6.0
regex==2022.6.2
requests==2.28.0
requests-oauthlib==1.3.1
rouge-score==0.0.4
rsa==4.8
scikit-learn==1.1.1
scipy==1.8.1
sentencepiece==0.1.96
six==1.16.0
smart-open==5.2.1
sniffio==1.2.0
spacy==3.3.1
spacy-legacy==3.0.9
spacy-loggers==1.0.2
srsly==2.4.3
starlette==0.19.1
starlette-exporter==0.13.0
tensorboard==2.9.1
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.9.1
tensorflow-estimator==2.9.0
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.26.0
termcolor==1.1.0
textblob==0.17.1
thinc==8.0.17
threadpoolctl==3.1.0
tokenizers==0.12.1
torch==1.11.0
tqdm==4.64.0
transformers==4.20.1
typer==0.4.1
typing-extensions==4.2.0
urllib3==1.26.9
uvicorn==0.17.6
uvloop==0.16.0
wasabi==0.9.1
Werkzeug==2.1.2
wrapt==1.14.1
zipp==3.8.0
```
Thanks for sharing those, @illeatmyhat.
That's strange. That set of deps is not too different from what I'm trying with `protobuf==4.21.2`, and I'm unable to replicate the issue on my end.
I'm wondering whether this was due to a particular version within the 4.x branch of protobuf, which may have since been fixed. Would you be able to install `protobuf==4.21.2` to see if you still hit the same issue?
The version of protobuf should now be pinned more tightly (plus, there are extra tests covering this), so hopefully this shouldn't be an issue anymore.