Can't use LiteLLM for ML backend because of pinned jsonschema and jinja2 dependencies
**Describe the bug**
Currently it's impossible to use LiteLLM as an ML backend because Label Studio pins jsonschema==3.2.0 and Jinja2==3.0.3, while LiteLLM 1.41.21 needs newer versions. LiteLLM specifies version ranges, though, so it would be possible to reconcile the constraints if the pins were relaxed.
```
5.061 The conflict is caused by:
5.061     label-studio-sdk 1.0.4 depends on jsonschema==3.2.0
5.061     litellm 1.41.21 depends on jsonschema<5.0.0 and >=4.22.0
6.340 The conflict is caused by:
6.340     label-studio-ml 1.0.9 depends on Jinja2==3.0.3
6.340     mlflow 2.14.3 depends on Jinja2<4 and >=2.11; platform_system != "Windows"
6.340     spacy 3.7.5 depends on jinja2
6.340     litellm 1.41.21 depends on jinja2<4.0.0 and >=3.1.2
```
Now I realise that bumping a major version of jsonschema might be a bigger job if there's a breaking change, but I went through the LiteLLM releases and the relatively recent 1.41.1 doesn't have jsonschema as a dependency at all, so we could temporarily live with that.
In other words, is there any chance you could relax the pin to Jinja2>=3.0.3, unless you're very certain something breaks in a later version?
And, if possible, could you look into upgrading jsonschema too? There was a security fix in v4.18.0 in particular, and 3.2.0 is almost five years old.
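For illustration, a constraint set along these lines (hypothetical, and not tested against Label Studio itself) would satisfy LiteLLM 1.41.21's declared ranges while staying within the same majors:

```
# Hypothetical relaxed pins; whether Label Studio actually works with
# these versions would still need to be verified.
Jinja2>=3.1.2,<4        # litellm 1.41.21 requires jinja2>=3.1.2,<4.0.0
jsonschema>=4.22.0,<5   # litellm 1.41.21 requires jsonschema>=4.22.0,<5.0.0
```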
**To Reproduce**
Steps to reproduce the behavior:
- Create a `requirements.txt` with the following contents:
  ```
  litellm==1.41.21
  label-studio-ml==1.0.9
  label-studio-sdk==1.0.4
  ```
- Try to install with `pip install -r requirements.txt`
**Expected behavior**
The packages can be installed.
**Environment (please complete the following information):**
- OS: macOS 14.5 and Docker `python:3.11.9-bookworm`
- Label Studio Version: `label-studio-ml==1.0.9`, `label-studio-sdk==1.0.4`
On the question of dependency incompatibilities in general, would it be possible to use a separate VM for LiteLLM instead of installing it in the same environment as Label Studio? Great callout about the jsonschema dependency being several years old, we'll see if we can update it.
> To the question of dependencies incompatibilities in general, would it be possible to use a separate VM for LiteLLM instead of installing it in the same environment as Label Studio?
First of all thanks for picking this up!
We're using LiteLLM as a library rather than as a proxy, which is much simpler, but it's used in a fair bit of the rest of our codebase, so changing that throughout would take some work.
The easiest workaround we found in the meantime was to copy the minimum amount of code from the label-studio-ml library (like LabelStudioMLBase) into a file in our shared library, so we can run the LLM-based annotation prediction with LiteLLM. It's a bit messy and definitely not great for maintainability though 😅
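To sketch what that vendoring workaround looks like in spirit: the real `LabelStudioMLBase` API is much richer than this, and every name below is illustrative rather than the actual label-studio-ml surface. The LiteLLM call is stubbed behind a single function so the sketch itself carries no conflicting dependencies.

```python
# Minimal sketch of the vendoring workaround: copy just enough of a
# LabelStudioMLBase-style class to serve predictions, with the LLM
# call isolated behind one function. Names are illustrative, not the
# actual label-studio-ml API.

def llm_completion(prompt: str) -> str:
    """Placeholder for the LiteLLM call (e.g. litellm.completion(...)).

    Stubbed out here so this sketch runs without litellm installed.
    """
    return f"label for: {prompt}"

class VendoredMLBase:
    """Cut-down stand-in for label_studio_ml.model.LabelStudioMLBase."""

    def __init__(self, label_config: str = "") -> None:
        self.label_config = label_config

    def predict(self, tasks: list[dict]) -> list[dict]:
        # One prediction per Label Studio task, roughly mimicking the
        # {"result": [...]} shape the Label Studio API expects.
        predictions = []
        for task in tasks:
            text = task.get("data", {}).get("text", "")
            label = llm_completion(text)
            predictions.append({"result": [{"value": {"text": [label]}}]})
        return predictions

backend = VendoredMLBase()
preds = backend.predict([{"data": {"text": "hello"}}])
```

Keeping the LLM call behind one seam like `llm_completion` is also what makes it easy to swap LiteLLM out later if the dependency situation changes.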