text-embeddings-inference
feat: Add an option for specifying model name
What does this PR do?
This PR introduces a new CLI argument, `--served-model-name`, which allows users to specify a custom model name to be returned in responses from the OpenAI-compatible endpoint.
This is particularly useful when the model is loaded from a local path (e.g., `/data/model`) and does not have an inherent name associated with it. By setting `--served-model-name`, users can override the default model identifier (which might be a generic or filesystem-based value) and provide a more descriptive or meaningful name in the API response. This improves clarity and consistency, especially when integrating with clients or tools that rely on the `model` field in the response for tracking or routing purposes.
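For illustration, here is a minimal sketch of how such a flag could be declared, assuming a clap-based CLI like TEI's router uses; the struct and field names are illustrative, not the PR's actual code:

```rust
// Minimal sketch, assuming a clap-based CLI; names are illustrative only.
use clap::Parser;

#[derive(Parser, Debug)]
struct Args {
    /// Model id or local path to load (e.g. /data/model)
    #[clap(long)]
    model_id: String,

    /// Optional name to return in the `model` field of OpenAI-compatible
    /// responses; falls back to `model_id` when unset
    #[clap(long)]
    served_model_name: Option<String>,
}

fn main() {
    let args = Args::parse();
    // The name that would be echoed back in v1/embeddings responses
    let served = args
        .served_model_name
        .unwrap_or_else(|| args.model_id.clone());
    println!("serving model as: {served}");
}
```

With something like this, launching with `--model-id /data/model --served-model-name my-embedder` would make the endpoint report `my-embedder` instead of the filesystem path.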
Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the contributor guideline, Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- [ ] Did you write any new necessary tests?
Who can review?
@Narsil @alvarobartt @kozistr
I have tested this to the best of my ability, but I'm not sure if I did the gRPC bits correctly, so if someone could help verify that, that would be great!
@alvarobartt Let me know if you have any changes/feedback for this PR!
@alvarobartt @Narsil I was wondering if you have any thoughts/concerns/feedback regarding the use case/implementation for this PR! Let me know if this is something you would like to discuss offline; I am open to that as well!
Hey @vrdn-23, thanks for opening this PR, and apologies that I'm just looking into it now! I'll check that everything works this week, and I'm happy to support it and add it within the next release 🤗
Thanks @alvarobartt for getting back to me. Let me know if there are any changes I need to make!
@alvarobartt just wanted to check in and see if this was still on your radar for review!
Hey @vrdn-23, yes! This is something I'd like to include for Text Embeddings Inference v1.9.0, but I'd like to first make sure that some patches land, apologies for the delay 🙏🏻
Also, given that we're adding `--served-model-name`, do you think it makes sense to validate that the `model` parameter provided on OpenAI Embeddings API requests (i.e., `v1/embeddings`) matches the value of either `--model-id` or `--served-model-name` unless it's provided as empty, similarly to how other providers with OpenAI-compatible interfaces (e.g., vLLM) do?
I think that would be great! By default, if the model name isn't specified in the request, vLLM still serves it, so we shouldn't have to break compatibility with the OpenAI format spec. I can add a validation check based on the model name specified in `--served-model-name`.
> matches the actual value of either `--model-id` or `--served-model-name` unless provided as empty
Just to add to this, it does seem that vLLM does not check for a match with `--model-id` if `--served-model-name` is specified. So maybe we check for validation like this (see the sketch after this list):
- If `--model-id` is specified and `--served-model-name` is not, validate against `--model-id` and return `model-id` in the response
- If `--model-id` is specified and `--served-model-name` is specified, validate against `--served-model-name` and return `served-model-name` in the response
- If `model` is passed in as empty, return `--served-model-name` in the response, else `--model-id` (which is what the PR currently does?)
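For what it's worth, here is a rough Rust sketch of those three rules as a standalone helper; the function name and error shape are hypothetical, not the PR's actual implementation:

```rust
// Rough sketch of the validation rules above; names and error shape are hypothetical.
fn validate_model(
    requested: Option<&str>,         // the `model` field from the request, if any
    model_id: &str,                  // value of --model-id
    served_model_name: Option<&str>, // value of --served-model-name, if set
) -> Result<String, String> {
    // --served-model-name takes precedence over --model-id, both for
    // validation and for the name echoed back in the response
    let expected = served_model_name.unwrap_or(model_id);
    match requested {
        // Missing or empty `model`: skip validation, return the served name
        None | Some("") => Ok(expected.to_string()),
        // Matching `model`: accept
        Some(m) if m == expected => Ok(expected.to_string()),
        // Anything else: reject, similarly to vLLM's behavior
        Some(m) => Err(format!("model `{m}` does not exist; expected `{expected}`")),
    }
}
```

The `Ok` value is what the handler would put in the response's `model` field.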
Yes that's right @vrdn-23, I'll validate and approve ✅
@alvarobartt sorry this took so long, but I've added the validation we talked about and did some testing! Please feel free to verify that everything works as expected!
Hey @vrdn-23 awesome thanks! I'll release a patch with some fixes this week, then start merging some PRs for v1.9.0 including this one 🤗