text-embeddings-inference

feat: Add an option for specifying model name

Open · vrdn-23 opened this pull request 4 months ago • 11 comments

What does this PR do?

This PR introduces a new CLI argument, --served-model-name, which allows users to specify a custom model name to be returned in responses from the OpenAI-compatible endpoint.

This is particularly useful in scenarios where the model is loaded from a local path (e.g., /data/model) and does not have an inherent name associated with it. By setting --served-model-name, users can override the default model identifier (which might be a generic or filesystem-based value) and provide a more descriptive or meaningful name in the API response. This helps improve clarity and consistency, especially when integrating with clients or tools that rely on the model field in the response for tracking or routing purposes.
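For illustration only, here is a minimal sketch of how the flag and its fallback behavior might look, assuming a clap 4 CLI with the `derive` feature; the struct and field names below are hypothetical and are not taken from the actual TEI router code:

```rust
// A minimal sketch only, assuming clap 4 with the `derive` feature enabled;
// names here are hypothetical, not the actual TEI router arguments.
use clap::Parser;

#[derive(Parser, Debug)]
struct Args {
    /// Model id or local path to load, e.g. "/data/model"
    #[arg(long)]
    model_id: String,

    /// Optional name to report in the `model` field of OpenAI-compatible responses
    #[arg(long)]
    served_model_name: Option<String>,
}

fn main() {
    let args = Args::parse();
    // Fall back to the model id (or path) when no served name is given
    let display_name = args
        .served_model_name
        .unwrap_or_else(|| args.model_id.clone());
    println!("Reporting model as: {display_name}");
}
```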

Before submitting

  • [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • [ ] Did you read the contributor guideline, Pull Request section?
  • [ ] Was this discussed/approved via a Github issue or the forum? Please add a link to it if that's the case.
  • [ ] Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
  • [ ] Did you write any new necessary tests?

Who can review?

@Narsil @alvarobartt @kozistr

I have tested this to the best of my ability, but I'm not sure if I did the gRPC bits correctly, so if someone could help verify that, that would be great!

vrdn-23 avatar Jul 24 '25 03:07 vrdn-23

@alvarobartt Let me know if you have any changes/feedback for this PR

vrdn-23 avatar Aug 06 '25 15:08 vrdn-23

@alvarobartt @Narsil I was wondering if you have any thoughts/concerns/feedback regarding the use-case/implementation for this PR! Let me know if this is something you would like to discuss offline and I am open to that as well!

vrdn-23 avatar Aug 19 '25 19:08 vrdn-23

Hey @vrdn-23 thanks for opening this PR and apologies I'm just looking into it now! But I'll check that everything works this week, and happy to support it and add it within the next release 🤗

alvarobartt avatar Sep 15 '25 16:09 alvarobartt

Thanks @alvarobartt for getting back to me. Let me know if there are any changes I need to make!

vrdn-23 avatar Sep 15 '25 20:09 vrdn-23

@alvarobartt just wanted to check in and see if this was still on your radar for review!

vrdn-23 avatar Oct 07 '25 20:10 vrdn-23

Hey @vrdn-23, yes! This is something I'd like to include for Text Embeddings Inference v1.9.0, but I'd like to first make sure that some patches land, apologies for the delay 🙏🏻

Also, given that we add --served-model-name, do you think it makes sense to validate that the model parameter provided on OpenAI Embeddings API requests (i.e., v1/embeddings) matches the value of either --model-id or --served-model-name, unless it is provided as empty, similarly to how other providers with OpenAI-compatible interfaces (e.g. vLLM) do?

alvarobartt avatar Oct 08 '25 14:10 alvarobartt

Also, given that we add --served-model-name, do you think it makes sense to validate that the model parameter provided on OpenAI Embeddings API requests (i.e., v1/embeddings) matches the value of either --model-id or --served-model-name, unless it is provided as empty, similarly to how other providers with OpenAI-compatible interfaces (e.g. vLLM) do?

I think that would be great! By default, if the model name isn't specified in the request, vLLM still serves it, so we shouldn't break compatibility with the OpenAI format spec. I can add a validation check based on the model name specified in --served-model-name.

vrdn-23 avatar Oct 08 '25 17:10 vrdn-23

matches the actual value of either --model-id or --served-model-name unless provided as empty,

Just to add to this, it does seem vLLM does not check for a match with --model-id if --served-model-name is specified. So maybe we check for validation like this (see the sketch below the list):

  • If --model-id is specified and --served-model-name is not, validate against --model-id and return --model-id in the response
  • If both --model-id and --served-model-name are specified, validate against --served-model-name and return --served-model-name in the response
  • If model is passed in as empty, return --served-model-name in the response if it is set, otherwise --model-id (which is what the PR currently does?)
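
A rough sketch of those rules, just to make the intent concrete; the function and parameter names are illustrative and not the actual TEI request handler:

```rust
// Rough sketch of the validation rules listed above; names are illustrative.
fn validate_requested_model(
    requested: Option<&str>,         // `model` field from the v1/embeddings request
    model_id: &str,                  // value of --model-id
    served_model_name: Option<&str>, // value of --served-model-name, if set
) -> Result<String, String> {
    // The name we validate against and return in the response:
    // --served-model-name when set, otherwise --model-id
    let expected = served_model_name.unwrap_or(model_id);
    match requested {
        // Empty or missing `model`: accept and report the expected name
        None | Some("") => Ok(expected.to_string()),
        // Otherwise the requested name must match the served name (or model id)
        Some(name) if name == expected => Ok(expected.to_string()),
        Some(name) => Err(format!(
            "model `{name}` does not match served model `{expected}`"
        )),
    }
}
```

For example, under these assumptions validate_requested_model(Some("my-embedder"), "/data/model", Some("my-embedder")) would return Ok("my-embedder"), while passing an empty model would fall back to the served name.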

vrdn-23 avatar Oct 08 '25 17:10 vrdn-23

Yes that's right @vrdn-23, I'll validate and approve ✅

alvarobartt avatar Oct 08 '25 18:10 alvarobartt

@alvarobartt sorry this took so long, but I've added the validation we talked about and did some testing! Please feel free to verify that everything works as expected!

vrdn-23 avatar Oct 29 '25 20:10 vrdn-23

Hey @vrdn-23 awesome thanks! I'll release a patch with some fixes this week, then start merging some PRs for v1.9.0 including this one 🤗

alvarobartt avatar Oct 30 '25 07:10 alvarobartt