Documentation about default values of model parameters
Feature request
In the documentation, there is not enough info about the default values TGI enforces if a client request does not contain parameters like `temperature`, `top_p`, presence/frequency penalties, etc.
For example, what value would TGI set if:
- `temperature=null` is sent, or
- `temperature` is not present at all in the client request?
This would help users adjust their client codebase when migrating to/from different serving frameworks.
As far as I looked into the codebase, I was unable to find the place where this is done.
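To make the question concrete, here is a minimal Rust sketch (not TGI's actual request struct; the field names and the `1.0` fallback are placeholders) of why the two cases usually collapse into the same code path: with serde, both an explicit `null` and a missing field deserialize to `None`, so whatever fallback the server applies covers both.

```rust
// Illustrative sketch only, not TGI's actual request struct.
// With serde, an `Option` field deserializes to `None` both when the client
// sends `"temperature": null` and when the field is missing entirely.
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct GenerateParameters {
    temperature: Option<f32>,
    top_p: Option<f32>,
}

fn main() {
    let explicit_null: GenerateParameters =
        serde_json::from_str(r#"{ "temperature": null }"#).unwrap();
    let absent: GenerateParameters = serde_json::from_str("{}").unwrap();

    // Both cases end up as `None`, so the same server-side fallback applies.
    // The 1.0 below is a placeholder, not TGI's documented default.
    println!("{}", explicit_null.temperature.unwrap_or(1.0)); // 1.0
    println!("{}", absent.temperature.unwrap_or(1.0));        // 1.0
    println!("{:?}", absent);
}
```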
Motivation
Documentation for default model parameters
Your contribution
I can create a PR if someone can point me to the correct part of the codebase.
Thank you @mohittalele for pointing this out!
You'll find the markdown of the docs in here: https://github.com/huggingface/text-generation-inference/tree/main/docs/source
Were you thinking of something similar to what's documented for the CLI options, but for the client options? Or were you thinking about more clearly documenting what the server does if certain client values are passed?
These links might be useful:
- There's also this, which kinda serves as a basis for the client documentation: https://github.com/huggingface/text-generation-inference/tree/main/clients/python#types
- Note that there are two client libraries: `text_generation`, which is found in this repo, and the `InferenceClient` from `huggingface_hub` (https://huggingface.co/docs/huggingface_hub/guides/inference), which probably increases confusion. Edit: just confirmed that `huggingface_hub` is the preferred one.
Would love it if you could take up getting a PR going, thanks a lot for bringing this up 🙌
@ErikKaum I am more looking for docs around what the server does if certain client values are not passed.
For that, I tried to find the actual place in the code where the parameters are set.
So, for example, what value does TGI take for `temperature` if the user did not specify it in the request? I am looking to find this out in the TGI codebase and then document it afterwards.
Okay sounds reasonable 👍 I'd say this is a good place to start: https://github.com/huggingface/text-generation-inference/blob/0b95693fb8b9640283a0fcf40ac4dd2ab15187eb/router/src/lib.rs#L733
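In case it helps whoever picks this up: the pattern to look for around there is the point where the request's `Option` fields collapse into concrete values. A rough sketch of that shape, with placeholder names and fallback numbers that are not TGI's real defaults (those need to be read off the linked `lib.rs`):

```rust
// Hypothetical sketch of the resolution step to look for in router/src/lib.rs.
// The fallback values (1.0, 0.9, 20) are placeholders, not TGI's real defaults.
#[derive(Debug, Default)]
struct GenerateParameters {
    temperature: Option<f32>,
    top_p: Option<f32>,
    max_new_tokens: Option<u32>,
}

#[derive(Debug)]
struct ValidParameters {
    temperature: f32,
    top_p: f32,
    max_new_tokens: u32,
}

// Somewhere in the router's validation path the `Option`s collapse into
// concrete values; those numbers are what's worth documenting.
fn resolve(p: GenerateParameters) -> ValidParameters {
    ValidParameters {
        temperature: p.temperature.unwrap_or(1.0),
        top_p: p.top_p.unwrap_or(0.9),
        max_new_tokens: p.max_new_tokens.unwrap_or(20),
    }
}

fn main() {
    // A request that omitted every sampling parameter.
    let resolved = resolve(GenerateParameters::default());
    println!("{:?}", resolved);
}
```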