h2ogpt
question regarding model_lock
Hello all, I wonder if someone can tell me: when using model_lock to deploy multiple inferencing Gradio services, can I specify different LLM control parameters (temperature, top_p, top_k, etc.) on the CLI (either the server or the client command)? Thanks!