Better support of openailike LLM tools
Add a `timeout` parameter for better support of OpenAI-like LLM tools running on a local computer (like LM Studio). Reuse other existing parameters to improve the configuration of the `OpenAILike` object.
@jcbonnet-fwd could you please pull the changes from main? There are conflicts. Thanks!
Is it OK now?
I think we could add `timeout=openai_settings.request_timeout` to the openai LLM too, apart from openailike.
Sure, that makes sense. Since there is also an `ollama_settings.request_timeout`, one idea would be to move the `request_timeout` parameter to the more generic `llm` YAML object (and add it to llamacpp, sagemaker, and azopenai, if available there). What do you think?
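For illustration, a settings fragment under that proposed generic layout could look like the sketch below. The exact key names and defaults are assumptions, not the current schema:

```yaml
# Hypothetical layout: request_timeout lifted out of the per-backend
# sections (openai, openailike, ollama, ...) into the generic "llm" object.
llm:
  mode: openailike
  request_timeout: 120.0  # seconds; shared by all backends that support it

openailike:
  api_base: http://localhost:1234/v1  # e.g. a local LM Studio server
```

Each backend's constructor would then read `llm_settings.request_timeout` instead of its own per-backend copy, falling back to the client library's default when the key is absent.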