(feat) Introduce Router (litellm) into LLM and LLMConfig classes
Short description of the problem this fixes or functionality that this introduces. This may be used for the CHANGELOG
Introduce Router and configuration (litellm) into LLM and LLMConfig classes. Resolves #4056
Give a summary of what the PR does, explaining any non-trivial design decisions
- Initial PR to add configuration for, and use of, litellm's `Router` class
- In the `LLM` class, regular completion works as before; the router is only used if a router configuration (including models) exists
- Added an extensive example router configuration with multiple models to `config.template.toml` (see the TOML sketch after this list)
- The `load_from_toml` method loads the `router_config` section as a top-level section into the app's `llms["llm"]`. The router's configuration is complex as it is; nesting it under the main `llm` section in the toml file would make it even more complex and potentially unreadable, due to nesting, imho
- Added a new `test_llm_router.py` unit test file with router-related tests
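
For a rough sense of what such a top-level router section could look like in config.toml, here is a minimal sketch. The key names and schema below (`routing_strategy`, `model_list`, `litellm_params`, the placeholder keys and endpoints) follow litellm's Router conventions and are assumptions for illustration, not necessarily the exact layout added to `config.template.toml`:

```toml
# Illustrative only; see config.template.toml in this PR for the authoritative example.
[router_config]
routing_strategy = "simple-shuffle"
num_retries = 3

# Multiple deployments behind one logical model name; litellm's Router picks between them.
[[router_config.model_list]]
model_name = "gpt-4o"
litellm_params = { model = "gpt-4o", api_key = "<openai-api-key>" }

[[router_config.model_list]]
model_name = "gpt-4o"
litellm_params = { model = "azure/my-gpt-4o-deployment", api_key = "<azure-api-key>", api_base = "https://example.openai.azure.com" }
```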
Other thoughts
- I see this as an add-on, not as a replacement for the existing completion logic, as it requires a config.toml file and a rather complex configuration.
- It's only used if such a configuration exists in the config.toml file; otherwise the regular completion mechanism is used as before (a rough sketch of this dispatch follows after this list).
- The router handles retries internally; the specifics need to be looked up in the litellm docs.
- From their docs: "For RateLimitError we implement exponential backoffs"
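
As a hedged sketch of that conditional dispatch (the attribute and helper names here, e.g. `router_config` and `config.model`, are illustrative assumptions, not OpenHands' actual internals):

```python
import litellm
from litellm import Router


class LLM:
    """Illustrative wrapper; attribute names are assumptions, not the PR's actual code."""

    def __init__(self, config):
        self.config = config
        self.router = None
        # Only build a Router when a router config with models was loaded from config.toml.
        router_config = getattr(config, "router_config", None)
        if router_config and router_config.get("model_list"):
            # litellm's Router retries/falls back internally, e.g. exponential
            # backoff on RateLimitError (per the litellm docs).
            self.router = Router(**router_config)

    def completion(self, messages, **kwargs):
        if self.router is not None:
            # Routed path: litellm picks a deployment from the configured model list.
            return self.router.completion(model=self.config.model, messages=messages, **kwargs)
        # Regular path: plain litellm completion, exactly as before.
        return litellm.completion(model=self.config.model, messages=messages, **kwargs)
```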
Link of any specific issues this addresses