paper-qa
Using rate_limit functionality in CLI and Python raises KeyError: 'model_list'
I use the following CLI command from the README to apply a rate limit to my requests
pqa --summary_llm_config '{"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}' ask 'Are there nm scale features in thermoelectric materials?'
and get the following error:
File "[...]/llms.py", line 392, in maybe_set_config_attribute
and len({m["model_name"] for m in data["config"]["model_list"]}) > 1
~~~~~~~~~~~~~~^^^^^^^^^^^^^^
KeyError: 'model_list'
I also get the same error when trying to set a rate limit when using the package in a Python environment:
answer = await agent_query(
    QueryRequest(
        query="What manufacturing challenges are unique to bispecific antibodies?",
        settings=Settings(
            llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
            summary_llm_config={"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}},
        ),
    )
)
Hey @jzqin! I'm here to help you with any bugs, questions, or contributions. Let's work together to solve this issue.
The KeyError: 'model_list' indicates that the model_list key is missing from the configuration dictionary when the rate limit functionality is used. The maybe_set_config_attribute function expects model_list to be present in any llm_config or summary_llm_config it receives [1].
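For reference, here is a minimal sketch of the failing access pattern, simplified from the traceback above (the local variable name is illustrative):

# Simplified reconstruction of the check in llms.py shown in the traceback:
# iterating config["model_list"] fails when only rate_limit is provided.
config = {"rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}
model_names = {m["model_name"] for m in config["model_list"]}  # raises KeyError: 'model_list'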
To resolve this issue, ensure that your configuration includes a model_list key. Here's how you can modify your CLI command and Python code to include the model_list:
For the CLI command, you can add a model_list to the summary_llm_config:
pqa --summary_llm_config '{"model_list": [{"model_name": "gpt-4o-2024-08-06", "litellm_params": {"model": "gpt-4o-2024-08-06"}}], "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"}}' ask 'Are there nm scale features in thermoelectric materials?'
For the Python environment, ensure that both llm_config and summary_llm_config include a model_list:
answer = await agent_query(
    QueryRequest(
        query="What manufacturing challenges are unique to bispecific antibodies?",
        settings=Settings(
            llm_config={
                "model_list": [{"model_name": "gpt-4o-2024-08-06", "litellm_params": {"model": "gpt-4o-2024-08-06"}}],
                "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"},
            },
            summary_llm_config={
                "model_list": [{"model_name": "gpt-4o-2024-08-06", "litellm_params": {"model": "gpt-4o-2024-08-06"}}],
                "rate_limit": {"gpt-4o-2024-08-06": "30000 per 1 minute"},
            },
        ),
    )
)
This should prevent the KeyError by ensuring that the model_list key is present in your configuration.
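If you use the same model for both configs, a small helper keeps the two dictionaries in sync. The make_llm_config function below is a hypothetical convenience (not part of paper-qa), and the Settings import follows the package's top-level exports:

from paperqa import Settings

# Hypothetical helper (not provided by paper-qa) that builds a config dict
# containing both the required model_list and the desired rate_limit.
def make_llm_config(model_name: str, rate_limit: str) -> dict:
    return {
        "model_list": [{"model_name": model_name, "litellm_params": {"model": model_name}}],
        "rate_limit": {model_name: rate_limit},
    }

settings = Settings(
    llm_config=make_llm_config("gpt-4o-2024-08-06", "30000 per 1 minute"),
    summary_llm_config=make_llm_config("gpt-4o-2024-08-06", "30000 per 1 minute"),
)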