relax litellm provider constraint
Fixing: https://github.com/leondz/garak/issues/755
- Remove the provider constraint on the litellm generator (see the sketch below)
- Fixes the breaking garak test test_litellm#test_litellm_openai
- Exceptions raised by litellm on model non-existence are bubbled up
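For context, a minimal sketch of the idea behind the relaxation (illustrative only, not the exact diff in this PR): instead of refusing to start when no provider is configured, the generator can defer provider detection to litellm itself, which already maps bare model names such as gpt-3.5-turbo to an API client. The helper below assumes litellm's public get_llm_provider function and its tuple return value.

import litellm

def detect_provider(model_name: str) -> str | None:
    """Best-effort provider detection, deferring to litellm's own routing."""
    try:
        # get_llm_provider returns (model, provider, dynamic_api_key, api_base)
        _, provider, _, _ = litellm.get_llm_provider(model_name)
        return provider
    except litellm.exceptions.BadRequestError:
        # litellm cannot tell which API client to use for this name
        return None

print(detect_provider("gpt-3.5-turbo"))       # expected: "openai"
print(detect_provider("non-existent-model"))  # expected: None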
DCO Assistant Lite bot: All contributors have signed the DCO ✍️ ✅
test_litellm#test_litellm_openai passes on this branch but fails on main
- Passes on this branch:
~/garak$ git branch
* bugfix/litellm_provider_validation
main
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item
tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
test passed!
.
======================================================== 1 passed in 3.98s ========================================================
- Fails on main:
~/garak$ python -m pytest tests/generators/test_litellm.py::test_litellm_openai -s
======================================================= test session starts =======================================================
platform linux -- Python 3.12.3, pytest-8.3.2, pluggy-1.5.0
rootdir: /home/arjun/garak
configfile: pyproject.toml
plugins: requests-mock-1.12.1, anyio-4.4.0, respx-0.21.1
collected 1 item
tests/generators/test_litellm.py 🦜 loading generator: LiteLLM: gpt-3.5-turbo
F
============================================================ FAILURES =============================================================
_______________________________________________________ test_litellm_openai _______________________________________________________
    @pytest.mark.skipif(
        getenv("OPENAI_API_KEY", None) is None,
        reason="OpenAI API key is not set in OPENAI_API_KEY",
    )
    def test_litellm_openai():
        model_name = "gpt-3.5-turbo"
>       generator = LiteLLMGenerator(name=model_name)
tests/generators/test_litellm.py:16:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <garak.generators.litellm.LiteLLMGenerator object at 0x7d1108f95d30>, name = 'gpt-3.5-turbo', generations = 10
config_root = <module 'garak._config' from '/home/arjun/garak/garak/_config.py'>
    def __init__(self, name: str = "", generations: int = 10, config_root=_config):
        self.name = name
        self.api_base = None
        self.api_key = None
        self.provider = None
        self.key_env_var = self.ENV_VAR
        self.generations = generations
        self._load_config(config_root)
        self.fullname = f"LiteLLM {self.name}"
        self.supports_multiple_generations = not any(
            self.name.startswith(provider)
            for provider in unsupported_multiple_gen_providers
        )
        super().__init__(
            self.name, generations=self.generations, config_root=config_root
        )
        if self.provider is None:
>           raise ValueError(
                "litellm generator needs to have a provider value configured - see docs"
E           ValueError: litellm generator needs to have a provider value configured - see docs
garak/generators/litellm.py:129: ValueError
===================================================== short test summary info =====================================================
FAILED tests/generators/test_litellm.py::test_litellm_openai - ValueError: litellm generator needs to have a provider value configured - see docs
======================================================== 1 failed in 1.06s ========================================================
Exception on model non-existence raised by litellm: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=non-existent-model
>>> from garak.generators.litellm import LiteLLMGenerator
>>> non_existent_model = "non-existent-model"
>>> generator = LiteLLMGenerator(name=non_existent_model)
🦜 loading generator: LiteLLM: non-existent-model
>>> generator.generate("This should raise an exception!")
Provider List: https://docs.litellm.ai/docs/providers
INFO:backoff:Backing off _call_model(...) for 0.0s (litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=non-existent-model
Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers)
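The retry loop in that log happens because the backoff decorator treats litellm's configuration error like a transient failure. A sketch of the narrower retry policy discussed later in this thread, written against a hypothetical standalone call_model helper rather than garak's actual generator code: back off only on litellm APIError and let BadRequestError surface immediately.

import backoff
import litellm

# Retry transient APIErrors only; configuration problems such as
# "LLM Provider NOT provided" (BadRequestError) are raised straight away.
@backoff.on_exception(backoff.fibo, litellm.exceptions.APIError, max_value=70)
def call_model(model_name: str, prompt: str) -> str:
    response = litellm.completion(
        model=model_name,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content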
I have read the DCO Document and I hereby sign the DCO
Thank you, will take a look
@arjun-krishna1, please follow the fine print at the end of the bot's DCO comment to trigger action again.
recheck
Thanks for the review @jmartin-tech. Updated the PR with a try block and only backing off on litellm APIError. Have also added a test that checks that a Bad Generator exception is raised if litellm is given a bad model name.
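A rough sketch of the kind of test described here; the exception name and import path are assumptions for illustration, not necessarily garak's exact API.

import pytest

from garak.exception import BadGeneratorException  # assumed exception and module path
from garak.generators.litellm import LiteLLMGenerator

def test_litellm_bad_model_name():
    # A model name litellm cannot route should surface as a garak-level error
    # rather than being retried by backoff.
    with pytest.raises(BadGeneratorException):
        generator = LiteLLMGenerator(name="non-existent-model")
        generator.generate("This should raise an exception!")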
To complete this constraint removal, the code needs to fully support passing model type detection on to litellm. This means removing the class-level ENV_VAR and raising for specific errors that are thrown when litellm cannot determine the target API client to utilize.
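One way the "raising for specific errors" suggestion could look inside the model call, sketched against a hypothetical call_model helper (the garak exception name and import path are assumptions):

import litellm

from garak.exception import BadGeneratorException  # assumed exception and module path

def call_model(model_name: str, prompt: str) -> str:
    try:
        response = litellm.completion(
            model=model_name,
            messages=[{"role": "user", "content": prompt}],
        )
    except litellm.exceptions.BadRequestError as e:
        # litellm could not determine which API client to use for this model name
        raise BadGeneratorException(
            f"litellm could not route model {model_name!r} - set a provider, see docs"
        ) from e
    return response.choices[0].message.content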
Hi @jmartin-tech, I think I've resolved all your comments so far. Please let me know if you have any other feedback or if this is good to go!
Thank you @arjun-krishna1 !!