
fix: apply api_key param to amoderation factory

Open · karter-liner opened this pull request 11 months ago · 3 comments

Apply api_key param to amoderation factory

Type

🐛 Bug Fix

Changes

    | Traceback (most recent call last):
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 188, in async_wrapper
    |     self._handle_exception(observation, e)
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 428, in _handle_exception
    |     raise e
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/langfuse/decorators/langfuse_decorator.py", line 186, in async_wrapper
    |     result = await func(*args, **kwargs)
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/src/external/moderation.py", line 23, in moderation
    |     response = await router.amoderation(
    |                ^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/router.py", line 2593, in new_function
    |     return await self._pass_through_moderation_endpoint_factory(  # type: ignore
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/router.py", line 2572, in _pass_through_moderation_endpoint_factory
    |     return await original_function(**kwargs)
    |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1175, in wrapper_async
    |     raise e
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1031, in wrapper_async
    |     result = await original_function(*args, **kwargs)
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/main.py", line 4353, in amoderation
    |     _openai_client: AsyncOpenAI = openai_chat_completions._get_openai_client(  # type: ignore
    |                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/litellm/llms/OpenAI/openai.py", line 570, in _get_openai_client
    |     _new_client: Union[OpenAI, AsyncOpenAI] = AsyncOpenAI(
    |                                               ^^^^^^^^^^^^
    |   File "/Users/karter/Desktop/lisa/.venv/lib/python3.12/site-packages/openai/_client.py", line 319, in __init__
    |     raise OpenAIError(
    | openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
    +------------------------------------
  • After litellm started using a factory function for the amoderation API, it keeps raising the exception above.


  • But this error should not happen when the api_key is applied properly.
  • Many projects use multiple API keys for the moderation API to mitigate rate-limit issues.
  • Therefore we should not rely on a single global OpenAI API key, but should use the API key registered for the model deployment.
  • I fixed this issue.
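The multi-key setup the bullets describe can be illustrated with a small, self-contained sketch (hypothetical class and helper names, no litellm imports) of how a router can rotate across deployments so each moderation call uses the key registered for the chosen deployment rather than one global key:

```python
import itertools


def build_model_list(api_keys):
    """Mirror the repro script: one deployment entry per OpenAI key."""
    return [
        {
            "model_name": "text-moderation-latest",
            "litellm_params": {
                "model": "text-moderation-latest",
                "api_key": key,
            },
        }
        for key in api_keys
    ]


class KeyRotatingRouter:
    """Hypothetical stand-in for a router's round-robin behavior:
    cycles through deployments and returns the per-deployment api_key."""

    def __init__(self, model_list):
        self.model_list = model_list
        self._cycle = itertools.cycle(model_list)

    def pick_api_key(self, model_name):
        # Advance the cycle until a deployment for model_name is found.
        for _ in range(len(self.model_list)):
            deployment = next(self._cycle)
            if deployment["model_name"] == model_name:
                return deployment["litellm_params"]["api_key"]
        raise ValueError(f"no deployment registered for {model_name}")
```

With two keys registered, successive calls alternate between them, which is the rate-limit-spreading pattern the PR wants the amoderation factory to honor.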

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

karter-liner avatar Dec 05 '24 20:12 karter-liner

Vercel deployment: litellm ✅ Ready — Dec 5, 2024 8:14pm (UTC)

vercel[bot] avatar Dec 05 '24 20:12 vercel[bot]

@karter-liner can I see code to repro the issue you were seeing?

ishaan-jaff avatar Dec 07 '24 22:12 ishaan-jaff

@ishaan-jaff

import os
import asyncio
from litellm import Router


# If the global env key is set, the call succeeds:
# os.environ["OPENAI_API_KEY"] = "api_key_1"


def init_router():
    openai_api_keys = ["api_key_1", "api_key_2"]
    model_list = []
    for openai_key in openai_api_keys:
        model_list.append(
            {
                "model_name": "text-moderation-latest",
                "litellm_params": {
                    "model": "text-moderation-latest",
                    "api_key": openai_key,
                },
            }
        )

    return Router(
        model_list=model_list,
    )

async def main():
    router = init_router()
    stream = await router.amoderation(
        model="text-moderation-latest",
        input="hi, who are you?",
    )
    # it returns openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
    print(stream)


if __name__ == "__main__":
    asyncio.run(main())
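The failure in the traceback can be reproduced without litellm at all: the client constructor resolves its key from the explicit argument first and the environment second, and errors when both are missing. A minimal stand-in (hypothetical class, not the real openai SDK) of that resolution order:

```python
import os


class FakeModerationClient:
    """Minimal stand-in for the OpenAI client's key resolution:
    an explicit api_key wins; otherwise fall back to the
    OPENAI_API_KEY environment variable; otherwise raise."""

    def __init__(self, api_key=None):
        if api_key is None:
            api_key = os.environ.get("OPENAI_API_KEY")
        if api_key is None:
            raise ValueError(
                "The api_key client option must be set either by passing "
                "api_key to the client or by setting the OPENAI_API_KEY "
                "environment variable"
            )
        self.api_key = api_key
```

Before the fix, the amoderation factory constructed the client without forwarding the deployment's registered api_key, so this fallback chain bottomed out in the error above; forwarding the key makes the first branch win and the env variable becomes optional.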

karter-liner avatar Dec 08 '24 21:12 karter-liner

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

CLAassistant avatar Apr 22 '25 22:04 CLAassistant
