
[bug] OpenAI v1.x AsyncClient Does Not Work in Guardrails as Expected

Open CalebCourier opened this issue 10 months ago • 0 comments

Describe the bug The only way to use async clients with OpenAI v1.x in Guardrails is to wrap them as a custom LLM. This is because our identification criteria for OpenAI v1.x functions are weak: they rely on the statically exported methods.

To Reproduce

import openai
from guardrails import Guard


client = openai.AsyncClient()
guard = Guard.from_string(...)  # doesn't matter

response = guard(
    client.chat.completions.create,
    ...
)

Expected behavior Users should be able to use async clients the same way as synchronous clients.
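To illustrate what "the same way" could look like internally, here is a minimal, self-contained sketch of uniform dispatch over sync and async callables. The stand-in classes and the `call_llm` helper are hypothetical, not Guardrails or OpenAI code; the point is that `inspect.iscoroutinefunction` already distinguishes bound async methods from sync ones without any knowledge of the client's class.

```python
import asyncio
import inspect

# Stand-ins imitating the shape of a client's bound `create` method
# (hypothetical; not the real openai library).
class FakeAsyncCompletions:
    async def create(self, **kwargs):
        return {"choices": [{"message": {"content": "hi"}}]}

class FakeSyncCompletions:
    def create(self, **kwargs):
        return {"choices": [{"message": {"content": "hi"}}]}

def call_llm(llm_api, **kwargs):
    # Bound methods are handled correctly by inspect.iscoroutinefunction,
    # so a single entry point can serve both client flavors.
    if inspect.iscoroutinefunction(llm_api):
        return asyncio.run(llm_api(**kwargs))
    return llm_api(**kwargs)

print(call_llm(FakeAsyncCompletions().create))  # same result either way
print(call_llm(FakeSyncCompletions().create))
```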

Library version: 0.4.2

Additional context There are ways to identify a function as belonging to an OpenAI v1.x client without requiring that it be one of the static methods, but they rely heavily on reflection:

from openai import AsyncClient, resources

async_client = AsyncClient()
llm_api = async_client.chat.completions.create

# A bound method exposes its owning instance via __self__;
# plain functions have no __self__, so getattr defaults to None.
api_self = getattr(llm_api, "__self__", None)
llm_api_is_async_chat_completions = isinstance(
    api_self, resources.chat.completions.AsyncCompletions
)
print("Llm Api Is Async Chat Completions: ", llm_api_is_async_chat_completions)
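The same reflection pattern can be demonstrated without the openai package installed. The classes below are stand-ins (in the real check, the `isinstance` target would be `openai.resources.chat.completions.AsyncCompletions`); the mechanism being shown is just `__self__` plus `isinstance`.

```python
# Hypothetical stand-ins for the real resource classes.
class AsyncCompletions:
    async def create(self, **kwargs): ...

class OtherApi:
    async def create(self, **kwargs): ...

def is_async_chat_completions(llm_api) -> bool:
    # Bound methods carry their owning instance in __self__;
    # anything else (plain functions, lambdas) yields None here.
    api_self = getattr(llm_api, "__self__", None)
    return isinstance(api_self, AsyncCompletions)

print(is_async_chat_completions(AsyncCompletions().create))  # True
print(is_async_chat_completions(OtherApi().create))          # False
print(is_async_chat_completions(lambda **kw: None))          # False
```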

Additionally, the way we do things now (re-building the client), we would have to access private attributes:

api_self._client
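A self-contained sketch of why this is fragile: recovering the owning client from a bound method means walking `__self__` and then a private attribute. The classes and the `_client` attribute name below are stand-ins following the shape described in this issue, not the real openai internals.

```python
# Hypothetical stand-ins; `_client` mirrors the private attribute
# mentioned above, not a documented openai API.
class FakeClient:
    pass

class FakeAsyncCompletions:
    def __init__(self, client):
        self._client = client  # private: no stability guarantee
    async def create(self, **kwargs): ...

client = FakeClient()
llm_api = FakeAsyncCompletions(client).create

api_self = getattr(llm_api, "__self__", None)
# Rebuilding/reusing the client requires reaching past the public surface:
recovered = getattr(api_self, "_client", None)
print(recovered is client)  # True
```

Because `_client` is private, any openai release could rename it and silently break detection, which is the concern raised above.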

Also see additional context here: https://github.com/guardrails-ai/guardrails/issues/681

CalebCourier · Mar 28 '24