guardrails
Guardrails support of AzureOpenAI with openai>1.0.0 [bug]
Describe the bug
Does Guardrails-ai support AzureOpenAI with the openai library >1.0, which has a different LLM-call API from openai==0.28 (openai.chat.completions.create instead of openai.ChatCompletion.create)?
To Reproduce

openai.api_key = api_key
openai.azure_endpoint = azure_endpoint
openai.api_type = 'azure'
openai.api_version = api_version

raw_llm_response, validated_response, *rest = guard(
    # openai.ChatCompletion.create,
    openai.chat.completions.create,
    prompt_params={"document": content[:6000]},
    # engine="text-davinci-003",
    model='gpt-35-turbo-1106',
    max_tokens=2048,
    temperature=0.3,
)
Expected behavior
I expect it to call the LLM just as it did with openai==0.28 (openai.ChatCompletion.create), but I get this error instead:
TypeError: create() takes 1 argument(s) but 2 were given
I realized that this was fixed for OpenAI by setting the api_key through os.environ, but how can I do the same for AzureOpenAI?
Library versions:
guardrails-ai 0.3.2
openai 1.12.0
Thanks.
This seems more like it's passing too many arguments to the create call. Can you try removing temperature and max_tokens and see if it works?
Thanks for your reply. I removed temperature and max_tokens but still get the same error. The error is similar to these issues previously raised for openai>1.x:
https://github.com/guardrails-ai/guardrails/issues/514#issue-2042182930 https://github.com/guardrails-ai/guardrails/issues/504
but I am wondering what the solution is for Azure OpenAI, where these parameters should be set in order to call openai.chat.completions:

openai.api_type = "azure"
openai.api_version = "2023-05-15"
openai.api_base = os.environ.get("AZURE_OPENAI_API_BASE")
openai.api_key = os.environ.get("AZURE_OPENAI_API_KEY")

according to https://www.guardrailsai.com/docs/integrations/azure_openai. I even set these parameters as environment variables, but I still get the same error:

os.environ["OPENAI_API_TYPE"] = 'azure'
os.environ["OPENAI_API_VERSION"] = api_version
os.environ["AZURE_OPENAI_API_KEY"] = api_key
os.environ["AZURE_OPENAI_ENDPOINT"] = api_base
Thanks.
@shima-khoshraftar - I'm facing a similar issue. Could you please tell me which version of guardrails currently works with openai v0.28?
@Aman0509 The latest release of guardrails works with Azure OpenAI with openai==0.28.
I am wondering if there is any update to this issue? I am also trying to use guardrails with Azure OpenAI and openai>1.0.0.
You can now use AzureOpenAI with guardrails via litellm. Please follow this example (just substitute with AzureOpenAI) instead. Please let us know here if there are any issues.
> Please follow this example (just substitute with AzureOpenAI) instead.

Sorry for the confusion, what example are you referring to here?
My bad. I thought I added the link! Just updated the comment.
Thanks for letting us know about the update and for sending the link. However, the link does not seem to work: it cannot find examples/litellm_example.ipynb and throws a page-not-found error. Could you please update the link? Thanks.
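While the linked notebook is unavailable, here is a minimal sketch of what the suggested pattern presumably looks like: passing litellm.completion to guard() in place of openai.chat.completions.create. The rail file name and deployment name are hypothetical, and the sketch assumes Azure credentials are already exported via litellm's AZURE_API_KEY / AZURE_API_BASE / AZURE_API_VERSION environment variables:

```python
AZURE_MODEL = "azure/gpt-35-turbo-1106"  # hypothetical Azure deployment name

def run_guarded(document):
    # Imports deferred so the sketch is readable without the packages installed.
    import guardrails as gd
    from litellm import completion

    guard = gd.Guard.from_rail("my_rail.rail")  # hypothetical rail spec

    # litellm.completion replaces openai.chat.completions.create as the
    # LLM callable; the "azure/" prefix routes the call to Azure OpenAI.
    return guard(
        completion,
        model=AZURE_MODEL,
        prompt_params={"document": document[:6000]},
        max_tokens=2048,
        temperature=0.3,
    )
```

Calling run_guarded requires guardrails, litellm, and valid Azure credentials; the point here is only the wiring of the LLM callable.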
@shima-khoshraftar I'm also looking at this example, since I would like to use guardrails-ai with openai>1. It looks like @thekaranacharya is referring to the litellm package (link). They have a quick tutorial here:
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
messages = [{"content": "Hello, how are you?", "role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
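For Azure OpenAI specifically, litellm expects the credentials in its own environment variables and an "azure/<deployment-name>" model string. A minimal sketch of the Azure variant of the tutorial above (the variable names follow litellm's convention; the key, endpoint, API version, and deployment name are placeholders, not values from this thread):

```python
import os

# litellm's Azure env-var convention (placeholder values).
os.environ["AZURE_API_KEY"] = "your-azure-key"
os.environ["AZURE_API_BASE"] = "https://your-resource.openai.azure.com"
os.environ["AZURE_API_VERSION"] = "2023-05-15"

messages = [{"content": "Hello, how are you?", "role": "user"}]

def azure_call():
    # Import deferred so the sketch is readable without litellm installed;
    # running this for real requires valid credentials in the env vars above.
    from litellm import completion

    # The "azure/<deployment-name>" prefix routes the request to Azure OpenAI.
    return completion(model="azure/gpt-35-turbo-1106", messages=messages)
```

The only changes from the plain-OpenAI tutorial are the AZURE_* environment variables and the "azure/" model prefix.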
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 14 days.
This issue was closed because it has been stalled for 14 days with no activity.