bug: error when using embedchain with an OpenAI proxy base URL and key
🐛 Describe the bug
```python
import os

from embedchain import App

os.environ["OPENAI_API_BASE"] = "https://chimeragpt.adventblocks.cc/api/v1"
os.environ["OPENAI_API_KEY"] = "my_key"

elon_musk_bot = App()

# Embed online resources
elon_musk_bot.add("https://en.wikipedia.org/wiki/Elon_Musk")
elon_musk_bot.add("https://www.tesla.com/elon-musk")

response = elon_musk_bot.query("How many companies does Elon Musk run?")
print(response)
# Expected answer: 'Elon Musk runs four companies: Tesla, SpaceX, Neuralink, and The Boring Company.'
```
Error:

```
openai.error.AuthenticationError: Incorrect API key provided: E*******************************4. You can find your API key at https://platform.openai.com/account/api-keys.
```
I think we just aren't processing the proxy path at all. This shouldn't be too hard of a fix.
Same error, @cachho
I need something like `openai.api_base = "https://chimeragpt.adventblocks.cc/api/v1"`.
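A minimal workaround sketch along those lines, assuming the pre-1.0 `openai` Python SDK (which honours the module-level `openai.api_base` attribute); the proxy URL is the one from the report:

```python
import os

# Proxy endpoint from the bug report above
PROXY_BASE = "https://chimeragpt.adventblocks.cc/api/v1"

# Set the env var that embedchain/langchain may read...
os.environ["OPENAI_API_BASE"] = PROXY_BASE

# ...and, defensively, the module-level attribute the pre-1.0
# openai SDK reads, in case embedchain never forwards the env var.
try:
    import openai
    openai.api_base = PROXY_BASE
except ImportError:
    openai = None  # sketch only; openai may not be installed here
```

This must run before `App()` is constructed, since the client is configured at instantiation time.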
Agreed, it does not pick up the base URL. We also need to ensure that it calls AzureChatOpenAI when an Azure base URL is passed. The api_key in such instances is a b64 string that embeds the username and OpenAI key. The parameters to pass to AzureChatOpenAI are deployment_name, temperature, openai_api_base, openai_api_key, openai_api_version, and openai_api_type.
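The dispatch described above could be sketched as follows. `pick_llm_backend` is a hypothetical helper, not actual embedchain code; the keyword names mirror the AzureChatOpenAI parameters listed in this comment, and the env-var names are assumptions:

```python
import os

def pick_llm_backend(env: dict) -> dict:
    """Hypothetical sketch: choose between a plain ChatOpenAI client and
    AzureChatOpenAI based on the configured base URL and API type."""
    base = env.get("OPENAI_API_BASE", "")
    api_type = env.get("OPENAI_API_TYPE", "open_ai")
    if api_type == "azure" or ".openai.azure.com" in base:
        return {
            "backend": "AzureChatOpenAI",
            "kwargs": {
                "deployment_name": env.get("OPENAI_DEPLOYMENT_NAME"),
                "openai_api_base": base,
                "openai_api_key": env.get("OPENAI_API_KEY"),
                "openai_api_version": env.get("OPENAI_API_VERSION"),
                "openai_api_type": "azure",
                "temperature": 0,
            },
        }
    return {
        "backend": "ChatOpenAI",
        "kwargs": {
            "openai_api_base": base or None,  # None -> SDK default endpoint
            "openai_api_key": env.get("OPENAI_API_KEY"),
            "temperature": 0,
        },
    }
```

With this shape, both the proxy case from the original report and the Azure case route to the right client from the same environment variables.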
You can use the langchain provider instead of setting the proxy openai_api_key and openai_api_base directly, e.g.:

```python
openai_use_model = ChatOpenAI(
    model=OPENAI_MODEL,
    temperature=0.2,
    max_tokens=2000,
    openai_api_key=OPENAI_API_KEY,
    openai_api_base=OPENAI_BASE_URL,
)

config = {
    "llm": {
        "provider": "langchain",
        "config": {"model": openai_use_model},
    }
}
```
Closing as fixed.