[Bug]: Unable to use local models for agents - OpenAIError: The api_key client option must be set
Describe the bug
I am currently facing an issue where I cannot use local models for my agents in the AutoGen library. Whenever I try to run my application, I receive the following error:
OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
This issue prevents me from utilizing local models without an OpenAI API key.
Any guidance on how to configure AutoGen to work with local models without needing an OpenAI API key would be greatly appreciated.
Steps to reproduce
Here are the steps to reproduce the error:
1. Set up the AutoGen environment as per the documentation.
2. Attempt to configure and run an agent using a local model; a minimal sketch of such a failing configuration follows.
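For concreteness, here is a hedged sketch of the kind of configuration that triggers the error, assuming pyautogen is installed and no OPENAI_API_KEY is set in the environment. The model name and base_url are placeholders, not the exact values I used:

```python
# Hedged reproduction sketch: a local-model config with no api_key field.
# With OPENAI_API_KEY also absent from the environment, the underlying
# OpenAI client raises the error quoted above.
import autogen

config_list = [
    {
        "model": "mistral",                      # placeholder local model name
        "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
        # note: no "api_key" entry here
    }
]

assistant = autogen.AssistantAgent(
    name="assistant",
    llm_config={"config_list": config_list},
)
# The error surfaces once the OpenAI client is constructed/used,
# e.g. on the first chat turn.
```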
Model Used
GPT-2, Mistral, ...
Expected Behavior
The application should run successfully using local models without requiring an OpenAI API key.
Screenshots and logs
No response
Additional Information
No response
I ran into this and the simple "fix" is to put a non-empty API key into the llm_config_list. It can be any string; it just needs to exist.
In this example they just fill it with "Not Needed".
I agree it would be nice for it not to be a requirement in these cases, though.
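Concretely, a minimal sketch of that workaround (base_url and model name are placeholders for whatever your local server exposes):

```python
# The placeholder-key workaround: api_key just has to be a non-empty
# string; a local OpenAI-compatible server will ignore its value.
config_list = [
    {
        "model": "local-model",                  # whatever your server expects
        "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
        "api_key": "Not Needed",                 # any non-empty string works
    }
]
```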
Hey @Armanasq, what are you using to run the local model? LM Studio, LiteLLM, Ollama, etc.?
Hi @marklysze, thanks for your reply. I intend to use the Hugging Face pipeline directly (from transformers import pipeline). Is that possible?
Hey @Armanasq, hmmmm, I'm really not sure to be honest. If it has an OpenAI-compatible API there may be a chance; try what @PhysWiz314 recommended, set api_key='notneeded', and please let us know how you go.
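For reference, AutoGen's default client talks to an HTTP endpoint speaking the OpenAI API, so as far as I know a bare transformers pipeline can't be plugged in directly; it would need to sit behind an OpenAI-compatible server first. A quick way to sanity-check such a server from Python (the URL and model name are placeholders):

```python
# Hedged sketch: probe a local server with the official openai client to
# confirm it speaks the OpenAI chat-completions API that AutoGen relies on.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # placeholder local endpoint
    api_key="notneeded",                  # dummy; local servers ignore it
)
response = client.chat.completions.create(
    model="local-model",  # placeholder; match your server's model name
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)
```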
Thanks @marklysze, I will do that. Do you have any suggestions for running local models? Are there any alternative or recommended ways to connect local models with AutoGen?
I'm running Ollama locally (and using my graphics card) and find it works quite well. We are working on a dedicated AutoGen client class for Ollama; see PR #3056.
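In the meantime, Ollama already exposes an OpenAI-compatible endpoint at http://localhost:11434/v1, so the placeholder-key workaround above works with it today. A minimal sketch, assuming a model has been pulled with ollama pull (llama3 is just an example name):

```python
# Hedged sketch: point AutoGen at a local Ollama server through its
# OpenAI-compatible endpoint. The api_key is a dummy that Ollama ignores,
# but it must be non-empty to satisfy the OpenAI client.
import autogen

llm_config = {
    "config_list": [
        {
            "model": "llama3",                        # example: any pulled model
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
            "api_key": "ollama",                      # dummy, non-empty
        }
    ]
}

assistant = autogen.AssistantAgent(name="assistant", llm_config=llm_config)
user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",     # run unattended
    code_execution_config=False,  # no code execution needed for this demo
)
user_proxy.initiate_chat(assistant, message="Write a haiku about local models.")
```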
I'm using Ollama to run local models and hit the same error. As a quick workaround, I set a dummy environment variable with the required name before starting AutoGen Studio:
macOS / Linux
export OPENAI_API_KEY="your_api_key_here"
Windows (cmd)
set OPENAI_API_KEY=your_api_key_here
Windows (PowerShell)
$env:OPENAI_API_KEY="your_api_key_here"
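The same dummy variable can also be set from Python, e.g. at the top of a launcher script, as long as it runs before any OpenAI client is constructed:

```python
# Hedged sketch: set a dummy OPENAI_API_KEY programmatically. The value is
# arbitrary; a local model server never validates it.
import os

os.environ.setdefault("OPENAI_API_KEY", "not-needed")
```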