
[Bug]: Unable to use local models for agents - OpenAIError: The api_key client option must be set

Open Armanasq opened this issue 1 year ago • 7 comments

Describe the bug

I am currently facing an issue where I cannot use local models for my agents in the AutoGen library. Whenever I try to run my application, I receive the following error:

OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable

This issue prevents me from utilizing local models without an OpenAI API key.

Any guidance on how to configure AutoGen to work with local models without needing an OpenAI API key would be greatly appreciated.

Steps to reproduce

Here are the steps to reproduce the error:

1. Set up the AutoGen environment as per the documentation.
2. Attempt to configure and run an agent using a local model.
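For illustration, a minimal sketch of the failing configuration (the model name and `base_url` are placeholders; what triggers the error is the missing `api_key` combined with an unset `OPENAI_API_KEY` environment variable):

```python
# Sketch of a config_list entry that reproduces the OpenAIError:
# no "api_key" field here and no OPENAI_API_KEY in the environment,
# so the underlying OpenAI client refuses to construct.
config_list = [
    {
        "model": "local-model",                  # placeholder model name
        "base_url": "http://localhost:8000/v1",  # placeholder local endpoint
        # "api_key" is absent -> OpenAIError at client construction
    }
]
```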

Model Used

GPT2, Mistral, ...

Expected Behavior

The application should run successfully using local models without requiring an OpenAI API key.

Screenshots and logs

No response

Additional Information

No response

Armanasq avatar Jul 31 '24 05:07 Armanasq

I ran into this, and the simple "fix" is to put a non-empty API key into the llm_config_list. It can be any string; it just needs to exist. In the example linked here, they fill it with "Not Needed". I agree it would be nice for it not to be a requirement in these cases, though.
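As a sketch of that workaround (the model name and `base_url` are placeholders for whatever local endpoint you run; only the non-empty `api_key` string matters):

```python
# Sketch: llm_config with a dummy API key for a local,
# OpenAI-compatible endpoint. Any non-empty string satisfies
# the client's api_key check; the value is never validated locally.
config_list = [
    {
        "model": "local-model",                  # placeholder model name
        "base_url": "http://localhost:8000/v1",  # your local server
        "api_key": "Not Needed",                 # any non-empty string works
    }
]

llm_config = {"config_list": config_list}
```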

PhysWiz314 avatar Jul 31 '24 20:07 PhysWiz314

Hey @Armanasq, what are you using to run the local model? LM Studio, LiteLLM, Ollama, etc.?

marklysze avatar Aug 01 '24 04:08 marklysze

Hi @marklysze Thanks for your reply. I intend to directly use the Hugging Face pipeline (from transformers import pipeline). Is it possible?

Armanasq avatar Aug 01 '24 05:08 Armanasq

Hey @Armanasq, hmmmm, I'm really not sure to be honest. If it has an OpenAI compatible API there may be a chance, try what @PhysWiz314 recommended and set api_key='notneeded' and please let us know how you go.

marklysze avatar Aug 01 '24 05:08 marklysze

> Hey @Armanasq, hmmmm, I'm really not sure to be honest. If it has an OpenAI compatible API there may be a chance, try what @PhysWiz314 recommended and set api_key='notneeded' and please let us know how you go.

Thanks @marklysze. I will do that. Do you have any suggestions regarding running local models? Are there any alternative or suggested methods to connect local models with AutoGen?

Armanasq avatar Aug 01 '24 05:08 Armanasq

I'm running Ollama locally (and using my graphics card), I find that quite good. We are working on a specific AutoGen client class for Ollama, PR #3056.
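As an interim sketch (before a dedicated client class lands): Ollama exposes an OpenAI-compatible endpoint at `http://localhost:11434/v1` by default, so a config along these lines should work. The model name is a placeholder for whichever model you have pulled:

```python
# Sketch: pointing AutoGen at Ollama's OpenAI-compatible endpoint.
# Ollama does not check the api_key, but the OpenAI client requires
# a non-empty string, so any dummy value will do.
config_list = [
    {
        "model": "llama3",                        # placeholder; any pulled model
        "base_url": "http://localhost:11434/v1",  # Ollama's default endpoint
        "api_key": "ollama",                      # dummy value, not checked
    }
]
```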

marklysze avatar Aug 01 '24 10:08 marklysze

I'm using Ollama to run local models but encountering the same error. As a quick workaround, I set a dummy environment variable with the required name before starting AutoGen Studio.

macOS / Linux:

```shell
export OPENAI_API_KEY="your_api_key_here"
```

Windows Cmd:

```shell
set OPENAI_API_KEY=your_api_key_here
```

Windows PowerShell:

```powershell
$env:OPENAI_API_KEY="your_api_key_here"
```

ahmetkakici avatar Aug 07 '24 06:08 ahmetkakici