
Add support for additional LLM and embedding providers

Open jenna-winkler opened this issue 4 months ago • 11 comments

Description

The framework currently supports several LLM providers (OpenAI, Anthropic, Ollama, watsonx.ai, ...), but there are many other important providers that would benefit the community. This issue is about identifying and implementing support for additional providers that are widely used but not yet supported.

Task

  1. Identify LLM provider(s) that we do not currently support but that would add significant value to the framework
  2. Implement support for identified provider(s)

jenna-winkler avatar Oct 23 '25 17:10 jenna-winkler

Hi, I'd like to contribute. Can I take on this issue?

LunkadV avatar Nov 06 '25 20:11 LunkadV

@LunkadV absolutely! Let us know which providers you are looking to integrate.

jenna-winkler avatar Nov 06 '25 20:11 jenna-winkler

@jenna-winkler I can add support for Cloud Qwen models and DeepSeek models

LunkadV avatar Nov 07 '25 00:11 LunkadV

Great @LunkadV. Go for it.

Tomas2D avatar Nov 07 '25 08:11 Tomas2D

Hi, I sent in a PR over the weekend. Please let me know if anything needs tweaking.

LunkadV avatar Nov 11 '25 04:11 LunkadV

Great work on the PR — I really appreciate the effort you’ve put in! I initially thought the providers had their own special handling, but since they’re OpenAI-compatible, it seems we could use the OpenAIChatModel class directly. From what I can tell, there doesn’t appear to be a difference between the two approaches. Could you clarify what additional value this implementation provides?
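The observation above — that OpenAI-compatible providers may not need their own classes — can be sketched as follows. This is an illustrative sketch only, not framework code: the provider names, env-var names, and default models are assumptions, and the base URLs are what the providers publicly document as their OpenAI-compatible endpoints.

```python
from dataclasses import dataclass

# Illustrative sketch: if a provider exposes an OpenAI-compatible API, the
# only things a dedicated class would encode are its base URL, the env var
# holding the API key, and a default model. All values here are assumptions
# for illustration, not actual framework configuration.
@dataclass(frozen=True)
class ProviderConfig:
    base_url: str      # OpenAI-compatible endpoint
    api_key_env: str   # env var the key is read from
    default_model: str

PROVIDERS = {
    "deepseek": ProviderConfig(
        "https://api.deepseek.com", "DEEPSEEK_API_KEY", "deepseek-chat"
    ),
    "qwen": ProviderConfig(
        "https://dashscope.aliyuncs.com/compatible-mode/v1",
        "DASHSCOPE_API_KEY",
        "qwen-plus",
    ),
}

def openai_client_kwargs(provider: str, api_key: str) -> dict:
    """Arguments one could pass to any OpenAI-compatible chat client."""
    cfg = PROVIDERS[provider]
    return {"base_url": cfg.base_url, "api_key": api_key, "model": cfg.default_model}
```

Under this view, a separate provider class is a thin lookup table rather than new behavior, which is the crux of the question.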

Tomas2D avatar Nov 11 '25 09:11 Tomas2D

Essentially it's just letting the ChatModel know what to search for in the .env for its url and api_key, as well as setting a default model and letting the user call a ChatModel by the name of the provider they are using.

Since almost all the adapters extend LiteLLMChatModel, if you wanted to reduce code, most of them could be removed and LiteLLMChatModel could be set to search the env for a generic api key and base url.

So I suppose it's a minor convenience for users: if they are going to use a Qwen model, they can create a QwenChatModel in their code instead of a generic ChatModel.

LunkadV avatar Nov 11 '25 09:11 LunkadV

I see; in that case the DeepSeek provider is fine.

But what about the qwen provider? I can't see such a provider in LiteLLM; I found only this: https://docs.litellm.ai/docs/providers/dashscope. The implementation in #1281 uses openai and does not even set the base_url. Either update it to dashscope or remove it.

Tomas2D avatar Nov 12 '25 13:11 Tomas2D

@LunkadV are you working on this? If not, can I work on this, @jenna-winkler?

Vishnu-sai-teja avatar Nov 13 '25 14:11 Vishnu-sai-teja

> I see; in that case the DeepSeek provider is fine.
>
> But what about the qwen provider? I can't see such a provider in LiteLLM; I found only this: https://docs.litellm.ai/docs/providers/dashscope. The implementation in #1281 uses openai and does not even set the base_url. Either update it to dashscope or remove it.

That is a good point. I've updated it to Dashscope and pushed.

LunkadV avatar Nov 13 '25 18:11 LunkadV

Hi, @jenna-winkler. I'm Dosu, and I'm helping the beeai-framework team manage their backlog and am marking this issue as stale.

Issue Summary:

  • You proposed expanding the framework with more LLM and embedding providers.
  • Contributor LunkadV submitted a PR adding Cloud Qwen (later Dashscope) and DeepSeek support.
  • Maintainer Tomas2D suggested using the existing OpenAIChatModel class due to compatibility and requested clarification on separate provider classes.
  • LunkadV explained the convenience of provider-specific classes for environment variable handling and updated the implementation.
  • Another user expressed interest in contributing if LunkadV does not continue.

Next Steps:

  • Please let me know if this issue is still relevant to the latest version of beeai-framework by commenting here.
  • If I don’t hear back within 7 days, I will automatically close this issue.

Thanks for your understanding and contribution!

dosubot[bot] avatar Nov 28 '25 16:11 dosubot[bot]