Add support for additional LLM and embedding providers
Description
The framework currently supports several LLM providers (OpenAI, Anthropic, Ollama, watsonx.ai, ...), but there are many other important providers that would benefit the community. This issue is about identifying and implementing support for additional providers that are widely used but not yet supported.
Task
- Identify LLM provider(s) that we currently do not support but would add significant value to the framework if we did
- Implement support for identified provider(s)
Hi, I'd like to contribute. Can I take on this issue?
@LunkadV absolutely! let us know what providers you are looking to integrate
@jenna-winkler I can add support for Cloud Qwen models and DeepSeek models
Great @LunkadV. Go for it.
Hi I sent in a PR over the weekend, please let me know if anything needs tweaking
Great work on the PR — I really appreciate the effort you’ve put in! I initially thought the providers had their own special handling, but since they’re OpenAI-compatible, it seems we could use the OpenAIChatModel class directly. From what I can tell, there doesn’t appear to be a difference between the two approaches. Could you clarify what additional value this implementation provides?
Essentially it's just letting the ChatModel know what to search for in the .env for its url and api_key, as well as setting a default model, and letting the user call a ChatModel with the name of the provider they are using.
Since almost all the adapters extend the LiteLLMChatModel, if you wanted to reduce code, most of them could be removed and the LiteLLMChatModel could be set to search the env for a generic api key and base url.
So I suppose it's a minor convenience to any user: if they are going to use a Qwen model, they can create a QwenChatModel in their code instead of configuring a generic ChatModel.
I see, therefore the DeepSeek provider is fine.
But what about the qwen provider? I can't see such a provider in LiteLLM. I found only this: https://docs.litellm.ai/docs/providers/dashscope. The implementation in #1281 uses openai and it does not even set the base_url. Either update it to the dashscope provider or remove it.
@LunkadV are you working on this? If not, can I work on it, @jenna-winkler?
I see, therefore the DeepSeek provider is fine.
But what about the qwen provider? I can't see such a provider in LiteLLM. I found only this: https://docs.litellm.ai/docs/providers/dashscope. The implementation in #1281 uses openai and it does not even set the base_url. Either update it to the dashscope provider or remove it.
That is a good point, I've updated it to Dashscope and pushed.
Hi, @jenna-winkler. I'm Dosu, and I'm helping the beeai-framework team manage their backlog and am marking this issue as stale.
Issue Summary:
- You proposed expanding the framework with more LLM and embedding providers.
- Contributor LunkadV submitted a PR adding Cloud Qwen (later Dashscope) and DeepSeek support.
- Maintainer Tomas2D suggested using the existing OpenAIChatModel class due to compatibility and requested clarification on separate provider classes.
- LunkadV explained the convenience of provider-specific classes for environment variable handling and updated the implementation.
- Another user expressed interest in contributing if LunkadV does not continue.
Next Steps:
- Please let me know if this issue is still relevant to the latest version of beeai-framework by commenting here.
- If I don’t hear back within 7 days, I will automatically close this issue.
Thanks for your understanding and contribution!