aisuite
Why re-invent the wheel?
Litellm and probably a few others have already done all of this. This energy and time would be better spent on litellm.
I mean, I guess there are a couple of possible reasons, but I don't see them being mentioned.
You bring up a good point—tools like Litellm and others are already out there and do a great job. But honestly, history is full of success stories that started with someone "reinventing the wheel." Python, for example, came along when we already had C, Perl, and Java, yet it succeeded because it focused on simplicity and readability. It wasn’t just about what it did, but how it did it.
The thing is, progress doesn’t always come from doing something completely new. Sometimes it’s about taking an existing idea and iterating on it—making it a little better, a little different, or just more aligned with a specific need. And often, the act of building itself leads to a deeper understanding of the problem space and opens doors to innovation.
Sure, Litellm and others are great, but there’s always room for someone to come in with fresh eyes and create something unexpected. Supporting that kind of work isn’t just about the outcome—it’s about fostering curiosity, creativity, and growth. Even if the result isn’t a revolution, the process itself is invaluable. Who knows? Maybe this iteration will be the one that sparks something big.
Plus, these tools themselves are also reinventing the wheel in a way. Litellm and others often act as wrappers around powerful tools like OpenAI, Mistral, and others. But think about it—weren’t these tools also reinventing the wheel when OpenAI first made its mark? OpenAI wasn’t the first AI company, and tools like Mistral built on those foundations, refining the approach, targeting specific needs, and pushing boundaries. Reinvention is just part of how progress works.
Looking into LiteLLM's source makes me want to reinvent the wheel
Cool reply. Tools only become the tools when they address a specific need.
Are there any highlights of this project in that comparison?
Yeah I mean that's the real question, why is this wheel different?
Hi @TashaSkyUp, I am not a contributor here or to any of the existing libraries, but I am currently checking the available solutions for easily querying different LLM providers.
Also I like to look into other people's codebases and judge them, so I can give you some elements of response from the point of view of a python package developer.
First my use-case: all I want is a python API for sending messages to LLM providers and getting a completion back
Disclaimer: I have not (yet) used either of these two packages.
aisuite code is simple and clean
The main point is that aisuite is light, simple and focused on one task: providing a simple python API to query the completions APIs of LLM providers.
From the pyproject.toml it has 0 mandatory dependencies; the only dependencies are optional ones for when you want to query a specific LLM provider (e.g. mistralai if querying the mistral provider), which is nice because it then provides a unified API over every provider's APIs and packages. Note they forgot to add the httpx dependency that is required for the fireworks provider.
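To make the "one simple unified API" point concrete, here is roughly what the advertised usage looks like. This is a minimal sketch in the spirit of the project's README-style examples; the model names, temperature and exact response shape are my assumptions, and you would install the matching extra for each provider (e.g. `pip install "aisuite[openai]"`).

```python
import aisuite as ai

client = ai.Client()  # provider API keys are picked up from the environment

messages = [
    {"role": "system", "content": "Answer in one short sentence."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Same call shape regardless of the backend; the "provider:model" prefix selects the provider.
for model in ["openai:gpt-4o", "anthropic:claude-3-5-sonnet-20240620"]:
    response = client.chat.completions.create(model=model, messages=messages, temperature=0.75)
    print(model, "->", response.choices[0].message.content)
```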
How to add a new provider is clear and simple
This looks like a decently built lib overall. It provides a few parent abstract classes that are used to define each provider. As simple as it should be, and it could easily be extended to add more functionality.
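To illustrate the "parent abstract class per provider" design, here is a rough sketch of the shape such an extension takes. The class and method names below mirror the description above but are illustrative, not the library's exact interface.

```python
from abc import ABC, abstractmethod


class Provider(ABC):
    """Shared parent class that every provider implements (the unified abstract class)."""

    @abstractmethod
    def chat_completions_create(self, model: str, messages: list[dict], **kwargs):
        ...


class ExampleProvider(Provider):
    """A new provider only has to translate the unified call into its own SDK or HTTP API."""

    def __init__(self, api_key: str):
        self.api_key = api_key

    def chat_completions_create(self, model: str, messages: list[dict], **kwargs):
        # Call the provider's own SDK or endpoint here, then map the result
        # back to the common chat-completion response shape.
        raise NotImplementedError
```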
That said, it is currently missing some basic features like streaming responses for all providers and function calling. There are also no async functions for completion (tbh most of those features would be easy to add given the current state of the codebase).
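For example, until native async support lands, an async completion could be layered on top by pushing the blocking call into a worker thread. This is my own sketch, not part of aisuite:

```python
import asyncio

import aisuite as ai

client = ai.Client()


async def acompletion(model: str, messages: list[dict], **kwargs):
    # asyncio.to_thread keeps the event loop free while the blocking HTTP call runs.
    return await asyncio.to_thread(
        client.chat.completions.create, model=model, messages=messages, **kwargs
    )
```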
litellm is not lite at all
It is doing a few more things than just giving a unified API for completion over multiple LLM providers (not a lot though):
- deployable proxy HTTP API (I don't need this)
- a solution for caching (that could be interesting, but I don't need this, I'd rather do it myself if I need it)
- support function/tool calling when available
- routing to the right LLM (I am not interested, and if I need it I will probably implement it myself with my own logic)
As someone who likes well-coded projects, there are a ton of red flags in their codebase 🚩
1. Despite the many thousands of lines of code, **there is no clear coherent structure**: LLM providers don't have a unified parent abstract class (this is really programming 101 tbh, not sure how they missed that), e.g. see the [fireworks AI implementation](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/fireworks_ai/chat/fireworks_ai_transformation.py) vs [groq](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/groq/chat/transformation.py) vs [`azure_ai.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/azure_ai/chat/transformation.py) vs [`AzureOpenAI.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/AzureOpenAI/chat/gpt_transformation.py) (note the complete mess even in the file nomenclature... that is full-on software gore). As a bonus, they redefine a [`LiteLLMBase` class](https://github.com/search?q=repo%3ABerriAI%2Flitellm%20LiteLLMBase&type=code) many times in different places, with many different use cases; it's really scary.
2. The list of mandatory dependencies for `litellm` is **way too long** and includes many dependencies that are not required when you just want a python API to query an LLM provider for completion (which is the main feature advertised for their tool): https://github.com/BerriAI/litellm/blob/main/pyproject.toml. It seems like the maintainers don't really understand optional dependencies, and this scares me (see the lazy-import sketch just after this list).
   - `click` is only necessary if you want to use litellm as a CLI; it should be optional.
   - `jinja2` is not necessary (it's for templating strings). And if you search for why it is used, you get really confused: it's used by a 3000-line [`factory.py`](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/prompt_templates/factory.py#L10) file in a `prompt_templates` folder that is itself in a `llms` folder which otherwise just contains folders for LLM providers; not sure why it's there. And there is nothing about prompt templates on their docs website.
   - They require 2 different dependencies for sending HTTP requests: `aiohttp` and `requests`. Just pick one and use it. Look, in the [same file](https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/llms/vertex_ai_and_google_ai_studio/vertex_ai_non_gemini.py#L11C8-L11C16) they import `requests` and `httpx` only to never use `requests` and only use `httpx`... which is not even declared as a dependency of the package (I guess it comes in transitively through one of the dependencies, but it should be explicitly added to the dependencies list if it is explicitly imported).
3. Another red flag: there is a [symlink to `requirements.txt` in the main package folder](https://github.com/BerriAI/litellm/blob/main/litellm/requirements.txt). They probably have a reason, but it is most probably not a good one, and it should be dealt with differently than creating a symlink there.
4. Configuration for **linting tools is a complete mess**: they use `flake8` and `isort` in a project created in 2023... No python dev does this anymore, `ruff` is the new standard. Funnily, there is also a `ruff.toml` file, but no trace of `ruff` being used. They should just use `ruff` for everything.
5. Linting is not even properly done, as you can see from the many unused imports all over the codebase (e.g. https://github.com/BerriAI/litellm/blob/13e8e94c3f0459876a34d6c4d2dc89fff05f5c9a/litellm/types/llms/ollama.py#L4).
6. `__init__.py` files are a mess: sometimes there, sometimes not.

There is way too much poorly structured code just for a simple wrapper around 1 function (completion) implemented for various API providers.
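(Referenced from point 2 above.) For what it's worth, keeping provider SDKs optional is mostly a packaging-plus-lazy-import pattern. A rough sketch of the idea follows; the module mapping and extras names are illustrative assumptions, not either library's actual code:

```python
import importlib

# Provider name -> SDK module that should only be needed when that provider is used.
# This mapping is illustrative.
_PROVIDER_MODULES = {
    "openai": "openai",
    "mistral": "mistralai",
}


def load_provider_sdk(provider: str):
    """Import a provider SDK lazily so it can stay an optional 'extra' in pyproject.toml."""
    try:
        return importlib.import_module(_PROVIDER_MODULES[provider])
    except ImportError as err:
        raise ImportError(
            f"The '{provider}' provider needs its optional dependency, "
            f"e.g. pip install 'aisuite[{provider}]'"
        ) from err
```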
That's not how good maintainable packages are built. Don't trust the number of stars during the AI trend; they don't always reflect code quality ("oh, the features they are proposing in their readme look cool, I'll star it"). A lot of people are trying to quickly capitalize on the trend without having the competence.
What are the other options?
The llm lib from simonw is probably the closest lib doing a similar job while keeping it simple. It was built with the CLI in mind but also provides a python API: https://llm.datasette.io/en/stable/python-api.html. It supports streaming and has a plugin system to add new providers. I wish the click dependency were behind a CLI flag though.
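For reference, the llm python API linked above looks roughly like this (a sketch based on its docs; the model id depends on which plugins and API keys you have installed):

```python
import llm

model = llm.get_model("gpt-4o-mini")

# One-shot completion.
response = model.prompt("In one sentence, why is a unified LLM client API useful?")
print(response.text())

# Responses can also be iterated to stream chunks as they arrive.
for chunk in model.prompt("Tell me a short joke."):
    print(chunk, end="")
```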
LlamaIndex also lets you do completions for various providers (and they also provide a lot more for RAG). But it also comes with a ton of dependencies, and I'm not sure how good the codebase is.
LangChain seems to be falling out of fashion due to pushing their LCEL language, which is not pythonic at all, confusing, and really limited in capabilities; now they are also pushing LangGraph, which is very confusing. Their azure connector is not even up to date anymore and cannot be used for the latest azure pay-as-you-go deployments.
imo python devs just want to write python code with functions, if conditions and loops. No need for a complete closed framework; we just need wrappers for LLM APIs and vectorstore providers, that's it. The rest we can do ourselves.
Also, all those companies (LlamaIndex, LangChain, litellm) are trying to profit from the AI trend by providing basic wrappers over simple APIs with some extra sugar in the AWS cloud. They don't provide much value to be honest, and they are prone to bringing in breaking changes and walled gardens if that's good for their business. So I would rather trust a community-led effort for such a simple wrapper system. You don't need a whole company to maintain this kind of tool; devs will send PRs by themselves, and all you need is a good structure and a good test suite to quickly and safely integrate contributions.
Conclusion
Good developers value simplicity over thousands of lines of code and dependency hell.
After reviewing the code of litellm I know for sure I would never touch this library with a ten-meter-long keyboard. It just screams "poorly coded package that will be a pain to maintain".
Imo the main question with aisuite is: will it be properly maintained over time? PRs already seem to be accumulating a bit since last week, but there are a lot of PRs that look interesting and already address some of the missing features.
But even if it is not maintained, the API is so simple that you can easily fork it and maintain just the providers you care about, so you have a unified API in all your LLM projects that you control.
Thank you for the thoughtful analysis.
I agree litellm is no longer lite. But I imagine it was at some point. Also I've already seen in this repository that there are plans to start expanding the codebase to cover more features. I imagine this is exactly how litellm started out.
"oh we will keep it lite, clean and simple!"
Just like this repo is now.
Your other comments around industry standards / what people should do are your opinion / possibly rules you dictate to your subordinates.
This, my friend, is the wilds of github... people from all nations, socio-economic statuses, education levels, (dis)abilities and a thousand other qualifiers contribute here. If the repo owners want to be inclusive of the fact that not everyone is a CS major with 10 years in industry (lucky bastards), then great; if not... maybe they should host it on their own git.
A lot of these litellm points are true (especially since I had to contribute a small fix there and dive into their codebase). It feels like they are moving fast trying to add new features while forgoing code quality and maintainability.
But it would just be good to hear from the authors what the intent and direction of this repo is, as it would give some insight into where to contribute.
@andrewyng Loved your classes! And I would love to hear your thoughts or your team's thoughts on this thread! What do you think? How will aisuite differentiate itself from the others?
Hi @vemonet , thank you for the feedback on litellm. Here’s what we’ve done / are doing about this. Is this what you wanted?
1. ‘LLM providers don't have a unified parent abstract class’
All chat providers (exc. Bedrock) now inherit from a parent abstract class
For reference, here's the refactor on:
2. 'no clear coherent structure for LLM providers'
We refactored llms/ to make this simpler - https://github.com/BerriAI/litellm/tree/main/litellm/llms
Standard naming convention: all folders in llms/ are now named after their litellm provider name (enforced test).
Common enforced structure: https://github.com/BerriAI/litellm/tree/30e147a315d29ba3efe61a179e80409a77754a42/litellm/llms/watsonx
- each mapped endpoint is in a separate folder
  - `chat/` => endpoint accepts chat completion messages - e.g. watsonx's `/text/chat`
  - `completion/` => endpoint requires translation to a single prompt str - e.g. watsonx's `/text/generation`
- each endpoint folder has a separate handler.py (http calling) and transformation.py (core LLM translation logic)
3. ‘As a bonus they are redefining a LiteLLMBase class many times at different place’
Removed redefinition: defined in just 1 place
Clarity on usage: Renamed it to LiteLLMPydanticObjectBase to make it clear that this is the base pydantic object for the repo
4. ‘Another red flag: there is a symlinks to requirements.txt in the main package folder’
Removed the symlinks to requirements.txt
5. ‘Configuration for linting tools is a complete mess’
LiteLLM follows the Google Python Style Guide.
We run:
- Ruff for formatting and linting checks
- Mypy + Pyright for typing 1, 2
- Black for formatting
- isort for import sorting
If there's any way we can improve further here, let me know.
[PLANNED FOR NEXT WEEK]
1. ‘The list of mandatory dependencies for litellm is way too long’
- Single HTTP library: post-refactor, we can now remove 'aiohttp' and 'requests' and just use httpx for our core calling logic (same as the openai sdk). This should be done by next week. (See the httpx sketch just after this list.)
- Removing proxy deps: 'click' can also be moved into the separate `litellm[proxy]` set of dependencies.
- Clarity on 'jinja2': This is required for prompt templating (as you pointed out). It is used for several llm providers (e.g. huggingface) which expose endpoints that only accept a `prompt` field. We don't plan on removing this today, as it's used in several places. Any suggestions for reducing our need for this / being able to remove the requirement are welcome.
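On the single-HTTP-library point referenced above: httpx covers both the sync and async paths that requests + aiohttp were presumably covering separately. A minimal illustration (the URL and payload are placeholders, not litellm code):

```python
import asyncio

import httpx

URL = "https://example.com/v1/chat/completions"  # placeholder endpoint
PAYLOAD = {"model": "some-model", "messages": [{"role": "user", "content": "hi"}]}


def sync_call():
    # One library handles the blocking case...
    with httpx.Client(timeout=30) as client:
        return client.post(URL, json=PAYLOAD).json()


async def async_call():
    # ...and the async case, with the same request/response API.
    async with httpx.AsyncClient(timeout=30) as client:
        return (await client.post(URL, json=PAYLOAD)).json()
```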
2. Migrate Bedrock to Base Config
This would move all Chat LLM providers to the base config.
Contributions to improve LiteLLM’s code quality/linting/etc. are welcome!
-litellm maintainer
@TashaSkyUp - Hi, one of our aims when starting development of aisuite was to provide a simple way to use multiple providers. There are a few planned features, and we are still deciding on a roadmap based on feedback and traction. If you have a feature request, please open an issue for it.
Please wait for the next set of features to be announced/added. Differentiators will become evident over the course of the next few releases. Thanks for using aisuite and giving feedback. We are committed to maintaining and enhancing this library long term.
LiteLLM is the reason why I'm looking at AIsuite. I just need a simple client, not a whole proxy or gateway server like LiteLLM.
Does AIsuite offer other features like caching, routing, a proxy server, etc., which litellm provides? That's the main reason for using Litellm, right?
Thanks for the feedback @dinhanhx. We wrote litellm to be a lightweight library for llm calling, since langchain felt bloated at the time, so this is good to know.
Will work on moving proxy code outside default pip package. Should be live by end of week hopefully.
@TashaSkyUp - Thanks for opening this thread. If the issue is resolved and if your question has been answered, can I close it?
Yes I think the thread has served its purpose. Thank you everyone.