[Feature Request]: Remove the dependency on aichat
While relying on aichat has been good for bootstrapping, it doesn't make much sense to continue depending on it going forward. The 'API' being used (command-line invocations of aichat) is too limiting, and isn't how aichat is meant to be used.
Perhaps aichat could be extended (or parts of it extracted) to export its functionality as a library crate? It already supports a wide variety of different LLM interfaces and it seems like a waste to reimplement that separately.
@arcuru Thanks for this project and leading the way
Have you considered interfacing with an OpenAI-API-compatible backend such as llama-cpp-python? https://github.com/abetlen/llama-cpp-python
It supports multimodal input as well as function calling.
There are other self-hosted backends that are also OpenAI-API compatible. Picking a de facto standard API opens up the possibility of interfacing with other backends in the future:
https://github.com/matatonic/openedai-vision https://github.com/matatonic/openedai-speech
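The appeal of the OpenAI-compatible approach is that every such backend accepts the same chat-completions request shape, so the client code doesn't change between servers. A minimal sketch of that payload (the model name is illustrative; each backend exposes its own):

```python
import json

# Minimal chat-completions payload accepted by any OpenAI-compatible
# backend (llama-cpp-python's server included). Only the model name
# varies per backend; "local-model" here is a placeholder.
payload = {
    "model": "local-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

body = json.dumps(payload)
print(body)
```

Any server implementing the de facto standard will understand this body, which is what makes backends interchangeable.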
I just pushed an update that allows adding any OpenAI-compatible backend. You enter the `api_key` and the `api_base` and it just works.
aichat is now conceptually just another backend, and will work alongside any OpenAI-compatible backend.
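To illustrate what the `api_key`/`api_base` pair buys you (a sketch under my own assumptions, not chaz's actual code): switching backends is just a different base URL and bearer token on an otherwise identical request.

```python
import urllib.request


def chat_request(api_base: str, api_key: str, body: bytes) -> urllib.request.Request:
    """Build a chat-completions request for an OpenAI-compatible backend.

    Only api_base and api_key differ between backends; the path and
    headers are the same everywhere.
    """
    return urllib.request.Request(
        url=api_base.rstrip("/") + "/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Hypothetical endpoints -- the URLs and key are illustrative placeholders.
hosted_req = chat_request("https://api.openai.com/v1", "sk-example", b"{}")
local_req = chat_request("http://localhost:8000/v1", "none", b"{}")
print(hosted_req.full_url)
print(local_req.full_url)
```

Same request builder, two different backends; that symmetry is why aichat can sit alongside them as just another entry.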
Thanks for your input!
The commit adding this is here: https://github.com/arcuru/chaz/commit/3c9934378755283d6659281e27286c1af414d732