Awwtifishal
This still happens; it should be reopened.
I was testing the RPC feature just now, and I was considering making a tensor cache feature if it wasn't already in the works. I was thinking about the same...
Thank you. I would have put some examples in a more visible location, such as the release notes or the readme. I eventually found examples [here](https://github.com/GibsonAI/memori/blob/main/docs/core-concepts/overview.md#provider-configuration) and [here](https://github.com/GibsonAI/memori/blob/c8bb3445200ca96afd8aa04502a480b22456676d/docs/contributing.md#provider-testing).
It now has the ability to connect to any OpenAI-compatible endpoint (which is the lingua franca of LLMs, supported by basically everybody). The documentation is a little bit hidden, though...
Looking at [this commit](https://github.com/NevaMind-AI/memU/commit/044d9b2b96d7cfb39c052b99a9bdc93d25e27b6d), I'm not sure you understood my message. All we need is [that](https://github.com/NevaMind-AI/memU/commit/044d9b2b96d7cfb39c052b99a9bdc93d25e27b6d), but for the embeddings instead of the LLM. Just an env variable for...
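To illustrate what I mean, here's a minimal sketch using the standard OpenAI Python SDK. The env variable names (`EMBEDDING_BASE_URL`, `EMBEDDING_API_KEY`, `EMBEDDING_MODEL`) are hypothetical placeholders, not memU's actual settings:

```python
import os
from openai import OpenAI

# Hypothetical env variables; the actual setting names in memU may differ.
base_url = os.getenv("EMBEDDING_BASE_URL", "http://localhost:8080/v1")
api_key = os.getenv("EMBEDDING_API_KEY", "none")
model = os.getenv("EMBEDDING_MODEL", "nomic-embed-text")

# Any OpenAI-compatible server works, as long as it exposes /v1/embeddings.
client = OpenAI(base_url=base_url, api_key=api_key)

response = client.embeddings.create(model=model, input=["some text to embed"])
print(len(response.data[0].embedding))
```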
I managed to make a proxy that can serve both the LLM and the embeddings from the same endpoint (OpenAI-compatible), and it seems that the server tests work. I'm not...
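In case it's useful, here's roughly the shape of such a proxy. This is only a sketch, assuming FastAPI and httpx; the upstream URLs (`LLM_UPSTREAM`, `EMBEDDINGS_UPSTREAM`) are placeholders, not the ones I actually used, and streaming responses are not handled:

```python
import os
import httpx
from fastapi import FastAPI, Request, Response

# Placeholder upstreams: chat completions go to one server, embeddings to another.
LLM_UPSTREAM = os.getenv("LLM_UPSTREAM", "http://localhost:8080")
EMBEDDINGS_UPSTREAM = os.getenv("EMBEDDINGS_UPSTREAM", "http://localhost:8081")

app = FastAPI()


async def forward(upstream: str, path: str, request: Request) -> Response:
    # Pass the request body and auth header through unchanged (no streaming).
    async with httpx.AsyncClient(timeout=300) as client:
        upstream_response = await client.post(
            f"{upstream}{path}",
            content=await request.body(),
            headers={
                "Content-Type": "application/json",
                "Authorization": request.headers.get("Authorization", ""),
            },
        )
    return Response(
        content=upstream_response.content,
        status_code=upstream_response.status_code,
        media_type="application/json",
    )


@app.post("/v1/chat/completions")
async def chat_completions(request: Request) -> Response:
    return await forward(LLM_UPSTREAM, "/v1/chat/completions", request)


@app.post("/v1/embeddings")
async def embeddings(request: Request) -> Response:
    return await forward(EMBEDDINGS_UPSTREAM, "/v1/embeddings", request)
```

Something like `uvicorn proxy:app --port 8000` then gives you a single OpenAI-compatible endpoint that fans out to both backends.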
Is this feature still in the works?