SimFG

Results: 261 comments of SimFG

We have published the Docker image, which means you can use GPTCache from any language. Details: https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md#use-gptcache-server. So I will close the issue.
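For callers in other languages, a thin HTTP client over the server is enough. The sketch below is illustrative only: the `/put` and `/get` endpoint paths and the JSON field names are assumptions, not the server's documented API; check docs/usage.md linked above for the real routes.

```python
# Hypothetical minimal client for a GPTCache server.
# NOTE: the endpoint paths and payload fields are assumptions for
# illustration; consult docs/usage.md for the actual server API.
import json
import urllib.request


class GPTCacheClient:
    def __init__(self, base_url="http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _post(self, path, payload):
        # Send a JSON POST request and decode the JSON response.
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())

    def put(self, prompt, answer):
        # Store an answer for a prompt in the cache.
        return self._post("/put", {"prompt": prompt, "answer": answer})

    def get(self, prompt):
        # Look up a cached answer for a (similar) prompt.
        return self._post("/get", {"prompt": prompt})
```

The same pattern applies in any language with an HTTP client, which is the point of shipping the server as a Docker image.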

Qdrant is on the list; please stay tuned.

@parthvnp Thanks for your contribution! The next GPTCache version will support the Qdrant vector database. @DumoeDss @Torhamilton @dagthomas

@Adam-Gibbs Thanks for your attention. Contributions are also welcome if possible. Of course, we will also include this in our plan.

The latest version supports MongoDB as the cache store.

Maybe you can try [GPTCache](https://github.com/zilliztech/GPTCache). It provides similarity search, custom embedding functions, storage for cached results, and a customizable similarity-evaluation function for cached results, which can control cache...
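To make that description concrete, here is a toy semantic cache in plain Python. It is not GPTCache code; it only illustrates the idea of similarity search with a pluggable evaluation function deciding whether a stored question counts as a hit.

```python
# Toy semantic cache illustrating the idea behind GPTCache (not its real
# implementation): a hit is returned when a stored question is similar
# enough to the query under a pluggable evaluation function.
from difflib import SequenceMatcher


class ToySimilarCache:
    def __init__(self, evaluation=None, threshold=0.8):
        # evaluation: callable(q1, q2) -> similarity score in [0, 1].
        # Default is a simple string-similarity ratio; a real system
        # would compare embedding vectors instead.
        self.evaluation = evaluation or (
            lambda a, b: SequenceMatcher(None, a, b).ratio()
        )
        self.threshold = threshold
        self.store = []  # list of (question, answer) pairs

    def put(self, question, answer):
        self.store.append((question, answer))

    def get(self, question):
        # Return the answer of the most similar stored question,
        # or None if nothing clears the threshold.
        best, best_score = None, 0.0
        for q, a in self.store:
            score = self.evaluation(question, q)
            if score > best_score:
                best, best_score = a, score
        return best if best_score >= self.threshold else None
```

Swapping in a different `evaluation` callable is the analogue of GPTCache's customizable similarity evaluation.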

/lgtm /approve

You can try to use the GPTCache `api`, which provides `get` and `put` methods. A simple example:

```python
from gptcache.adapter.api import put, get, init_similar_cache

init_similar_cache()
put(question, answer)
get(similar_question)
```

Of...

Now you can try to use the context processor to handle long prompts, like:

```python
from gptcache.processor.context.summarization_context import SummarizationContextProcess
from gptcache import cache

context_process = SummarizationContextProcess()
cache.init(
    pre_embedding_func=context_process.pre_process,
    ...
```
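For intuition, a context processor is just a function that reduces a long conversation to a compact key before it is embedded, so similar long prompts can still hit the cache. The toy function below is independent of GPTCache and much cruder than the summarization-based processor above; it simply keeps the most recent turns within a character budget.

```python
# Toy stand-in for a context processor (NOT GPTCache's implementation):
# keep only the most recent message contents until a character budget is
# exhausted, so long conversations map to a short, cache-friendly key.
def naive_context_preprocess(messages, max_chars=200):
    parts = []
    total = 0
    # Walk the conversation from newest to oldest.
    for msg in reversed(messages):
        content = msg["content"]
        if total + len(content) > max_chars:
            break  # budget exhausted; drop the older context
        parts.append(content)
        total += len(content)
    # Restore chronological order for the compact key.
    return " ".join(reversed(parts))
```

A summarizing processor replaces this truncation with an actual summary of the dropped context, which preserves far more meaning.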

The problem seems to be an ONNX version issue; I will check it out.