
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
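For orientation, here is a minimal sketch of the core usage that most of the issues below build on, closely following the project README quick start. It assumes the pre-1.0 `openai` client that the bundled adapter wraps and that `OPENAI_API_KEY` is set in the environment.

```python
from gptcache import cache
from gptcache.adapter import openai  # drop-in replacement for the openai client

# Exact-match cache by default; pass an embedding function and data manager
# to cache.init() for semantic (similarity) caching.
cache.init()
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "what is a semantic cache?"}],
)
print(response["choices"][0]["message"]["content"])
```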

107 GPTCache issues, sorted by most recently updated

### Current Behavior from langchain.embeddings import HuggingFaceBgeEmbeddings `model_name = "BAAI/bge-small-en" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': False} hf = HuggingFaceBgeEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) cache_base = CacheBase('sqlite') vector_base =...
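The snippet above wires a LangChain embedding into GPTCache's `CacheBase`/`VectorBase`. A hedged sketch of how that setup usually fits together is below; the 384 dimension is the output size of `BAAI/bge-small-en`, and the wrapper function signature is an assumption about how `cache.init` invokes `embedding_func`.

```python
import numpy as np
from langchain.embeddings import HuggingFaceBgeEmbeddings

from gptcache import cache
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

hf = HuggingFaceBgeEmbeddings(
    model_name="BAAI/bge-small-en",
    model_kwargs={"device": "cpu"},
    encode_kwargs={"normalize_embeddings": False},
)

def embed(text, **_):
    # GPTCache passes extra keyword arguments to the embedding function; ignore them here.
    return np.array(hf.embed_query(text), dtype="float32")

data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("faiss", dimension=384),  # must match the embedding size
)

cache.init(
    embedding_func=embed,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```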

### Documentation Link https://github.com/zilliztech/GPTCache/blob/main/examples/README.md#How-to-use-GPTCache-server ### Describe the problem My database service is running in another docker container. I tried to create a separate GPTCache server but couldn't find the documentation....
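The examples README linked above documents a standalone server (startable via the `gptcache_server` entry point or the published Docker image; the exact flags may have changed, so treat them as assumptions). Once the server is reachable, a client in another container only needs HTTP. A minimal sketch, assuming the server listens on port 8000 and that `/put`/`/get` accept a JSON body with `prompt`/`answer` fields:

```python
import requests

GPTCACHE_URL = "http://gptcache:8000"  # assumed service name/port inside the compose network

# Store an answer for a prompt.
requests.post(
    f"{GPTCACHE_URL}/put",
    json={"prompt": "what is GPTCache?", "answer": "a semantic cache for LLM responses"},
    timeout=10,
)

# Look the same prompt up again and inspect whatever the server returns.
resp = requests.post(f"{GPTCACHE_URL}/get", json={"prompt": "what is GPTCache?"}, timeout=10)
print(resp.json())
```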

### Is your feature request related to a problem? Please describe. Currently, when GPTCache evaluates similarity, it retrieves cached answers that meet the similarity threshold. However, in many generation scenarios,...

good first issue
hacktoberfest
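For context on the request above: the threshold that gates which cached answers are returned is set through `Config(similarity_threshold=...)` at cache initialisation. A hedged sketch (the notion that 0.8 is the default is from memory of the docs):

```python
from gptcache import cache, Config
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

onnx = Onnx()  # GPTCache's bundled ONNX embedding model

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=get_data_manager(
        CacheBase("sqlite"), VectorBase("faiss", dimension=onnx.dimension)
    ),
    similarity_evaluation=SearchDistanceEvaluation(),
    # Raise the threshold to return only closer matches; the request above is
    # about going beyond this single scalar gate.
    config=Config(similarity_threshold=0.9),
)
```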

### Is your feature request related to a problem? Please describe. Currently, GPTCache achieves an accuracy of approximately 80% in optimal conditions. However, during daily usage, it often returns unsatisfactory...

good first issue
hacktoberfest
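One accuracy lever that already exists is swapping the similarity evaluation stage for a model-based re-ranker rather than raw vector distance. A sketch, with the caveat that the `OnnxModelEvaluation` import path is from memory and should be checked against the current `gptcache.similarity_evaluation` package:

```python
from gptcache import cache
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.onnx import OnnxModelEvaluation  # assumption: module path

onnx = Onnx()

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=get_data_manager(
        CacheBase("sqlite"), VectorBase("faiss", dimension=onnx.dimension)
    ),
    # Score candidate hits with a small cross-encoder style model instead of
    # trusting the vector search distance alone.
    similarity_evaluation=OnnxModelEvaluation(),
)
```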

### Is your feature request related to a problem? Please describe. Currently, the GPTCache service uses a cache object to manage cached data. If GPTCache serves multiple users, user A's...

good first issue
hacktoberfest
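Until per-user isolation is built in, one workaround is to keep a separate `Cache` object per user and pass it to the adapter calls. A sketch: `cache_for` is a hypothetical helper, and it assumes `gptcache.adapter.api.put`/`get` honour the `cache_obj` keyword the way the other adapters do.

```python
from gptcache import Cache
from gptcache.adapter.api import get, put
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt

_user_caches: dict[str, Cache] = {}

def cache_for(user_id: str) -> Cache:
    """Hypothetical helper: one exact-match cache per user, each in its own directory."""
    if user_id not in _user_caches:
        user_cache = Cache()
        user_cache.init(
            pre_embedding_func=get_prompt,
            data_manager=manager_factory(manager="map", data_dir=f"./gptcache_{user_id}"),
        )
        _user_caches[user_id] = user_cache
    return _user_caches[user_id]

put("what is GPTCache?", "a semantic cache for LLMs", cache_obj=cache_for("user_a"))
print(get("what is GPTCache?", cache_obj=cache_for("user_a")))  # hit for user_a
print(get("what is GPTCache?", cache_obj=cache_for("user_b")))  # miss for user_b
```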

### Is your feature request related to a problem? Please describe. When a cache miss occurs (that is, the question does not match the cached answer), cache correction is supported....

good first issue
hacktoberfest
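Today the closest thing to cache correction is doing it by hand with the `api` adapter: on a miss (or a bad hit), write the right answer back with `put` so the next lookup succeeds. A sketch assuming an exact-match cache; `answer_from_llm` is a hypothetical stand-in for however the application produces the real answer.

```python
from gptcache import cache
from gptcache.adapter.api import get, put
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt

cache.init(
    pre_embedding_func=get_prompt,
    data_manager=manager_factory(manager="map", data_dir="./correction_cache"),
)

question = "How do I invalidate a single cache entry?"

answer = get(question)
if answer is None:
    answer = answer_from_llm(question)  # hypothetical: real LLM call or human review
    put(question, answer)               # correct the cache so the next lookup hits
```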

### Current Behavior I am getting this error when using AzureChatOpenAI from LangChain. I tried implementing the GPTCache similarity cache mentioned in the LangChain page (https://python.langchain.com/docs/integrations/llms/llm_caching), but am getting the below error....
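For reference, the pattern on the linked LangChain page initialises one GPTCache instance per `llm_string`, which is also how different AzureChatOpenAI deployments get separated. A sketch following that page; newer LangChain versions use `langchain.globals.set_llm_cache` instead of assigning `langchain.llm_cache` directly.

```python
import hashlib

import langchain
from langchain.cache import GPTCache

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt

def init_gptcache(cache_obj: Cache, llm_string: str):
    # llm_string differs per model/deployment, so each one gets its own cache directory.
    hashed = hashlib.sha256(llm_string.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed}"),
    )

langchain.llm_cache = GPTCache(init_gptcache)
```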

### What would you like to be added? I am using GPTCache Server and primarily use `/put` and `/get`. In my use case, there are multiple users utilizing this...
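Until the server supports per-user scoping natively, a thin client-side wrapper can namespace prompts by user id before they reach `/put` and `/get`. A sketch; the JSON field names and the prompt-prefix trick are assumptions and workarounds, not server features.

```python
import requests

SERVER = "http://localhost:8000"  # assumed GPTCache server address

def _scoped(user_id: str, prompt: str) -> str:
    # Workaround: fold the user id into the prompt so users never share entries.
    return f"[user:{user_id}] {prompt}"

def put_for_user(user_id: str, prompt: str, answer: str) -> None:
    requests.post(
        f"{SERVER}/put",
        json={"prompt": _scoped(user_id, prompt), "answer": answer},
        timeout=10,
    )

def get_for_user(user_id: str, prompt: str):
    resp = requests.post(f"{SERVER}/get", json={"prompt": _scoped(user_id, prompt)}, timeout=10)
    return resp.json()

put_for_user("alice", "what is GPTCache?", "a semantic cache for LLM responses")
print(get_for_user("alice", "what is GPTCache?"))  # expected hit
print(get_for_user("bob", "what is GPTCache?"))    # different namespace, expected miss
```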

### Current Behavior When I tried to use gptcache as the langchain cache, I found the below error message: ``` File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py", line 361, in acall raise e File "/Users/xxx/Library/Python/3.9/lib/python/site-packages/langchain/chains/base.py",...

### Is your feature request related to a problem? Please describe. I'm using langchain and started a GPTCache integration, but after a few attempts I did manage to configure everything...