
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.

Results: 107 GPTCache issues

### Is your feature request related to a problem? Please describe. We are creating multiple GPT Assistants to support our customer service operations. Currently, there is only support for caching...

Hi! This is meant to correct problems with embedding in the latest versions of the OpenAI Python Library and GPTCache. The latest versions of the OpenAI Python API do not...

Labels: needs-dco, size/L
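
The incompatibility this PR addresses stems from a response-shape change: openai<1.0 returned plain dicts, while openai>=1.0 returns typed objects. The helper below is an illustrative sketch of handling both shapes, not GPTCache's actual fix:

```python
from types import SimpleNamespace

def extract_embedding(response):
    """Return the first embedding vector from either OpenAI response shape.

    openai<1.0 returned plain dicts (response["data"][0]["embedding"]);
    openai>=1.0 returns typed objects (response.data[0].embedding).
    """
    if isinstance(response, dict):            # legacy (<1.0) dict shape
        return response["data"][0]["embedding"]
    return response.data[0].embedding         # new (>=1.0) object shape

# Stand-ins for the two response shapes (no network call needed):
legacy = {"data": [{"embedding": [0.1, 0.2]}]}
modern = SimpleNamespace(data=[SimpleNamespace(embedding=[0.1, 0.2])])
```

A shim like this lets caching code accept responses from either library version during a migration.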

### Is your feature request related to a problem? Please describe. We are currently using Azure OpenAI to make API calls; it seems that GPTCache only supports using the standard OpenAI...

### Current Behavior Seeing the following error when trying to initialize GPTCache with redis as the base cache and weaviate as the vector store: `2023-10-30 16:29:24,502 - 139826724846464 - weaviate.py-weaviate:67...`

### Is your feature request related to a problem? Please describe. I'm getting overlapping loggers between my service logger and your logger. ### Describe the solution you'd like. A quick...
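
A common way to stop a library's log records from interleaving with a service's own is to cut propagation on the library's named logger. A minimal sketch, assuming GPTCache registers its logger under the name "gptcache" (an assumption, not confirmed by this issue):

```python
import logging

# Assumption: the library's logger is registered under the name "gptcache".
lib_logger = logging.getLogger("gptcache")
lib_logger.propagate = False           # stop records bubbling up to the root/service handlers
lib_logger.setLevel(logging.WARNING)   # drop the library's INFO/DEBUG chatter
```

Setting `propagate = False` keeps the service's root handlers from emitting the library's records a second time, which is what "overlapping loggers" usually looks like.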

### Current Behavior I am trying to integrate GPTCache with llama index, but the LLM predictor is not accepting a cache argument; to fix this I have created a cacheLLMPredictor class...
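
The workaround the reporter describes, a predictor wrapper that consults a cache before delegating, can be sketched as follows. All names here (`CacheLLMPredictor`, `.predict`, the dict-like cache) are hypothetical illustrations, not llama_index's or GPTCache's actual API:

```python
# Hypothetical sketch: wrap a predictor so repeated prompts skip the LLM call.
class CacheLLMPredictor:
    def __init__(self, predictor, cache=None):
        self._predictor = predictor
        self._cache = {} if cache is None else cache  # any dict-like cache

    def predict(self, prompt, **kwargs):
        if prompt in self._cache:                  # cache hit: no LLM call
            return self._cache[prompt]
        answer = self._predictor.predict(prompt, **kwargs)
        self._cache[prompt] = answer
        return answer

class CountingPredictor:
    """Stand-in for a real LLM predictor; counts how often it is invoked."""
    def __init__(self):
        self.calls = 0

    def predict(self, prompt, **kwargs):
        self.calls += 1
        return f"answer to {prompt}"

inner = CountingPredictor()
cached = CacheLLMPredictor(inner)
first = cached.predict("hello")
second = cached.predict("hello")   # served from the cache; inner not called again
```

The delegation pattern keeps the wrapper drop-in compatible with any object exposing the same `predict` method.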

### Current Behavior Hi, love GPTCache, but it keeps logging all the calls in the report table. I do not want that, and I do not know how to disable this...

`weaviate-client` won't be found by `importlib.util.find_spec`; the dependency-installation check should search for `weaviate` instead. Signed-off-by: Amrit Singh

Labels: dco-passed, size/XS
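
The bug this PR fixes comes from the distinction between a PyPI distribution name and an import name: `find_spec` resolves the latter, so probing for the distribution name `weaviate-client` always fails even when the package is installed. A small illustration:

```python
from importlib.util import find_spec

def dependency_installed(import_name: str) -> bool:
    """True if a module can be imported under *import_name*.

    find_spec resolves *import* names, not PyPI distribution names, so the
    distribution installed as "weaviate-client" must be probed as "weaviate".
    """
    return find_spec(import_name) is not None
```

For example, `dependency_installed("json")` is true (stdlib module), while `dependency_installed("weaviate-client")` is false regardless of installation, because no module is importable under a hyphenated name.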

### Current Behavior When trying to deploy the code on dev, the statement `data_manager = get_data_manager(CacheBase("sqlite"), VectorBase("faiss", dimension=onnx.dimension))` is giving the error `sqlite3.OperationalError: unable to open database file`...
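
`sqlite3.OperationalError: unable to open database file` typically means the database's parent directory is missing or unwritable in the deployment environment. A minimal stdlib-only sketch of avoiding it by creating the directory first (the path below is illustrative, not GPTCache's default location):

```python
import os
import sqlite3
import tempfile

# sqlite3 cannot create a database file in a directory that does not exist;
# ensure the parent directory is present (and writable) before initializing.
db_path = os.path.join(tempfile.gettempdir(), "gptcache_demo", "sqlite.db")  # illustrative path
os.makedirs(os.path.dirname(db_path), exist_ok=True)
conn = sqlite3.connect(db_path)    # succeeds now that the directory exists
conn.close()
```

The same check applies before calling `get_data_manager` with a sqlite `CacheBase`: point it at a path whose directory exists and is writable by the deploying user.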

### Current Behavior `from gptcache.adapter.langchain_models import LangChainChat` fails with:

    Traceback (most recent call last):
      File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 196, in _run_module_as_main
        return _run_code(code, main_globals, None,
      File "/home/ld/miniconda3/envs/llm/lib/python3.10/runpy.py", line 86, in _run_code
        exec(code, run_globals)...