GPTCache
Semantic cache for LLMs. Fully integrated with LangChain and llama_index.
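For orientation before the individual reports below, a minimal sketch of the basic usage pattern in the style of the project README: initialize the cache once, then route OpenAI-style calls through GPTCache's adapter. The model name and prompt here are placeholders.

```python
from gptcache import cache
from gptcache.adapter import openai  # GPTCache's drop-in wrapper around the openai client

cache.init()            # exact-match cache by default; init_similar_cache enables semantic matching
cache.set_openai_key()  # reads OPENAI_API_KEY from the environment

# Repeated (or, with a similarity cache, semantically similar) prompts are
# answered from the cache instead of triggering a new API call.
answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is a semantic cache?"}],
)
```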
Dear GPTCache Team, we are a security research group. We've used GPTCache for a while and are impressed by its design and speed, but as we studied it further, more concerns about...
I am utilizing the GPT Semantic Cache as outlined in the LangChain documentation, combined with the Groq API and the Llama3-70b-8192 model. However, I'm encountering an issue where the semantic...
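A hedged sketch of the LangChain-side hookup this kind of report implies (the Groq model name is taken from the report; the hashing helper, the data_dir naming, and the use of langchain_community are assumptions, not part of GPTCache itself):

```python
import hashlib

from gptcache import Cache
from gptcache.adapter.api import init_similar_cache
from langchain.globals import set_llm_cache
from langchain_community.cache import GPTCache

def init_gptcache(cache_obj: Cache, llm: str):
    # One cache directory per LLM string, following the LangChain docs' GPTCache example.
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{hashed_llm}")

set_llm_cache(GPTCache(init_gptcache))

# Any LangChain chat model now consults the semantic cache first, e.g. (assumed setup):
# from langchain_groq import ChatGroq
# llm = ChatGroq(model="llama3-70b-8192")
# llm.invoke("What is a semantic cache?")
```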
When using the existing `_get_pattern_value` function for extracting the values, I was getting the `IndexError: list index out of range` error. Code to reproduce:

```python
from langchain import PromptTemplate
from...
```
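For context, a minimal sketch of pairing a LangChain PromptTemplate with GPTCache's LangChain wrapper, in the spirit of the GPTCache docs. This is not the reporter's full reproduction; the template text and product value are placeholders.

```python
from langchain import PromptTemplate
from langchain.llms import OpenAI
from gptcache import Cache
from gptcache.processor.pre import get_prompt
from gptcache.adapter.langchain_models import LangChainLLMs

# Cache keyed on the rendered prompt string.
llm_cache = Cache()
llm_cache.init(pre_embedding_func=get_prompt)

template = "What is a good name for a company that makes {product}?"
prompt = PromptTemplate(template=template, input_variables=["product"])

# Wrap a LangChain LLM so its calls are routed through GPTCache.
llm = LangChainLLMs(llm=OpenAI(temperature=0))
answer = llm(prompt=prompt.format(product="colorful socks"), cache_obj=llm_cache)
```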
### Current Behavior

Using the default ONNX model.

**Score function**

```
def get_score(a, b):
    return evaluation.evaluation(
        { 'question': a },
        { 'question': b }
    )
```

**Case 1:**

```
a...
```
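For readers without the rest of the report, a self-contained version of the quoted helper, assuming the default ONNX similarity evaluator that GPTCache ships with; the sample questions are placeholders.

```python
from gptcache.similarity_evaluation import OnnxModelEvaluation

# Default ONNX-based similarity evaluator (downloads the model on first use).
evaluation = OnnxModelEvaluation()

def get_score(a: str, b: str) -> float:
    # Higher score means the two questions are judged more similar.
    return evaluation.evaluation({"question": a}, {"question": b})

print(get_score("How do I install GPTCache?", "What is the way to install gptcache?"))
```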
### What would you like to be added?

In GPTCache/gptcache/adapter/adapter.py, after searching data from the vector db, there is a for loop (line 379) that calls the get_scalar_data and evaluation methods in...
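A hypothetical schematic of the sequential post-search step this request refers to, with illustrative names only (it is not the adapter's actual code): each vector-store hit triggers its own scalar-store fetch and evaluation call.

```python
from typing import Any, List, Tuple

def rank_candidates(data_manager: Any,
                    similarity_evaluation: Any,
                    user_question: str,
                    search_hits: List[Any]) -> List[Tuple[float, Any]]:
    scored = []
    for hit in search_hits:                              # one iteration per vector-db hit
        cache_row = data_manager.get_scalar_data(hit)    # a scalar-store round trip per hit
        if cache_row is None:
            continue
        score = similarity_evaluation.evaluation(        # an evaluation model call per hit
            {"question": user_question},
            {"question": cache_row.question, "answer": cache_row.answers},
        )
        scored.append((score, cache_row))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```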
I wanted to use openai's `text-embedding-3-small` model to generate embeddings; the following is my test code:

````
os.environ['OPENAI_API_KEY'] = 'my_api_key'
os.environ['OPENAI_API_BASE'] = 'my_api_base'
openai_embed_fnc = OpenAI('text-embedding-3-small', api_key=os.environ['OPENAI_API_KEY'])
vector_base = VectorBase('chromadb', dimension=1536,...
````
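A hedged, self-contained variant of that setup, wiring GPTCache's OpenAI embedding wrapper to a chromadb vector store. The sqlite scalar store, the distance-based evaluator, and reuse of the snippet's dimension of 1536 are assumptions.

```python
import os

from gptcache import cache
from gptcache.embedding import OpenAI
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation import SearchDistanceEvaluation

# Embedding function backed by OpenAI's text-embedding-3-small model.
openai_embed_fnc = OpenAI("text-embedding-3-small", api_key=os.environ["OPENAI_API_KEY"])

# sqlite for cached question/answer rows, chromadb for the embedding index.
data_manager = get_data_manager(
    CacheBase("sqlite"),
    VectorBase("chromadb", dimension=1536),
)

cache.init(
    embedding_func=openai_embed_fnc.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=SearchDistanceEvaluation(),
)
```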
### Is your feature request related to a problem? Please describe.

I faced a problem with phidata when using GPTCache. I can't integrate GPTCache with [phidata](https://github.com/phidatahq/phidata).

### Describe the solution...