GPTCache keeps creating a new gptcache cache_obj
System Info
Langchain version: 0.0.170
Platform: Linux x86_64
Python: 3.9
Who can help?
@SimFG
Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
Reproduction
Steps to reproduce the behaviour:
```python
import langchain
from langchain.llms import OpenAI
from langchain.cache import GPTCache
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache

# Avoid multiple caches using the same file, causing different llm model caches to affect each other
def init_gptcache(cache_obj: Cache, llm: str):
    init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm}")

langchain.llm_cache = GPTCache(init_gptcache)

llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm("tell me a joke")

# llm_string is the serialized LLM parameter string langchain uses as part of the cache key
print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string))
# cached: None
```
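For reference, the `llm_string` above has to match the key langchain builds internally when it writes to the cache. A minimal sketch of how that key could be reproduced, assuming the langchain 0.0.x behaviour of deriving it from the LLM's serialized parameters (this relies on internals, not a public API, and may differ across versions):

```python
# Assumption: langchain 0.0.x builds the cache key from the LLM's serialized
# parameters (see langchain.llms.base); this mirrors that behaviour for testing.
params = llm.dict()
params["stop"] = None  # the stop sequences are included in the key at generation time
llm_string = str(sorted(params.items()))

print("cached:", langchain.llm_cache.lookup("tell me a joke", llm_string))
```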
The cache never hits; the manual lookup returns None.
Expected behavior
The GPTCache lookup should return a hit for the repeated prompt instead of None.
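As a sanity check that does not depend on the internal key format, a hit can also be observed by timing two identical calls; the second should come back from GPTCache almost instantly (a sketch, assuming the setup above and no other sources of latency variation):

```python
import time

start = time.time()
llm("tell me a joke")  # first call: goes to the OpenAI API and populates the cache
print("first call:", time.time() - start, "s")

start = time.time()
llm("tell me a joke")  # second call: expected to be served from GPTCache
print("second call:", time.time() - start, "s")
```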