[Bug]: Exception when using LangChain with GPTCache

Open dwillie opened this issue 9 months ago • 6 comments

Current Behavior

When following the LangChain instructions from the docs for a custom LLM, I'm getting:

  File "gptcache/processor/pre.py", line 20, in last_content
    return data.get("messages")[-1]["content"]
           ~~~~~~~~~~~~~~~~~~~~^^^^
TypeError: 'NoneType' object is not subscriptable

I'm trying to follow the section below (from https://gptcache.readthedocs.io/en/latest/usage.html), but importantly I haven't included get_prompt or postnop, as I don't know what those are (I can't see them defined anywhere in the doc); my best guess at get_prompt is sketched after the excerpt.

I have tried using an older version of LangChain and also the dev branch of GPTCache (to avoid the metaclass issue), and I get the same 'NoneType' object is not subscriptable error in both.

Code example excerpt from docs:

template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

llm = OpenAI()

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_cache = Cache()
llm_cache.init(
    pre_embedding_func=get_prompt,
    post_process_messages_func=postnop,
)

cached_llm = LangChainLLMs(llm)
answer = cached_llm(question, cache_obj=llm_cache)
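For what it's worth, my best guess (and it is only a guess) is that get_prompt refers to gptcache.processor.pre.get_prompt, which just pulls the raw prompt string out of the request, so the cache setup would presumably look something like this; I still can't tell what postnop is meant to be, so I've left it out:

from gptcache import Cache
from gptcache.processor.pre import get_prompt  # my assumption; the doc doesn't show this import

llm_cache = Cache()
llm_cache.init(
    pre_embedding_func=get_prompt,
    # post_process_messages_func=postnop,  # omitted: I can't find postnop anywhere
)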

Hopefully I'm just doing something wrong. I've followed the instructions from LangChain to make my own custom LLM (https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm), which appears to be working as expected:

class PromptHashLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "PromptHashLLM"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Return SHA256 hash of the prompt as a string.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {}

Expected Behavior

I'd expect to get the response returned from the LLM and the cache to be populated.

Steps To Reproduce

This script reproduces the error for me, using the dev branch and langchain 0.0.332:


import hashlib
from typing import Any, List, Mapping, Optional
from gptcache import Cache
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.onnx import OnnxModelEvaluation

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class PromptHashLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "PromptHashLLM"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Return SHA256 hash of the prompt as a string.
        return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {}


onnx = Onnx()
data_manager = get_data_manager(
    CacheBase("sqlite"), VectorBase(f"faiss", dimension=onnx.dimension)
)
cache = Cache()
cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=OnnxModelEvaluation()
)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
uncached_llm = PromptHashLLM()
print(uncached_llm(question))

cached_llm = LangChainLLMs(llm=uncached_llm)
print(cached_llm(question, cache_obj=cache))
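If that guess about get_prompt is right, then wiring it into cache.init in the script above would presumably look like the following, but I haven't been able to verify it, which is why I'm raising the issue:

from gptcache.processor.pre import get_prompt  # assumption, as above

cache.init(
    embedding_func=onnx.to_embeddings,
    data_manager=data_manager,
    similarity_evaluation=OnnxModelEvaluation(),
    pre_embedding_func=get_prompt,  # embed the raw prompt instead of looking for "messages"
)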

Environment

python = "^3.11"
gptcache = { git = "git@github.com:zilliztech/GPTCache.git", branch = "dev" }
onnxruntime = "^1.16.1"
torch = "^2.1.0"
langchain = "^0.0.332"

Anything else?

No response

dwillie, Nov 09 '23

@dwillie I will check it.

SimFG, Nov 09 '23

Thank you @SimFG

dwillie, Nov 09 '23

Any solution to this? I am facing the same error when passing the cached LLM to a RetrievalQA "stuff" chain.


import numpy as np

from gptcache import cache
from gptcache.adapter.langchain_models import LangChainLLMs
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.similarity_evaluation.distance import SearchDistanceEvaluation

from langchain.chains import RetrievalQA
from langchain.llms import LlamaCpp
from langchain.prompts import PromptTemplate

MODEL_TYPE = "GPT4All"
MODEL_PATH = r'C:\Users\komal\Desktop\mages\chatbot\llama-2-7b-chat.Q3_K_M.gguf'
MODEL_N_CTX = 1000
MODEL_N_BATCH = 8
TARGET_SOURCE_CHUNKS = 4

llm = LlamaCpp(model_path=MODEL_PATH, n_ctx=MODEL_N_CTX, n_batch=MODEL_N_BATCH, verbose=False)
d = 8

def mock_embeddings(data, **kwargs):
    return np.random.random((d, )).astype('float32')

# get the content (only the question) from the prompt to cache
def get_content_func(data, **_):
    return data.get("prompt").split("Question")[-1]

cache_base = CacheBase('sqlite')
vector_base = VectorBase('faiss', dimension=d)
data_manager = get_data_manager(cache_base, vector_base)
cache.init(embedding_func=mock_embeddings,
           data_manager=data_manager,
           similarity_evaluation=SearchDistanceEvaluation(),
           )
cached_llm = LangChainLLMs(llm=llm)

# (retriever and template are defined elsewhere and omitted here)

qa = RetrievalQA.from_chain_type(
    llm=cached_llm ,chain_type="stuff", retriever=retriever, return_source_documents=True,
    chain_type_kwargs={
        "prompt": PromptTemplate(
            template=template,
            input_variables=["context", "question"],
        ),
    },
)
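One thing I notice in my own snippet: get_content_func is defined but never passed to cache.init, so presumably it would need to be wired in roughly as below, though I don't know whether that alone fixes the chain case:

# hypothetical wiring of the custom pre-embedding function, so GPTCache reads
# the "prompt" field instead of the default OpenAI-style "messages"
cache.init(embedding_func=mock_embeddings,
           pre_embedding_func=get_content_func,
           data_manager=data_manager,
           similarity_evaluation=SearchDistanceEvaluation(),
           )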

(screenshot of the error attached)

Komal-99, Feb 02 '24

Did you get any solution to this?

theinhumaneme, Mar 21 '24

@theinhumaneme you can use LangChain's built-in cache integration, like:

from langchain.cache import GPTCache
from langchain.globals import set_llm_cache

set_llm_cache(GPTCache(init_gptcache))

more details: https://github.com/zilliztech/GPTCache/issues/585#issuecomment-1972720103
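init_gptcache here is a callback you define yourself; LangChain calls it to initialize a GPTCache Cache per LLM. A minimal sketch of what it could look like (the exact setup in the linked comment may differ):

import hashlib

from gptcache import Cache
from gptcache.manager.factory import manager_factory
from gptcache.processor.pre import get_prompt

def init_gptcache(cache_obj: Cache, llm: str):
    # keep a separate cache per LLM, keyed by a hash of the LLM string (illustrative choice)
    hashed_llm = hashlib.sha256(llm.encode()).hexdigest()
    cache_obj.init(
        pre_embedding_func=get_prompt,  # embed the raw prompt string, not "messages"
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )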

SimFG, Mar 21 '24


Thank you @SimFG

theinhumaneme, Mar 21 '24