
DOC: The LLM type given in the Caching section is incorrect. It should be changed from ChatOpenAI to ChatGPT

luyuan1997 opened this issue 1 year ago · 2 comments

Issue with current documentation:

While using ChatOpenAI, I found that even when langchain.llm_cache = True is set, the answer differs each time for the same question, such as "What is LangChain?". Tracing through the source code, I found that ChatOpenAI inherits from BaseChatModel, which does not implement the caching logic. If the LLM type is switched from ChatOpenAI to ChatGPT, caching takes effect. Therefore, using ChatOpenAI in the LangChain documentation example is incorrect, and it should be replaced with ChatGPT. The relevant page is: https://python.langchain.com/docs/modules/model_io/models/chat/how_to/chat_model_caching.
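(For reference, a minimal sketch of the caching setup from the linked docs page; note that the docs assign a cache object such as InMemoryCache to langchain.llm_cache rather than setting it to True. The prompt string is illustrative.)

import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI

# Enable the global LLM cache; the docs assign a cache instance, not the boolean True
langchain.llm_cache = InMemoryCache()

llm = ChatOpenAI()
llm.predict("What is LangChain?")  # first call goes to the OpenAI API
llm.predict("What is LangChain?")  # repeated call should be answered from the cache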

Idea or request for content:

It is suggested to modify the example, replacing ChatOpenAI with ChatGPT: llm = ChatGPT()

luyuan1997 · Jul 04 '23

The base chat model does seem to support caching. Am I missing something here?
https://github.com/hwchase17/langchain/blob/81eebc40702ff676c2f62c42ab4c6732ff794164/langchain/chat_models/base.py

class BaseChatModel(BaseLanguageModel, ABC):
    cache: Optional[bool] = None  # Caching is supported in the base model
    verbose: bool = Field(default_factory=_get_verbosity)
    """Whether to print out response text."""
    callbacks: Callbacks = Field(default=None, exclude=True)
    callback_manager: Optional[BaseCallbackManager] = Field(default=None, exclude=True)
    tags: Optional[List[str]] = Field(default=None, exclude=True)
    """Tags to add to the run trace."""

rjarun8 · Jul 04 '23

Thank you very much for your reply. The new version of BaseChatModel in LangChain already supports caching, and upgrading to it resolved my issue.

luyuan1997 · Jul 10 '23

Hi, @luyuan1997! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue you raised asked for the LangChain documentation's caching section to be updated, changing the LLM type from ChatOpenAI to ChatGPT. It seems the issue has been resolved, as the new version of BaseChatModel in LangChain already supports caching, and you thanked the person who pointed out that support in the base model.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your contribution to LangChain!

dosubot[bot] · Oct 09 '23