
Why does llm_cache only support BaseLLM and not BaseChatModel? I can't use llm_cache with ChatOpenAI; I can only use it with OpenAI.

Open · VVictorWang opened this issue 1 year ago · 1 comment

System Info

Langchain v0.0.200

I want to use GPTCache in my LangChain-based project, but I found that langchain.llm_cache is only supported in BaseLLM and has no support in BaseChatModel. As a result, I can't use llm_cache when using ChatOpenAI; I can only use it with OpenAI.
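For illustration, a minimal sketch of the asymmetry I mean (assumes an OpenAI API key is configured; the prompt and model names are arbitrary):

```python
import langchain
from langchain.cache import InMemoryCache
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import HumanMessage

langchain.llm_cache = InMemoryCache()

# BaseLLM consults langchain.llm_cache, so the second call is a cache hit.
llm = OpenAI(model_name="text-davinci-003")
llm("Tell me a joke")
llm("Tell me a joke")  # served from the cache, no API call

# BaseChatModel never looks at langchain.llm_cache,
# so every call goes to the API.
chat = ChatOpenAI(model_name="gpt-3.5-turbo")
chat([HumanMessage(content="Tell me a joke")])
chat([HumanMessage(content="Tell me a joke")])  # API is called again
```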

Related langchain source code: (screenshots not reproduced here)

Who can help?

No response

Information

  • [X] The official example notebooks/scripts
  • [ ] My own modified scripts

Related Components

  • [X] LLMs/Chat Models
  • [ ] Embedding Models
  • [ ] Prompts / Prompt Templates / Prompt Selectors
  • [ ] Output Parsers
  • [ ] Document Loaders
  • [ ] Vector Stores / Retrievers
  • [ ] Memory
  • [ ] Agents / Agent Executors
  • [ ] Tools / Toolkits
  • [ ] Chains
  • [ ] Callbacks/Tracing
  • [ ] Async

Reproduction

No specific steps; the limitation follows directly from the source code referenced above.

Expected behavior

I want to be able to use llm_cache with ChatOpenAI.

VVictorWang · Jun 15 '23 12:06

Generated by a 🤖

Answer

llm_cache is not supported in BaseChatModel because it was designed specifically for the BaseLLM class, which generates text completions from a single prompt string. BaseChatModel implementations such as ChatOpenAI instead take a sequence of messages as input. Because the cache in BaseLLM keys on a single prompt string, it does not map directly onto the message-based interface, so chat calls are never cached out of the box. If you want caching with ChatOpenAI today, you would need to implement a custom caching mechanism that is compatible with the message structure of BaseChatModel.
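As a starting point, here is one possible approach: a minimal sketch, not part of the langchain API (the `cached_generate` helper and `_chat_cache` dict are hypothetical names), that keys an in-process dict on the serialized message list:

```python
import hashlib
import json

from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

# Hypothetical process-local cache, keyed on the serialized message list.
_chat_cache = {}


def _cache_key(messages):
    """Hash the (role, content) pairs so identical conversations collide."""
    payload = json.dumps([(m.type, m.content) for m in messages])
    return hashlib.sha256(payload.encode()).hexdigest()


def cached_generate(chat_model, messages):
    """Call the chat model only if this exact message list is unseen."""
    key = _cache_key(messages)
    if key not in _chat_cache:
        _chat_cache[key] = chat_model.generate([messages])
    return _chat_cache[key]


chat = ChatOpenAI()
messages = [HumanMessage(content="Tell me a joke")]
cached_generate(chat, messages)  # hits the API
cached_generate(chat, messages)  # served from _chat_cache
```

A production version would also need to fold model parameters (model name, temperature, stop sequences) into the cache key and add an eviction policy.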

This response is meant to be helpful and save users time. It is not meant to be a precise solution.

Please leave a 👍 if this is helpful and 👎 if it is irrelevant

devstein · Jun 16 '23 01:06