
Question about CrewAI compatibility with LangChain caching

Open · ItouTerukazu opened this issue · 1 comment

Description: I'm using CrewAI together with LangChain, and I've noticed that LangChain's caching mechanism doesn't seem to work as expected when combined with CrewAI. I have the following questions:

  1. Is CrewAI designed to be compatible with LangChain's caching mechanism?
  2. Are there any known issues or limitations when using LangChain caching with CrewAI?
  3. Are there any specific configurations or settings required to enable effective caching when using CrewAI?
  4. If caching is not currently supported, are there plans to implement this feature in future versions?

Environment:

  • CrewAI version: 0.35.8
  • LangChain versions:
      • langchain-anthropic: 0.1.13
      • langchain-cohere: 0.1.5
      • langchain-community: 0.0.38
      • langchain-core: 0.1.52
      • langchain-openai: 0.1.7
  • Python version: 3.11.9

Steps to reproduce:

  1. Set up LangChain caching using set_llm_cache(InMemoryCache())
  2. Create a CrewAI instance with agents and tasks
  3. Run the same crew operation twice
  4. Observe that the second run does not use the cache and makes a new LLM call

Expected behavior: The second run should use the cached results from the first run, significantly reducing execution time and avoiding a new LLM call.

Actual behavior: Both runs make separate LLM calls and take similar amounts of time to execute.

Any insights or guidance on this matter would be greatly appreciated. Thank you for your time and assistance!

ItouTerukazu · Jul 08 '24 04:07