
[Feature]: Semantic cache support via GPTCache for the OpenAI Proxy and Python API

Open giyaseddin opened this issue 5 months ago • 1 comments

The Feature

This enhancement extends LiteLLM's semantic caching support by integrating GPTCache with Redis. LiteLLM already interacts with many LLM APIs; incorporating GPTCache would optimize data retrieval and storage, specifically within the OpenAI Proxy and the Python API, giving LiteLLM users a more efficient caching mechanism and better responsiveness. Integration is straightforward: it only requires configuring an in-memory vector database (such as Chroma) for similarity lookups.

https://github.com/zilliztech/GPTCache
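To make the idea concrete, here is a minimal, self-contained sketch of what a semantic cache does: unlike an exact-match cache, it returns a stored response when a new prompt is merely *similar* to a previously seen one. This is an illustrative toy (bag-of-words cosine similarity instead of real embeddings, an in-process list instead of a vector DB like Chroma or Redis); GPTCache's actual API and storage backends differ.

```python
# Toy semantic cache: reuse a cached response when a new prompt is
# "close enough" to an earlier one. Real systems (e.g. GPTCache) use
# embedding models plus a vector database; this sketch uses a
# bag-of-words vector and cosine similarity for illustration only.

import math
from collections import Counter


def _vec(text):
    # Bag-of-words token counts; a real cache would use an embedding model.
    return Counter(text.lower().split())


def _cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0


class SemanticCache:
    def __init__(self, threshold=0.8):
        self.threshold = threshold
        self.entries = []  # list of (prompt_vector, cached_response)

    def get(self, prompt):
        v = _vec(prompt)
        best = max(self.entries, key=lambda e: _cosine(v, e[0]), default=None)
        if best is not None and _cosine(v, best[0]) >= self.threshold:
            return best[1]  # semantic hit: similar prompt already answered
        return None        # miss: caller must query the LLM (and then put())

    def put(self, prompt, response):
        self.entries.append((_vec(prompt), response))


cache = SemanticCache(threshold=0.8)
cache.put("What is the capital of France?", "Paris")
print(cache.get("what is the capital of france"))  # similar phrasing -> "Paris"
print(cache.get("How do I bake bread?"))           # unrelated -> None
```

The cost saving comes from the `get` path: every semantic hit avoids a paid LLM API call, which is why the threshold is the key tuning knob (too low returns wrong answers, too high wastes calls).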

Motivation, pitch

This is a cost-saving feature, and one that is desirable in virtually all LLM use cases.

Twitter / LinkedIn details

No response

giyaseddin · Jan 21 '24 09:01